This module reads deprecated “bigfile” objects generated by the deprecated mogtool(1) utility. It exists only for reading legacy data and is not recommended for new projects. MogileFS itself can store standalone objects of arbitrary length (as long as the underlying database and the underlying filesystem on the DAV devices accept them).
Returns a big_info hash if successful.
# File lib/mogilefs/bigfile.rb, line 13
def bigfile_stat(key)
  bigfile_parse_info(get_file_data(key))
end
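The shape of the returned big_info hash is not documented here; the sketch below is inferred solely from the fields bigfile_write consumes (:compressed, :parts, and per-part :paths and :md5). The example values are hypothetical and illustrative only.

```ruby
# Hedged sketch of a big_info hash, inferred from the fields that
# bigfile_write reads. Other fields may exist; values are made up.
info = {
  :compressed => false,   # true when mogtool zlib-deflated the parts
  :parts => [
    nil,                  # parts[0] is always empty
    {
      :paths => ["http://192.168.1.2:7500/dev1/0/000/000/0000000001.fid"],
      :md5   => "d41d8cd98f00b204e9800998ecf8427e",
    },
  ],
}

# Mirrors the iteration pattern used by bigfile_write:
info[:parts].each_with_index do |part, part_nr|
  next if part_nr == 0    # skip the empty leading entry
  # each remaining part would be streamed from one of part[:paths]
end
```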
Returns the total bytes written and the big_info hash on success; raises an exception otherwise. wr_io is expected to be an IO-like object that responds to the write method.
# File lib/mogilefs/bigfile.rb, line 20
def bigfile_write(key, wr_io, opts = { :verify => false })
  info = bigfile_stat(key)
  total = 0
  t = @get_file_data_timeout

  # we only decode raw zlib deflated streams that mogtool (unfortunately)
  # generates.  tarballs and gzip(1) are up to the application to decrypt.
  if info[:compressed] || opts[:verify]
    wr_io = MogileFS::Bigfile::Filter.new(wr_io, info, opts)
  end

  info[:parts].each_with_index do |part, part_nr|
    next if part_nr == 0 # info[:parts][0] is always empty

    begin
      sock = MogileFS::HTTPReader.first(part[:paths], t)
    rescue
      # part[:paths] may not be valid anymore due to rebalancing, however we
      # can get_keys on key,<part_nr> and retry paths if all paths fail
      part_key = "#{key.sub(/^_big_info:/, '')},#{part_nr}"
      paths = get_paths(part_key)
      paths.empty? and raise MogileFS::Backend::NoDevices,
                            "no device for key=#{part_key.inspect}", []
      sock = MogileFS::HTTPReader.first(paths, t)
    end
    begin
      w = MogileFS.io.copy_stream(sock, wr_io)
    ensure
      sock.close
    end
    wr_io.respond_to?(:md5_check!) and wr_io.md5_check!(part[:md5])
    total += w
  end
  wr_io.flush
  total += wr_io.flushed_bytes if wr_io.respond_to?(:flushed_bytes)

  [ total, info ]
end
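The rescue branch above rebuilds a per-part key by stripping the "_big_info:" prefix from the bigfile key and appending the part number, then asks the tracker for fresh paths. That string manipulation can be shown standalone (the key below is a hypothetical example, not a real object):

```ruby
# Sketch of the retry-key construction from bigfile_write's rescue branch.
key     = "_big_info:backup.tar"  # hypothetical bigfile key
part_nr = 3

# Strip the "_big_info:" prefix and append the part number:
part_key = "#{key.sub(/^_big_info:/, '')},#{part_nr}"
# part_key => "backup.tar,3"
# bigfile_write then passes part_key to get_paths to look up
# current device paths after a rebalance.
```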