Why do we have an IMG element?

I’d like to propose a new, optional HTML tag:

IMG

Required argument is SRC="url".

This names a bitmap or pixmap file for the browser to attempt to pull over the network and interpret as an image, to be embedded in the text at the point of the tag’s occurrence.

An example is:

<IMG SRC="file://foobar.com/foo/bar/blargh.xbm">

(There is no closing tag; this is just a standalone tag.)

This tag can be embedded in an anchor like anything else; when that happens, it becomes an icon that’s sensitive to activation just like a regular text anchor.

Browsers should be afforded flexibility as to which image formats they support. Xbm and Xpm are good ones to support, for example. If a browser cannot interpret a given format, it can do whatever it wants instead (X Mosaic will pop up a default bitmap as a placeholder).

This is required functionality for X Mosaic; we have this working, and we’ll at least be using it internally. I’m certainly open to suggestions as to how this should be handled within HTML; if you have a better idea than what I’m presenting now, please let me know. I know this is hazy wrt image format, but I don’t see an alternative than to just say “let the browser do what it can” and wait for the perfect solution to come along (MIME, someday, maybe).
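
To illustrate the clickable-icon case Marc describes, an image embedded in an anchor would have looked something like this (a hypothetical sketch; the URLs are invented in the style of his example):

<A HREF="http://foobar.com/foo/bar/blargh.html"><IMG SRC="file://foobar.com/foo/bar/blargh.xbm"></A>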

Xbm and Xpm were popular graphics formats on Unix systems.

“Mosaic” was one of the earliest web browsers. (“X Mosaic” was the version that ran on Unix systems.) When he wrote this message in early 1993, Marc Andreessen had not yet founded the company that made him famous, Mosaic Communications Corporation, nor had he started work on that company’s flagship product, “Mosaic Netscape.” (You may know them better by their later names, “Netscape Corporation” and “Netscape Navigator.”)

“MIME, someday, maybe” is a reference to content negotiation, a feature of HTTP where a client (like a web browser) tells the server (like a web server) what types of resources it supports (like image/jpeg) so the server can return something in the client’s preferred format. The Original HTTP as defined in 1991 (the only version that was implemented in February 1993) did not have a way for clients to tell servers what kind of images they supported, thus the design dilemma that Marc faced.
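
As eventually standardized, content negotiation looks something like this (a sketch in HTTP/1.0 syntax, which did not yet exist when this thread took place; the path is hypothetical):

GET /foo/bar/blargh HTTP/1.0
Accept: image/jpeg, image/gif

HTTP/1.0 200 OK
Content-Type: image/gif

The client lists the formats it can handle; the server picks one it can serve and labels the response accordingly.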

A few hours later, Tony Johnson replied:

I have something very similar in Midas 2.0 (in use here at SLAC, and due for public release any week now), except that all the names are different, and it has an extra argument NAME="name". It has almost exactly the same functionality as your proposed IMG tag. e.g.

<ICON name="NoEntry" href="http://note/foo/bar/NoEntry.xbm">

The idea of the name parameter was to allow the browser to have a set of “built in” images. If the name matches a “built in” image it would use that instead of having to go out and fetch the image. The name could also act as a hint for “line mode” browsers as to what kind of a symbol to put in place of the image.

I don’t much care about the parameter or tag names, but it would be sensible if we used the same things. I don’t much care for abbreviations, ie why not IMAGE= and SOURCE=. I somewhat prefer ICON since it imlies that the IMAGE should be smallish, but maybe ICON is an overloaded word?

Midas was another early web browser, a contemporary of X Mosaic. It was cross-platform; it ran on both Unix and VMS. “SLAC” refers to the Stanford Linear Accelerator Center (now the SLAC National Accelerator Laboratory). SLAC hosted the first web server in the United States (in fact the first web server outside Europe). When Tony wrote this message, SLAC was an old-timer on the WWW, having hosted five pages on their web server for a whopping 441 days.

Tony continued:

While we are on the subject of new tags, I have another, somewhat similar tag, which I would like to support in Midas 2.0. In principle it is:

<INCLUDE HREF="...">

The intention here would be that the second document is to be included into the first document at the place where the tag occured. In principle the referenced document could be anything, but the main purpose was to allow images (in this case arbitrary sized) to be embedded into documents. Again the intention would be that when HTTP2 comes along the format of the included document would be up for separate negotiation.

“HTTP2” is a reference to Basic HTTP as defined in 1992. At this point in early 1993, it was still largely unimplemented. The draft known as “HTTP2” evolved and was eventually standardized as “HTTP 1.0” (albeit not for another three years). HTTP 1.0 did include request headers for content negotiation, a.k.a. “MIME, someday, maybe.”

Tony continued:

An alternative I was considering was:

<A HREF="..." INCLUDE>See photo</A>

I don’t much like adding more functionality to the <A> tag, but the idea here is to maintain compatibility with browsers that can not honour the INCLUDE parameter. The intention is that browsers which do understand INCLUDE, replace the anchor text (in this case “See photo”) with the included document (picture), while older or dumber browsers ignore the INCLUDE tag completely.

This proposal was never implemented, although the idea of text-if-an-image-is-missing is an important accessibility technique which was missing from Marc’s initial <IMG> proposal. Many years later, this feature was bolted on as the <img alt> attribute, which Netscape promptly broke by erroneously treating it as a tooltip.
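
In modern HTML, Tony’s fallback text lives in the alt attribute (a minimal sketch; the file name is hypothetical):

<img src="photo.jpg" alt="See photo">

Browsers that cannot display the image (and users who cannot see it) get the text instead; the tooltip behavior Netscape bolted on properly belongs to the separate title attribute.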

A few hours after that, Tim Berners-Lee responded:

I had imagined that figues would be reprented as

<a name=fig1 href="fghjkdfghj" REL="EMBED, PRESENT">Figure </a>

where the relation ship values mean

EMBED	 Embed this here when presenting it
PRESENT	 Present this whenever the source document is presented

Note that you can have various combinations of these, and if the browser doesn’t support either one, it doesn’t break.

[I] see that using this as a method for selectable icons means nesting anchors. Hmmm. But I hadn’t wanted a special tag.

This proposal was never implemented, but the rel attribute is still around.
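
For instance, rel still names the relationship between a document and the resource it links to (a sketch):

<link rel="stylesheet" href="style.css">
<a href="archive.html" rel="prev">Previous entry</a>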

Jim Davis added:

It would be nice if there was a way to specify the content type, e.g.

<IMG HREF="http://nsa.gov/pub/sounds/gorby.au" CONTENT-TYPE=audio/basic>

But I am completely willing to live with the requirement that I specify the content type by file extension.

This proposal was never implemented, but Netscape did later add arbitrary embedding of media objects with the <embed> element.
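
Jim’s example maps almost directly onto that later element (a hypothetical adaptation; the URL is taken from his message, and HTML5’s type attribute stands in for CONTENT-TYPE):

<embed src="http://nsa.gov/pub/sounds/gorby.au" type="audio/basic">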

Jay C. Weber asked:

While images are at the top of my list of desired medium types in a WWW browser, I don’t think we should add idiosyncratic hooks for media one at a time. Whatever happened to the enthusiasm for using the MIME typing mechanism?

Marc Andreessen replied:

This isn’t a substitute for the upcoming use of MIME as a standard document mechanism; this provides a necessary and simple implementation of functionality that’s needed independently from MIME.

Jay C. Weber responded:

Let’s temporarily forget about MIME, if it clouds the issue. My objection was to the discussion of “how are we going to support embedded images” rather than “how are we going to support embedded objections in various media”.

Otherwise, next week someone is going to suggest ‘lets put in a new tag <AUD SRC="file://foobar.com/foo/bar/blargh.snd">’ for audio.

There shouldn’t be much cost in going with something that generalizes.

With the benefit of hindsight, it appears that Jay’s concerns were well-founded. It took a little more than a week, but HTML5 did finally add new <video> and <audio> elements.
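
And they look uncannily like the tag Jay parodied (a sketch with hypothetical file names):

<audio src="blargh.snd" controls></audio>
<video src="blargh.webm" controls></video>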

Responding to Jay’s original message, Dave Raggett said:

True indeed! I want to consider a whole range of possible image/line art types, along with the possibility of format negotiation. Tim’s note on supporting clickable areas within images is also important.

Later in 1993, Dave Raggett proposed HTML+ as an evolution of the HTML standard. The proposal was never implemented, and it was superseded by HTML 2.0. HTML 2.0 was a “retro-spec,” which means it formalized features already in common use. “This specification brings together, clarifies, and formalizes a set of features that roughly corresponds to the capabilities of HTML in common use prior to June 1994.”

Dave later wrote HTML 3.0, based on his earlier HTML+ draft. HTML 3.0 was also never implemented (outside of the W3C’s own reference implementation, Arena), and it was superseded by HTML 3.2. HTML 3.2 was also a “retro-spec” — “HTML 3.2 adds widely deployed features such as tables, applets and text flow around images, while providing full backwards compatibility with the existing standard HTML 2.0.”

Dave later co-authored HTML 4.0 and developed HTML Tidy, and went on to help with XHTML, XForms, MathML, and other modern W3C specifications.

Getting back to 1993, Marc replied to Dave:

Actually, maybe we should think about a general-purpose procedural graphics language within which we can embed arbitrary hyperlinks attached to icons, images, or text, or anything. Has anyone else seen Intermedia’s capabilities wrt this?

Intermedia was a hypertext project from Brown University. It was developed from 1985 to 1991 and ran on A/UX, a Unix operating system for early Macintosh computers.

The idea of a “general-purpose procedural graphics language” did eventually catch on. Modern browsers support both SVG (declarative markup with embedded scripting) and <canvas> (a procedural, immediate-mode graphics API), although the latter started as a proprietary extension before being “retro-specced” by the WHATWG.
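
The declarative/procedural split looks something like this (a minimal sketch drawing the same circle both ways):

<svg width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="none" stroke="black"/>
</svg>

<canvas id="c" width="100" height="100"></canvas>
<script>
  // The same circle, issued as drawing commands instead of markup.
  var ctx = document.getElementById("c").getContext("2d");
  ctx.beginPath();
  ctx.arc(50, 50, 40, 0, 2 * Math.PI);
  ctx.stroke();
</script>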

Bill Janssen replied:

Other systems to look at which have this (fairly valuable) notion are Andrew and Slate. Andrew is built with _insets_, each of which has some interesting type, such as text, bitmap, drawing, animation, message, spreadsheet, etc. The notion of arbitrary recursive embedding is present, so that an inset of any kind can be embedded in any other kind which supports embedding. For example, an inset can be embedded at any point in the text of the text widget, or in any rectangular area in the drawing widget, or in any cell of the spreadsheet.

“Andrew” is a reference to the Andrew User Interface System (although at that time it was simply known as the Andrew Project).

Meanwhile, Thomas Fine had a different idea:

Here’s my opinion. The best way to do images in WWW is by using MIME. I’m sure postscript is already a supported subtype in MIME, and it deals very nicely with mixing text and graphics.

But it isn’t clickable, you say? Yes your right. I suspect there is already an answer to this in display postscript. Even if there isn’t the addition to standard postscript is trivial. Define an anchor command which specifies the URL and uses the current path as a closed region for the button. Since postscript deals so well with paths, this makes arbitrary button shapes trivial.

Display PostScript was an on-screen rendering technology co-developed by Adobe and NeXT.

This proposal was never implemented, but the idea that the best way to fix HTML is to replace it with something else altogether still pops up from time to time.

Tim Berners-Lee, March 2, 1993:

HTTP2 allows a document to contain any type which the user has said he can handle, not just registered MIME types. So one can experiment. Yes I think there is a case for postscript with hypertext. I don’t know whether display postcript has enough. I know Adobe are trying to establish their own postscript-based “PDF” which will have links, and be readable by their proprietory brand of viewers.

I thought that a generic overlaying language for anchors (Hytime based?) would allow the hypertext and the graphics/video standards to evolve separately, which would help both.

Let the IMG tag be INCLUDE and let it refer to an arbitrary document type. Or EMBED if INCLUDE sounds like a cpp include which people will expect to provide SGML source code to be parsed inline — not what was intended.

HyTime was an early, SGML-based hypertext document system. It loomed large in many early discussions of HTML, and later XML.

Tim’s proposal for an <INCLUDE> tag was never implemented, although you can see echoes of it in <object>, <embed>, and the <iframe> element.
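
Of those, <object> even preserves Tony’s fallback idea: a browser that cannot render the referenced resource shows the nested content instead (a sketch with a hypothetical file name):

<object data="figure.svg" type="image/svg+xml">See photo</object>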

Finally, on March 12, 1993, Marc Andreessen revisited the thread:

Back to the inlined image thread again — I’m getting close to releasing Mosaic v0.10, which will support inlined GIF and XBM images/bitmaps, as mentioned previously. …

We’re not prepared to support INCLUDE/EMBED at this point. … So we’re probably going to go with <IMG SRC="url"> (not ICON, since not all inlined images can be meaningfully called icons). For the time being, inlined images won’t be explicitly content-type’d; down the road, we plan to support that (along with the general adaptation of MIME). Actually, the image reading routines we’re currently using figure out the image format on the fly, so the filename extension won’t even be significant.

I don’t really know why I wrote this. It wasn’t what I set out to write. That happens. But I am extraordinarily fascinated with all aspects of this almost-17-year-old conversation. Consider:

  • HTTP still exists. HTTP successfully evolved from 0.9 into 1.0 and later 1.1. And still it evolves.
  • HTML still exists. That rudimentary data format — it didn’t even support inline images! — successfully evolved into 2.0, 3.2, 4.0. And still it, too, evolves. HTML is an unbroken line. A twisted, knotted, snarled line, to be sure. There were plenty of “dead branches” in the evolutionary tree, places where standards-minded people got ahead of themselves (and ahead of authors and implementors). But still. Here we are, in 2009, and web pages from 1990 still render in modern browsers. I just loaded one up on my Android phone, and I didn’t even get prompted to “please wait while importing legacy format…”
  • HTML has always been a conversation between browser makers, authors, standards wonks, and other people who just showed up and liked to talk about angle brackets. Most of the successful versions of HTML have been “retro-specs,” catching up to the world while simultaneously trying to nudge it in the right direction. Anyone who tells you that HTML should be kept “pure” (presumably by ignoring browser makers, or ignoring authors, or both) is simply misinformed. HTML has never been pure, and all attempts to purify it have been spectacular failures, matched only by the attempts to replace it.
  • None of the browsers from 1993 still exist in any recognizable form. Netscape Navigator was abandoned in 1998 and rewritten from scratch to create the Mozilla Suite, which was then forked to create Firefox. Internet Explorer had its humble “beginnings” in “Microsoft Plus! for Windows 95,” where it was bundled with some desktop themes and a pinball game. (But of course that browser can be traced back further too.)
  • Some of the operating systems from 1993 still exist, but none of them are relevant to the modern web. Most people today who “experience” the web do so on a PC running Windows 2000 or later, a Mac running Mac OS X, a PC running some flavor of Linux, or a handheld device like an iPhone. In 1993, Windows was at version 3.1 (and competing with OS/2), Macs were running System 7, and Linux was distributed via Usenet. (Want to have some fun? Find a graybeard and whisper “Trumpet Winsock” or “MacPPP.”)
  • Some of the same people are still around and still involved in what we now simply call “web standards.” That’s after almost 20 years. And some were involved in predecessors of HTML, going back into the 1980s and before.
  • Speaking of predecessors… With the eventual popularity of HTML and the web, it is easy to forget the contemporary formats and systems that informed its design. Andrew? Intermedia? HyTime? And HyTime was not some rinky-dink academic research project; it was an ISO standard. It was approved for military use. It was Big Business. And you can read about it yourself… on this HTML page, in your web browser.

But none of this answers the original question: why do we have an <img> element? Why not an <icon> element? Or an <include> element? Why not a hyperlink with an include attribute, or some combination of rel values? Why an <img> element? Quite simply, because Marc Andreessen shipped one, and shipping code wins.

That’s not to say that all shipping code wins; after all, Andrew and Intermedia and HyTime shipped code too. Code is necessary but not sufficient for success. And I certainly don’t mean to say that shipping code before a standard will produce the best solution. Marc’s <img> element didn’t mandate a common graphics format; it didn’t define how text flowed around it; it didn’t support text alternatives or fallback content for older browsers. And 16, almost 17 years later, we’re still struggling with content sniffing, and it’s still a source of crazy security vulnerabilities. And you can trace that all the way back, 17 years, through the Great Browser Wars, all the way back to February 25, 1993, when Marc Andreessen offhandedly remarked, “MIME, someday, maybe,” and then shipped his code anyway.

The ones that win are the ones that ship.

§

Fifty-five comments here

  1. “March 12, 2003”? I hope you’re ten years off.

    — Jesper #

  2. Bah. Fixed.

    — Mark #

  3. Some of the operating systems from 1993 still exist, but none of them are relevant to the modern web. Most people today who “experience” the web do so on a PC running Windows 2000 or later, a Mac running Mac OS X.

    Actually, MacOSX carries a huge legacy from NextStep (greater, many would say, than its legacy from Classic MacOS). NextStep was not only around in 1993, it was the OS on which the Web was invented.

    — Jacques Distler #

  4. Terrific post. I particularly like the reminder that HTML is not utopian. There is no YEAR ZERO for HTML. It’s a conversation:

    HTML has always been a conversation between browser makers, authors, standards wonks, and other people who just showed up and liked to talk about angle brackets. Most of the successful versions of HTML have been “retro-specs,” catching up to the world while simultaneously trying to nudge it in the right direction. Anyone who tells you that HTML should be kept “pure” (presumably by ignoring browser makers, or ignoring authors, or both) is simply misinformed. HTML has never been pure, and all attempts to purify it have been spectacular failures, matched only by the attempts to replace it.

    — Joe Crawford #

  5. Who are you calling a graybeard?

    As it happens, I now live right around the corner from the old MacSLIP office, where I picked up my copy mumblemumble years ago.

    — Adam Rice #

  6. IMG tags – Ataxia (pingback)
  7. Xbm and Xpm were popular graphics formats on Unix systems. They are still supported in modern browsers. (I tested in Internet Explorer 8 and Google Chrome.)

    IE8 is not a modern browser.

    — Rob #

  8. I’d always had hope for SMIL catching on someday. Alas.

    — Anil #

  9. Awesome post. Thanks for the interesting history.

    — Andy #

  10. Shipping doesn’t mean you win, but not shipping means you lose.

    — Steve #

  11. Great post, definitely interesting to read, considering I remember ‘craving’ Netscape Navigator to start learning about the web and HTML – this was when I was 10 or 11, and I remember Netscape Navigator being bundled with a book that taught you web design – wow how times have changed!

    — Jason #

  12. A/UX is a Unix.

    — rshxd #

  13. Greybeard!? I’m under 30 and I remember (not fondly, I assure you) fighting with both Trumpet Winsock *and* MacPPP!

    — Brian #

  14. Great article. It’s great to take a look back and see how far we have come, though that include tag would have been handy.

    — Ryan #

  15. love this. i hope i can contribute well to the future.

    — typesett #

  16. To me, standards work has always had an air of self-importance it didn’t deserve, but after reading this, I see more of a fun game of trying things and seeing what sticks.

    — Thomas #

  17. It’s the Stanford linear accelerator, not the “Standard” linear accelerator…

    — miguel #

  18. Very nice post. It’s fun to think about a world in which the cement we take for granted now is as pliable as our current ideas.

    — Adam Barth #

  19. Also under 30, and just had a flash of nostalgia about the sheer feeling of triumph when MacPPP would successfully connect my old Performa (through my Global Village TelePort!) to Earthlink (having weaned my family off AOL.)

    Connecting to Wi-Fi just isn’t the same.

    — Len #

  20. So, you are saying that if Microsoft ships support for InfoPath processing instructions and proprietary elements in IE we’ll have them all over pretty soon?

    — Peter #

  21. I know my Google-fu may not be very good, but I was unable to find support for XPM in any browser that I had heard of. As for XBM, support was removed from Firefox 3.next on the 29th of July, and you apparently need a registry edit to enable it in IE, although I was unable to get that to work.

    — Neil #

  22. Interesting post. Loved the conversation about the formats. Kinda applies to the HTML5 video/audio tag discussions, doesn’t it?

    — Rakesh Pai #

  23. X Mosaic (also known as NCSA Mosaic) was not renamed to Netscape Navigator. While Marc Andreessen was involved in both, and they had a similar name at one point, NCSA Mosaic and Netscape Navigator are usually considered two different browsers.

    — HK #

  24. All those malformed snippets make me wince in a post-XML world. I suppose at the time, element and attribute casing & quoting were fairly arbitrary.
    Tag-soup (aka street HTML) seems to have been very acceptable in these discussions, rather than just the later consequence of tolerant browser parsing.

    — James Pearce #

  25. NCSA X Mosaic and Netscape are unrelated browsers.
    Many people also make the same mistake about Spyglass Mosaic and NCSA Mosaic – including Wikipedia: http://upload.wikimedia.org/wikipedia/commons/7/74/Timeline_of_web_browsers.svg

    “Yes, we licensed the technology and trademarks from NCSA (at the University of Illinois), but we never used any of the code.”
    - Eric Sink, Spyglass Mosaic project lead.

    — RichB #

  26. Can I call you Gandalf and build a shrine? Thanks a lot for writing such excellent posts.

    — Divya #

  27. Fantastic history! (now on reddit homepage).

    — Yvo #

  28. Great read. Your writing continues to draw me in: Dive Into Mark/Python/HTML5 and your WHATWG posts are deliciously packed with information and links. Cheers.

    — Adam Backstrom #

  29. I hate content sniffing with an unending passion (and not just on the web, mind you; file(1) and magic(4) are the spawn of Satan). The idea that one can rely on figuring out the type/format of a file by examination of the byte stream is one of the worst mistakes ever made in the history of computing, in my opinion (it’s unclear whether that or file typing by extension is actually the worst). For me, IE actually deserves those vulnerabilities.

    — Pierre Lebeaupin #

  30. That was a great trip back in time. Now can you explain why we have DIV tags? I’d like to eventually kill them off -> http://www.russellheimlich.com/blog/death-to-the-div/

    — Russell Heimlich #

  31. Great post. Thanks.

    — KT #

  32. I’m 25 and I think I’m at the cutoff of the youngest people who remember trumpet winsock. Anyway, I’m concerned your notion of “shipping code wins” is an endorsement to Microsoft’s policy of doing whatever the heck they want and standardizing it later by killing the competition, or taking over the standards body. A standard without a good reference implementation loses. Not shipping the code loses. But shipping alone is insufficient.

    — Z.T. #

  33. Z.T.: I believe “shipping code wins” was meant to be a conclusion based on empirically observed facts. I don’t think Mark was advocating shipping whatever the hell you want, standards be damned.

    — Karl von L. #

  34. Great post, Mark,

    Re: James Pearce’s comment:

    All those malformed snippets wake me wince in a post-XML world. I suppose at the time, element and attribute casing & quoting were fairly arbitrary.

    Not so much arbitrary as “up to the individual,” because well-formed SGML allows for that: more flexibility and variety in the markup than XML.

    Compared with XML, with SGML, there was more concern that markup should be able to adapt to individual content and authoring idiosyncrasies. And, pre-web, there wasn’t much hindsight about what happens when tens of thousands of people are authoring in the same few markup syntaxes and expecting compatible output from many general purpose parsers / reader-apps.

    XML has more of an idea about markup conformity, e.g., to make for a simpler DOM that’s easier to parse and work with.

    — Jay Fienberg #

  35. Random comment: Gecko no longer supports XBM, I’m not sure whether Safari does, and Chrome has a bug on file for removing that support soon.

    — Peter Kasting #

  36. 23, hardly any gray hairs at all, and already used MacPPP ;-) Great post nevertheless, and to be honest, I feel almost flattered to be called a “graybeard”.

    — Stephan #

  37. Nice article, but a few corrections:

    Andreessen’s browser was just called “Mosaic”, or officially “NCSA Mosaic” since it was developed at the National Center for Supercomputing Applications at the University of Illinois. “X Mosaic” just refers to the X-Windows implementation; there were also Mac Mosaic and Windows Mosaic.

    The company Andreessen co-founded was called Mosaic and the new browser he wrote was called Netscape. But before they shipped, NCSA blocked them from using that company name, so they renamed the company Netscape and disambiguated the browser by calling it Netscape Navigator.

    Firefox doesn’t directly descend from Netscape Navigator. The whole point of the Mozilla project that resulted in Firefox was to throw out the awful Navigator source code and rewrite everything.

    — Anonymous #

  38. Just want to say thank you for a great article!

    — Chris L #

  39. OK, I think I’ve fixed all of the Mosaic/Netscape/Mozilla/Firefox lineage bits. Sorry for the confusion.

    — Mark #

  40. “Real artists ship.” –Steve Jobs

    — Alderete #

  41. Woohoo! Always fun to see mentions of Intermedia. I was the principal engineer who implemented the structured graphics component of Intermedia that included graphical anchors and hypertext links.

    I believe the key innovation of HTTP and HTML was a more-or-less open spec that gave anyone the ability to set up their own servers. In many ways HTML is the ultimate example of the “worse is better” principle.

    Anyways, Intermedia shipped, but it was bound up in a LAN-based networking model, and using A/UX made it really hard to distribute widely.

    — Charlie Evett #

  42. The latest WebKit nightly running on Snow Leopard supports XBM.

    — drew #

  43. Opera 10 still supports XBM.

    — Shmuel #

  44. Insightful article documenting some of the early discussions around HTML. Many thanks for taking the time to put this together!!

    — Jon Jackson #

  45. Sometimes crap things that deserve to die get lucky. HTML is one of those things.

    — Mike #

  46. @anonymous> “The company Andreessen co-founded was called Mosaic and the new browser he wrote was called Netscape. But before they shipped, NCSA blocked them from using that company name,”

    IIRC, MCC shipped Netscape late summer 1994 with the Mosaic Communications Corp company name, but it only became Netscape with the 1.1N version (which used the Netscape logo/throbber icon from a competition).

    I wish I could purge this useless crap from my brain and remember useful things instead.

    — RichB #

  47. Oh wow this post has brought up so many memories!
    I can barely remember MacSLIP; I remember MacPPP being crotchety; and I remember FreePPP (and its monkey) being an awesome replacement eventually.

    — Ryan Jay Schotte #

  48. I agree with your sentiments – getting it done is sometimes more important than getting it “right”.

    — Ben #

  49. “Some of the operating systems from 1993 still exist, but none of them are relevant to the modern web.” … “In 1993, [...] Linux was distributed via Usenet”.

    Slackware 1.0 was released in 1993, and distributed over FTP. (I’m sure it was also distributed over USENET, because everything was distributed over USENET, and still is.) The path from 1993 Linux to 2009 Linux is at least as direct as the path from 1993 HTML to 2009 HTML.

    — Slacker #

  50. Oh man, MacPPP, those were the days, Zipping into The Dorsai Embassy in Long Island City, getting excited over that hot new 14.4k modem to replace the ol’ 2400 bad boy… Nostalgia!

    — Justin D #

  51. Man, I loved reading this. Thank you!

    — John Walker #

  52. The real lesson here is this:

    HTML and HTTP were hobby projects for geniuses working at particle accelerators.

    Full time, salaried positions for thinking about inane alt text features of pictures on the web is a secondary, or even tertiary effect of a global asset bubble that is in (initial) process of popping.

    As we cross the event horizon of real deflation, most of you folks would be well advised to learn a trade of some kind. Unless you’re working at the accelerator.

    — Graybeard #

  53. As we cross the event horizon of real deflation, most of you folks would be well advised to learn a trade of some kind. Unless you’re working at the accelerator.

    If we are, indeed, facing such an economic calamity, I can assure you that high energy physics (“the accelerator”) will be the first thing on the chopping block. In such an eventuality, what “most of you folks” have to fear is a flood of very smart ex-physicists seeking IT jobs.

    Full disclosure: I’m a high energy physicist, for whom this stuff is, indeed, a “hobby project”.

    — Jacques Distler #

  54. This was an awesome read. Thanks!

    — Laurie #

  55. Historia del tag “img” : Dorsumi - D-news - Dorsuminews (pingback)
from data import COCODetection, get_label_map, MEANS, COLORS from yolact import Yolact from utils.augmentations import BaseTransform, FastBaseTransform, Resize from utils.functions import MovingAverage, ProgressBar from layers.box_utils import jaccard, center_size, mask_iou from utils import timer from utils.functions import SavePath from layers.output_utils import postprocess, undo_image_transformation import pycocotools from data import cfg, set_cfg, set_dataset import numpy as np import torch import torch.backends.cudnn as cudnn from torch.autograd import Variable import argparse import time import random import cProfile import pickle import json import os from collections import defaultdict from pathlib import Path from collections import OrderedDict from PIL import Image import matplotlib.pyplot as plt import cv2 def str2bool(v): if v.lower() in ('yes', 'true', 't', 'y', '1'): return True elif v.lower() in ('no', 'false', 'f', 'n', '0'): return False else: raise argparse.ArgumentTypeError('Boolean value expected.') def parse_args(argv=None): parser = argparse.ArgumentParser( description='YOLACT COCO Evaluation') parser.add_argument('--trained_model', default='weights/yolact_base_105_101798_interrupt.pth', type=str, help='Trained state_dict file path to open. If "interrupt", this will open the interrupt file.') parser.add_argument('--top_k', default=5, type=int, help='Further restrict the number of predictions to parse') parser.add_argument('--cuda', default=True, type=str2bool, help='Use cuda to evaulate model') parser.add_argument('--fast_nms', default=True, type=str2bool, help='Whether to use a faster, but not entirely correct version of NMS.') parser.add_argument('--cross_class_nms', default=False, type=str2bool, help='Whether compute NMS cross-class or per-class.') parser.add_argument('--display_masks', default=True, type=str2bool, help='Whether or not to display masks over bounding boxes') parser.add_argument('--display_bboxes', default=True, type=str2bool, help='Whether or not to display bboxes around masks') parser.add_argument('--display_text', default=True, type=str2bool, help='Whether or not to display text (class [score])') parser.add_argument('--display_scores', default=True, type=str2bool, help='Whether or not to display scores in addition to classes') parser.add_argument('--display', dest='display', action='store_true', help='Display qualitative results instead of quantitative ones.') parser.add_argument('--shuffle', dest='shuffle', action='store_true', help='Shuffles the images when displaying them. Doesn\'t have much of an effect when display is off though.') parser.add_argument('--ap_data_file', default='results/ap_data.pkl', type=str, help='In quantitative mode, the file to save detections before calculating mAP.') parser.add_argument('--resume', dest='resume', action='store_true', help='If display not set, this resumes mAP calculations from the ap_data_file.') parser.add_argument('--max_images', default=-1, type=int, help='The maximum number of images from the dataset to consider. 
Use -1 for all.') parser.add_argument('--output_coco_json', dest='output_coco_json', action='store_true', help='If display is not set, instead of processing IoU values, this just dumps detections into the coco json file.') parser.add_argument('--bbox_det_file', default='results/bbox_detections.json', type=str, help='The output file for coco bbox results if --coco_results is set.') parser.add_argument('--mask_det_file', default='results/mask_detections.json', type=str, help='The output file for coco mask results if --coco_results is set.') parser.add_argument('--config', default=None, help='The config object to use.') parser.add_argument('--output_web_json', dest='output_web_json', action='store_true', help='If display is not set, instead of processing IoU values, this dumps detections for usage with the detections viewer web thingy.') parser.add_argument('--web_det_path', default='web/dets/', type=str, help='If output_web_json is set, this is the path to dump detections into.') parser.add_argument('--no_bar', dest='no_bar', action='store_true', help='Do not output the status bar. This is useful for when piping to a file.') parser.add_argument('--display_lincomb', default=False, type=str2bool, help='If the config uses lincomb masks, output a visualization of how those masks are created.') parser.add_argument('--benchmark', default=False, dest='benchmark', action='store_true', help='Equivalent to running display mode but without displaying an image.') parser.add_argument('--no_sort', default=False, dest='no_sort', action='store_true', help='Do not sort images by hashed image ID.') parser.add_argument('--seed', default=None, type=int, help='The seed to pass into random.seed. Note: this is only really for the shuffle and does not (I think) affect cuda stuff.') parser.add_argument('--mask_proto_debug', default=False, dest='mask_proto_debug', action='store_true', help='Outputs stuff for scripts/compute_mask.py.') parser.add_argument('--no_crop', default=False, dest='crop', action='store_false', help='Do not crop output masks with the predicted bounding box.') parser.add_argument('--image', default=None, type=str, help='A path to an image to use for display.') parser.add_argument('--images', default='E:/yolact-master/coco/images/train2017', type=str, help='Input and output paths separated by a colon.') parser.add_argument('--video', default=None, type=str, help='A path to a video to evaluate on. Passing in a number will use that index webcam.') parser.add_argument('--video_multiframe', default=1, type=int, help='The number of frames to evaluate in parallel to make videos play at higher fps.') parser.add_argument('--score_threshold', default=0.15, type=float, help='Detections with a score under this threshold will not be considered. This currently only works in display mode.') parser.add_argument('--dataset', default=None, type=str, help='If specified, override the dataset specified in the config with this one (example: coco2017_dataset).') parser.add_argument('--detect', default=False, dest='detect', action='store_true', help='Don\'t evauluate the mask branch at all and only do object detection. 
This only works for --display and --benchmark.') parser.add_argument('--display_fps', default=False, dest='display_fps', action='store_true', help='When displaying / saving video, draw the FPS on the frame') parser.add_argument('--emulate_playback', default=False, dest='emulate_playback', action='store_true', help='When saving a video, emulate the framerate that you\'d get running in real-time mode.') parser.set_defaults(no_bar=False, display=False, resume=False, output_coco_json=False, output_web_json=False, shuffle=False, benchmark=False, no_sort=False, no_hash=False, mask_proto_debug=False, crop=True, detect=False, display_fps=False, emulate_playback=False) global args args = parser.parse_args(argv) if args.output_web_json: args.output_coco_json = True if args.seed is not None: random.seed(args.seed) iou_thresholds = [x / 100 for x in range(50, 100, 5)] coco_cats = {} # Call prep_coco_cats to fill this coco_cats_inv = {} color_cache = defaultdict(lambda: {}) def prep_display(dets_out, img, h, w, undo_transform=True, class_color=False, mask_alpha=0.45, fps_str=''): """ Note: If undo_transform=False then im_h and im_w are allowed to be None. """ if undo_transform: img_numpy = undo_image_transformation(img, w, h) img_gpu = torch.Tensor(img_numpy).cuda() else: img_gpu = img / 255.0 h, w, _ = img.shape with timer.env('Postprocess'): save = cfg.rescore_bbox cfg.rescore_bbox = True t = postprocess(dets_out, w, h, visualize_lincomb = args.display_lincomb, crop_masks = args.crop, score_threshold = args.score_threshold) cfg.rescore_bbox = save with timer.env('Copy'): idx = t[1].argsort(0, descending=True)[:args.top_k] if cfg.eval_mask_branch: # Masks are drawn on the GPU, so don't copy masks = t[3][idx] classes, scores, boxes = [x[idx].cpu().numpy() for x in t[:3]] num_dets_to_consider = min(args.top_k, classes.shape[0]) for j in range(num_dets_to_consider): if scores[j] < args.score_threshold: num_dets_to_consider = j break # Quick and dirty lambda for selecting the color for a particular index # Also keeps track of a per-gpu color cache for maximum speed def get_color(j, on_gpu=None): global color_cache color_idx = (classes[j] * 5 if class_color else j * 5) % len(COLORS) if on_gpu is not None and color_idx in color_cache[on_gpu]: return color_cache[on_gpu][color_idx] else: color = COLORS[color_idx] if not undo_transform: # The image might come in as RGB or BRG, depending color = (color[2], color[1], color[0]) if on_gpu is not None: color = torch.Tensor(color).to(on_gpu).float() / 255. color_cache[on_gpu][color_idx] = color return color # First, draw the masks on the GPU where we can do it really fast # Beware: very fast but possibly unintelligible mask-drawing code ahead # I wish I had access to OpenGL or Vulkan but alas, I guess Pytorch tensor operations will have to suffice if args.display_masks and cfg.eval_mask_branch and num_dets_to_consider > 0: # After this, mask is of size [num_dets, h, w, 1] masks = masks[:num_dets_to_consider, :, :, None] # Prepare the RGB images for each mask given their color (size [num_dets, h, w, 1]) colors = torch.cat([get_color(j, on_gpu=img_gpu.device.index).view(1, 1, 1, 3) for j in range(num_dets_to_consider)], dim=0) masks_color = masks.repeat(1, 1, 1, 3) * colors * mask_alpha # This is 1 everywhere except for 1-mask_alpha where the mask is inv_alph_masks = masks * (-mask_alpha) + 1 # I did the math for this on pen and paper. 
This whole block should be equivalent to: # for j in range(num_dets_to_consider): # img_gpu = img_gpu * inv_alph_masks[j] + masks_color[j] masks_color_summand = masks_color[0] if num_dets_to_consider > 1: inv_alph_cumul = inv_alph_masks[:(num_dets_to_consider-1)].cumprod(dim=0) masks_color_cumul = masks_color[1:] * inv_alph_cumul masks_color_summand += masks_color_cumul.sum(dim=0) img_gpu = img_gpu * inv_alph_masks.prod(dim=0) + masks_color_summand if args.display_fps: # Draw the box for the fps on the GPU font_face = cv2.FONT_HERSHEY_DUPLEX font_scale = 0.6 font_thickness = 1 text_w, text_h = cv2.getTextSize(fps_str, font_face, font_scale, font_thickness)[0] img_gpu[0:text_h+8, 0:text_w+8] *= 0.6 # 1 - Box alpha # Then draw the stuff that needs to be done on the cpu # Note, make sure this is a uint8 tensor or opencv will not anti alias text for whatever reason img_numpy = (img_gpu * 255).byte().cpu().numpy() if args.display_fps: # Draw the text on the CPU text_pt = (4, text_h + 2) text_color = [255, 255, 255] cv2.putText(img_numpy, fps_str, text_pt, font_face, font_scale, text_color, font_thickness, cv2.LINE_AA) if num_dets_to_consider == 0: return img_numpy if args.display_text or args.display_bboxes: for j in reversed(range(num_dets_to_consider)): x1, y1, x2, y2 = boxes[j, :] color = get_color(j) score = scores[j] if args.display_bboxes: cv2.rectangle(img_numpy, (x1, y1), (x2, y2), color, 1) if args.display_text: _class = cfg.dataset.class_names[classes[j]] text_str = '%s: %.2f' % (_class, score) if args.display_scores else _class font_face = cv2.FONT_HERSHEY_DUPLEX font_scale = 0.6 font_thickness = 1 text_w, text_h = cv2.getTextSize(text_str, font_face, font_scale, font_thickness)[0] text_pt = (x1, y1 - 3) text_color = [255, 255, 255] cv2.rectangle(img_numpy, (x1, y1), (x1 + text_w, y1 - text_h - 4), color, -1) cv2.putText(img_numpy, text_str, text_pt, font_face, font_scale, text_color, font_thickness, cv2.LINE_AA) return img_numpy def prep_benchmark(dets_out, h, w): with timer.env('Postprocess'): t = postprocess(dets_out, w, h, crop_masks=args.crop, score_threshold=args.score_threshold) with timer.env('Copy'): classes, scores, boxes, masks = [x[:args.top_k] for x in t] if isinstance(scores, list): box_scores = scores[0].cpu().numpy() mask_scores = scores[1].cpu().numpy() else: scores = scores.cpu().numpy() classes = classes.cpu().numpy() boxes = boxes.cpu().numpy() masks = masks.cpu().numpy() with timer.env('Sync'): # Just in case torch.cuda.synchronize() def prep_coco_cats(): """ Prepare inverted table for category id lookup given a coco cats object. 
""" for coco_cat_id, transformed_cat_id_p1 in get_label_map().items(): transformed_cat_id = transformed_cat_id_p1 - 1 coco_cats[transformed_cat_id] = coco_cat_id coco_cats_inv[coco_cat_id] = transformed_cat_id def get_coco_cat(transformed_cat_id): """ transformed_cat_id is [0,80) as indices in cfg.dataset.class_names """ return coco_cats[transformed_cat_id] def get_transformed_cat(coco_cat_id): """ transformed_cat_id is [0,80) as indices in cfg.dataset.class_names """ return coco_cats_inv[coco_cat_id] class Detections: def __init__(self): self.bbox_data = [] self.mask_data = [] def add_bbox(self, image_id:int, category_id:int, bbox:list, score:float): """ Note that bbox should be a list or tuple of (x1, y1, x2, y2) """ bbox = [bbox[0], bbox[1], bbox[2]-bbox[0], bbox[3]-bbox[1]] # Round to the nearest 10th to avoid huge file sizes, as COCO suggests bbox = [round(float(x)*10)/10 for x in bbox] self.bbox_data.append({ 'image_id': int(image_id), 'category_id': get_coco_cat(int(category_id)), 'bbox': bbox, 'score': float(score) }) def add_mask(self, image_id:int, category_id:int, segmentation:np.ndarray, score:float): """ The segmentation should be the full mask, the size of the image and with size [h, w]. """ rle = pycocotools.mask.encode(np.asfortranarray(segmentation.astype(np.uint8))) rle['counts'] = rle['counts'].decode('ascii') # json.dump doesn't like bytes strings self.mask_data.append({ 'image_id': int(image_id), 'category_id': get_coco_cat(int(category_id)), 'segmentation': rle, 'score': float(score) }) def dump(self): dump_arguments = [ (self.bbox_data, args.bbox_det_file), (self.mask_data, args.mask_det_file) ] for data, path in dump_arguments: with open(path, 'w') as f: json.dump(data, f) def dump_web(self): """ Dumps it in the format for my web app. Warning: bad code ahead! """ config_outs = ['preserve_aspect_ratio', 'use_prediction_module', 'use_yolo_regressors', 'use_prediction_matching', 'train_masks'] output = { 'info' : { 'Config': {key: getattr(cfg, key) for key in config_outs}, } } image_ids = list(set([x['image_id'] for x in self.bbox_data])) image_ids.sort() image_lookup = {_id: idx for idx, _id in enumerate(image_ids)} output['images'] = [{'image_id': image_id, 'dets': []} for image_id in image_ids] # These should already be sorted by score with the way prep_metrics works. 
for bbox, mask in zip(self.bbox_data, self.mask_data): image_obj = output['images'][image_lookup[bbox['image_id']]] image_obj['dets'].append({ 'score': bbox['score'], 'bbox': bbox['bbox'], 'category': cfg.dataset.class_names[get_transformed_cat(bbox['category_id'])], 'mask': mask['segmentation'], }) with open(os.path.join(args.web_det_path, '%s.json' % cfg.name), 'w') as f: json.dump(output, f) def _mask_iou(mask1, mask2, iscrowd=False): with timer.env('Mask IoU'): ret = mask_iou(mask1, mask2, iscrowd) return ret.cpu() def _bbox_iou(bbox1, bbox2, iscrowd=False): with timer.env('BBox IoU'): ret = jaccard(bbox1, bbox2, iscrowd) return ret.cpu() def prep_metrics(ap_data, dets, img, gt, gt_masks, h, w, num_crowd, image_id, detections:Detections=None): """ Returns a list of APs for this image, with each element being for a class """ if not args.output_coco_json: with timer.env('Prepare gt'): gt_boxes = torch.Tensor(gt[:, :4]) gt_boxes[:, [0, 2]] *= w gt_boxes[:, [1, 3]] *= h gt_classes = list(gt[:, 4].astype(int)) gt_masks = torch.Tensor(gt_masks).view(-1, h*w) if num_crowd > 0: split = lambda x: (x[-num_crowd:], x[:-num_crowd]) crowd_boxes , gt_boxes = split(gt_boxes) crowd_masks , gt_masks = split(gt_masks) crowd_classes, gt_classes = split(gt_classes) with timer.env('Postprocess'): classes, scores, boxes, masks = postprocess(dets, w, h, crop_masks=args.crop, score_threshold=args.score_threshold) if classes.size(0) == 0: return classes = list(classes.cpu().numpy().astype(int)) if isinstance(scores, list): box_scores = list(scores[0].cpu().numpy().astype(float)) mask_scores = list(scores[1].cpu().numpy().astype(float)) else: scores = list(scores.cpu().numpy().astype(float)) box_scores = scores mask_scores = scores masks = masks.view(-1, h*w).cuda() boxes = boxes.cuda() if args.output_coco_json: with timer.env('JSON Output'): boxes = boxes.cpu().numpy() masks = masks.view(-1, h, w).cpu().numpy() for i in range(masks.shape[0]): # Make sure that the bounding box actually makes sense and a mask was produced if (boxes[i, 3] - boxes[i, 1]) * (boxes[i, 2] - boxes[i, 0]) > 0: detections.add_bbox(image_id, classes[i], boxes[i,:], box_scores[i]) detections.add_mask(image_id, classes[i], masks[i,:,:], mask_scores[i]) return with timer.env('Eval Setup'): num_pred = len(classes) num_gt = len(gt_classes) mask_iou_cache = _mask_iou(masks, gt_masks) bbox_iou_cache = _bbox_iou(boxes.float(), gt_boxes.float()) if num_crowd > 0: crowd_mask_iou_cache = _mask_iou(masks, crowd_masks, iscrowd=True) crowd_bbox_iou_cache = _bbox_iou(boxes.float(), crowd_boxes.float(), iscrowd=True) else: crowd_mask_iou_cache = None crowd_bbox_iou_cache = None box_indices = sorted(range(num_pred), key=lambda i: -box_scores[i]) mask_indices = sorted(box_indices, key=lambda i: -mask_scores[i]) iou_types = [ ('box', lambda i,j: bbox_iou_cache[i, j].item(), lambda i,j: crowd_bbox_iou_cache[i,j].item(), lambda i: box_scores[i], box_indices), ('mask', lambda i,j: mask_iou_cache[i, j].item(), lambda i,j: crowd_mask_iou_cache[i,j].item(), lambda i: mask_scores[i], mask_indices) ] timer.start('Main loop') for _class in set(classes + gt_classes): ap_per_iou = [] num_gt_for_class = sum([1 for x in gt_classes if x == _class]) for iouIdx in range(len(iou_thresholds)): iou_threshold = iou_thresholds[iouIdx] for iou_type, iou_func, crowd_func, score_func, indices in iou_types: gt_used = [False] * len(gt_classes) ap_obj = ap_data[iou_type][iouIdx][_class] ap_obj.add_gt_positives(num_gt_for_class) for i in indices: if classes[i] != _class: continue 
max_iou_found = iou_threshold max_match_idx = -1 for j in range(num_gt): if gt_used[j] or gt_classes[j] != _class: continue iou = iou_func(i, j) if iou > max_iou_found: max_iou_found = iou max_match_idx = j if max_match_idx >= 0: gt_used[max_match_idx] = True ap_obj.push(score_func(i), True) else: # If the detection matches a crowd, we can just ignore it matched_crowd = False if num_crowd > 0: for j in range(len(crowd_classes)): if crowd_classes[j] != _class: continue iou = crowd_func(i, j) if iou > iou_threshold: matched_crowd = True break # All this crowd code so that we can make sure that our eval code gives the # same result as COCOEval. There aren't even that many crowd annotations to # begin with, but accuracy is of the utmost importance. if not matched_crowd: ap_obj.push(score_func(i), False) timer.stop('Main loop') class APDataObject: """ Stores all the information necessary to calculate the AP for one IoU and one class. Note: I type annotated this because why not. """ def __init__(self): self.data_points = [] self.num_gt_positives = 0 def push(self, score:float, is_true:bool): self.data_points.append((score, is_true)) def add_gt_positives(self, num_positives:int): """ Call this once per image. """ self.num_gt_positives += num_positives def is_empty(self) -> bool: return len(self.data_points) == 0 and self.num_gt_positives == 0 def get_ap(self) -> float: """ Warning: result not cached. """ if self.num_gt_positives == 0: return 0 # Sort descending by score self.data_points.sort(key=lambda x: -x[0]) precisions = [] recalls = [] num_true = 0 num_false = 0 # Compute the precision-recall curve. The x axis is recalls and the y axis precisions. for datum in self.data_points: # datum[1] is whether the detection a true or false positive if datum[1]: num_true += 1 else: num_false += 1 precision = num_true / (num_true + num_false) recall = num_true / self.num_gt_positives precisions.append(precision) recalls.append(recall) # Smooth the curve by computing [max(precisions[i:]) for i in range(len(precisions))] # Basically, remove any temporary dips from the curve. # At least that's what I think, idk. COCOEval did it so I do too. for i in range(len(precisions)-1, 0, -1): if precisions[i] > precisions[i-1]: precisions[i-1] = precisions[i] # Compute the integral of precision(recall) d_recall from recall=0->1 using fixed-length riemann summation with 101 bars. y_range = [0] * 101 # idx 0 is recall == 0.0 and idx 100 is recall == 1.00 x_range = np.array([x / 100 for x in range(101)]) recalls = np.array(recalls) # I realize this is weird, but all it does is find the nearest precision(x) for a given x in x_range. # Basically, if the closest recall we have to 0.01 is 0.009 this sets precision(0.01) = precision(0.009). # I approximate the integral this way, because that's how COCOEval does it. indices = np.searchsorted(recalls, x_range, side='left') for bar_idx, precision_idx in enumerate(indices): if precision_idx < len(precisions): y_range[bar_idx] = precisions[precision_idx] # Finally compute the riemann sum to get our integral. # avg([precision(x) for x in 0:0.01:1]) return sum(y_range) / len(y_range) def badhash(x): """ Just a quick and dirty hash function for doing a deterministic shuffle based on image_id. 
Source: https://stackoverflow.com/questions/664014/what-integer-hash-function-are-good-that-accepts-an-integer-hash-key """ x = (((x >> 16) ^ x) * 0x045d9f3b) & 0xFFFFFFFF x = (((x >> 16) ^ x) * 0x045d9f3b) & 0xFFFFFFFF x = ((x >> 16) ^ x) & 0xFFFFFFFF return x def evalimage(net:Yolact, path:str, save_path:str=None): frame = torch.from_numpy(cv2.imread(path)).cuda().float() batch = FastBaseTransform()(frame.unsqueeze(0)) preds = net(batch) img_numpy = prep_display(preds, frame, None, None, undo_transform=False) if save_path is None: img_numpy = img_numpy[:, :, (2, 1, 0)] if save_path is None: plt.imshow(img_numpy) plt.title(path) plt.show() else: cv2.imwrite(save_path, img_numpy) def evalimages(net:Yolact, input_folder:str, output_folder:str): if not os.path.exists(output_folder): os.mkdir(output_folder) print() for p in Path(input_folder).glob('*'): path = str(p) name = os.path.basename(path) name = '.'.join(name.split('.')[:-1]) + '.png' out_path = os.path.join(output_folder, name) evalimage(net, path, out_path) print(path + ' -> ' + out_path) print('Done.') from multiprocessing.pool import ThreadPool from queue import Queue class CustomDataParallel(torch.nn.DataParallel): """ A Custom Data Parallel class that properly gathers lists of dictionaries. """ def gather(self, outputs, output_device): # Note that I don't actually want to convert everything to the output_device return sum(outputs, []) def evalvideo(net:Yolact, path:str, out_path:str=None): # If the path is a digit, parse it as a webcam index is_webcam = path.isdigit() # If the input image size is constant, this make things faster (hence why we can use it in a video setting). cudnn.benchmark = True if is_webcam: vid = cv2.VideoCapture(int(path)) else: vid = cv2.VideoCapture(path) if not vid.isOpened(): print('Could not open video "%s"' % path) exit(-1) target_fps = round(vid.get(cv2.CAP_PROP_FPS)) frame_width = round(vid.get(cv2.CAP_PROP_FRAME_WIDTH)) frame_height = round(vid.get(cv2.CAP_PROP_FRAME_HEIGHT)) if is_webcam: num_frames = float('inf') else: num_frames = round(vid.get(cv2.CAP_PROP_FRAME_COUNT)) net = CustomDataParallel(net).cuda() transform = torch.nn.DataParallel(FastBaseTransform()).cuda() frame_times = MovingAverage(100) fps = 0 frame_time_target = 1 / target_fps running = True fps_str = '' vid_done = False frames_displayed = 0 if out_path is not None: out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), target_fps, (frame_width, frame_height)) def cleanup_and_exit(): print() pool.terminate() vid.release() if out_path is not None: out.release() cv2.destroyAllWindows() exit() def get_next_frame(vid): frames = [] for idx in range(args.video_multiframe): frame = vid.read()[1] if frame is None: return frames frames.append(frame) return frames def transform_frame(frames): with torch.no_grad(): frames = [torch.from_numpy(frame).cuda().float() for frame in frames] return frames, transform(torch.stack(frames, 0)) def eval_network(inp): with torch.no_grad(): frames, imgs = inp num_extra = 0 while imgs.size(0) < args.video_multiframe: imgs = torch.cat([imgs, imgs[0].unsqueeze(0)], dim=0) num_extra += 1 out = net(imgs) if num_extra > 0: out = out[:-num_extra] return frames, out def prep_frame(inp, fps_str): with torch.no_grad(): frame, preds = inp return prep_display(preds, frame, None, None, undo_transform=False, class_color=True, fps_str=fps_str) frame_buffer = Queue() video_fps = 0 # All this timing code to make sure that def play_video(): try: nonlocal frame_buffer, running, video_fps, is_webcam, num_frames, 
frames_displayed, vid_done video_frame_times = MovingAverage(100) frame_time_stabilizer = frame_time_target last_time = None stabilizer_step = 0.0005 progress_bar = ProgressBar(30, num_frames) while running: frame_time_start = time.time() if not frame_buffer.empty(): next_time = time.time() if last_time is not None: video_frame_times.add(next_time - last_time) video_fps = 1 / video_frame_times.get_avg() if out_path is None: cv2.imshow(path, frame_buffer.get()) else: out.write(frame_buffer.get()) frames_displayed += 1 last_time = next_time if out_path is not None: if video_frame_times.get_avg() == 0: fps = 0 else: fps = 1 / video_frame_times.get_avg() progress = frames_displayed / num_frames * 100 progress_bar.set_val(frames_displayed) print('\rProcessing Frames %s %6d / %6d (%5.2f%%) %5.2f fps ' % (repr(progress_bar), frames_displayed, num_frames, progress, fps), end='') # This is split because you don't want savevideo to require cv2 display functionality (see #197) if out_path is None and cv2.waitKey(1) == 27: # Press Escape to close running = False if not (frames_displayed < num_frames): running = False if not vid_done: buffer_size = frame_buffer.qsize() if buffer_size < args.video_multiframe: frame_time_stabilizer += stabilizer_step elif buffer_size > args.video_multiframe: frame_time_stabilizer -= stabilizer_step if frame_time_stabilizer < 0: frame_time_stabilizer = 0 new_target = frame_time_stabilizer if is_webcam else max(frame_time_stabilizer, frame_time_target) else: new_target = frame_time_target next_frame_target = max(2 * new_target - video_frame_times.get_avg(), 0) target_time = frame_time_start + next_frame_target - 0.001 # Let's just subtract a millisecond to be safe if out_path is None or args.emulate_playback: # This gives more accurate timing than if sleeping the whole amount at once while time.time() < target_time: time.sleep(0.001) else: # Let's not starve the main thread, now time.sleep(0.001) except: # See issue #197 for why this is necessary import traceback traceback.print_exc() extract_frame = lambda x, i: (x[0][i] if x[1][i]['detection'] is None else x[0][i].to(x[1][i]['detection']['box'].device), [x[1][i]]) # Prime the network on the first frame because I do some thread unsafe things otherwise print('Initializing model... ', end='') first_batch = eval_network(transform_frame(get_next_frame(vid))) print('Done.') # For each frame the sequence of functions it needs to go through to be processed (in reversed order) sequence = [prep_frame, eval_network, transform_frame] pool = ThreadPool(processes=len(sequence) + args.video_multiframe + 2) pool.apply_async(play_video) active_frames = [{'value': extract_frame(first_batch, i), 'idx': 0} for i in range(len(first_batch[0]))] print() if out_path is None: print('Press Escape to close.') try: while vid.isOpened() and running: # Hard limit on frames in buffer so we don't run out of memory >.> while frame_buffer.qsize() > 100: time.sleep(0.001) start_time = time.time() # Start loading the next frames from the disk if not vid_done: next_frames = pool.apply_async(get_next_frame, args=(vid,)) else: next_frames = None if not (vid_done and len(active_frames) == 0): # For each frame in our active processing queue, dispatch a job # for that frame using the current function in the sequence for frame in active_frames: _args = [frame['value']] if frame['idx'] == 0: _args.append(fps_str) frame['value'] = pool.apply_async(sequence[frame['idx']], args=_args) # For each frame whose job was the last in the sequence (i.e. 
def evaluate(net:Yolact, dataset, train_mode=False):
    net.detect.use_fast_nms = args.fast_nms
    net.detect.use_cross_class_nms = args.cross_class_nms
    cfg.mask_proto_debug = args.mask_proto_debug

    # TODO: Currently we do not support Fast Mask Re-scoring in evalimage, evalimages, and evalvideo
    if args.image is not None:
        if ':' in args.image:
            inp, out = args.image.split(':')
            evalimage(net, inp, out)
        else:
            evalimage(net, args.image)
        return
    elif args.images is not None:
        # NOTE: the original post hard-coded both paths as the *separator* here
        # (args.images.split('E:/yolact-master/coco/images/train2017: E:/yolact-master/results/output')),
        # which caused the ValueError in the traceback below. Upstream splits on ':'.
        inp, out = args.images.split(':')
        evalimages(net, inp, out)
        return
    elif args.video is not None:
        if ':' in args.video:
            inp, out = args.video.split(':')
            evalvideo(net, inp, out)
        else:
            evalvideo(net, args.video)
        return

    frame_times = MovingAverage()
    dataset_size = len(dataset) if args.max_images < 0 else min(args.max_images, len(dataset))
    progress_bar = ProgressBar(30, dataset_size)

    print()

    if not args.display and not args.benchmark:
        # For each class and iou, stores tuples (score, isPositive)
        # Index ap_data[type][iouIdx][classIdx]
        ap_data = {
            'box' : [[APDataObject() for _ in cfg.dataset.class_names] for _ in iou_thresholds],
            'mask': [[APDataObject() for _ in cfg.dataset.class_names] for _ in iou_thresholds]
        }
        detections = Detections()
    else:
        timer.disable('Load Data')

    dataset_indices = list(range(len(dataset)))

    if args.shuffle:
        random.shuffle(dataset_indices)
    elif not args.no_sort:
        # Do a deterministic shuffle based on the image ids
        #
        # I do this because on python 3.5 dictionary key order is *random*, while in 3.6 it's
        # the order of insertion. That means on python 3.6, the images come in the order they are in
        # in the annotations file. For some reason, the first images in the annotations file are
        # the hardest. To combat this, I use a hard-coded hash function based on the image ids
        # to shuffle the indices we use. That way, no matter what python version or how pycocotools
        # handles the data, we get the same result every time.
        hashed = [badhash(x) for x in dataset.ids]
        dataset_indices.sort(key=lambda x: hashed[x])

    dataset_indices = dataset_indices[:dataset_size]

    try:
        # Main eval loop
        for it, image_idx in enumerate(dataset_indices):
            timer.reset()

            with timer.env('Load Data'):
                img, gt, gt_masks, h, w, num_crowd = dataset.pull_item(image_idx)

                # Test flag, do not upvote
                if cfg.mask_proto_debug:
                    with open('scripts/info.txt', 'w') as f:
                        f.write(str(dataset.ids[image_idx]))
                    np.save('scripts/gt.npy', gt_masks)

                batch = Variable(img.unsqueeze(0))
                if args.cuda:
                    batch = batch.cuda()

            with timer.env('Network Extra'):
                preds = net(batch)

            # Perform the meat of the operation here depending on our mode.
            if args.display:
                img_numpy = prep_display(preds, img, h, w)
            elif args.benchmark:
                prep_benchmark(preds, h, w)
            else:
                prep_metrics(ap_data, preds, img, gt, gt_masks, h, w, num_crowd, dataset.ids[image_idx], detections)

            # First couple of images take longer because we're constructing the graph.
            # Since that's technically initialization, don't include those in the FPS calculations.
            if it > 1:
                frame_times.add(timer.total_time())

            if args.display:
                if it > 1:
                    print('Avg FPS: %.4f' % (1 / frame_times.get_avg()))
                plt.imshow(img_numpy)
                plt.title(str(dataset.ids[image_idx]))
                plt.show()
            elif not args.no_bar:
                if it > 1:
                    fps = 1 / frame_times.get_avg()
                else:
                    fps = 0
                progress = (it+1) / dataset_size * 100
                progress_bar.set_val(it+1)
                print('\rProcessing Images %s %6d / %6d (%5.2f%%) %5.2f fps '
                      % (repr(progress_bar), it+1, dataset_size, progress, fps), end='')

        if not args.display and not args.benchmark:
            print()
            if args.output_coco_json:
                print('Dumping detections...')
                if args.output_web_json:
                    detections.dump_web()
                else:
                    detections.dump()
            else:
                if not train_mode:
                    print('Saving data...')
                    with open(args.ap_data_file, 'wb') as f:
                        pickle.dump(ap_data, f)
                return calc_map(ap_data)
        elif args.benchmark:
            print()
            print()
            print('Stats for the last frame:')
            timer.print_stats()
            avg_seconds = frame_times.get_avg()
            print('Average: %5.2f fps, %5.2f ms' % (1 / frame_times.get_avg(), 1000*avg_seconds))
    except KeyboardInterrupt:
        print('Stopping...')
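evaluate accepts --image/--images/--video values of the form input:output and splits on the colon. Note that on Windows, absolute paths such as E:/... themselves contain a colon, so even the upstream split(':') misfires there. A hypothetical helper (not part of YOLACT) that only splits on a colon not followed by a path separator might look like:

import re

# Hypothetical helper (not part of YOLACT): split an "input:output" spec on a
# colon that is NOT followed by a slash or backslash, so Windows drive
# letters like "E:/..." are left intact.
def split_in_out(spec):
    parts = [p.strip() for p in re.split(r':(?![/\\])', spec)]
    if len(parts) != 2:
        raise ValueError('expected exactly one input:output separator in %r' % spec)
    return parts[0], parts[1]

print(split_in_out('inputs:outputs'))
print(split_in_out('E:/yolact-master/coco/images/train2017:E:/yolact-master/results/output'))

This trades one edge case for another (a Unix-style absolute output path starting with '/' would no longer split), which is why simply passing relative paths is the least fragile workaround.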
def calc_map(ap_data):
    print('Calculating mAP...')
    aps = [{'box': [], 'mask': []} for _ in iou_thresholds]

    for _class in range(len(cfg.dataset.class_names)):
        for iou_idx in range(len(iou_thresholds)):
            for iou_type in ('box', 'mask'):
                ap_obj = ap_data[iou_type][iou_idx][_class]

                if not ap_obj.is_empty():
                    aps[iou_idx][iou_type].append(ap_obj.get_ap())

    all_maps = {'box': OrderedDict(), 'mask': OrderedDict()}

    # Looking back at it, this code is really hard to read :/
    for iou_type in ('box', 'mask'):
        all_maps[iou_type]['all'] = 0 # Make this first in the ordereddict
        for i, threshold in enumerate(iou_thresholds):
            mAP = sum(aps[i][iou_type]) / len(aps[i][iou_type]) * 100 if len(aps[i][iou_type]) > 0 else 0
            all_maps[iou_type][int(threshold*100)] = mAP
        all_maps[iou_type]['all'] = (sum(all_maps[iou_type].values()) / (len(all_maps[iou_type].values())-1))

    print_maps(all_maps)

    # Put in a prettier format so we can serialize it to json during training
    all_maps = {k: {j: round(u, 2) for j, u in v.items()} for k, v in all_maps.items()}
    return all_maps

def print_maps(all_maps):
    # Warning: hacky
    make_row = lambda vals: (' %5s |' * len(vals)) % tuple(vals)
    make_sep = lambda n: ('-------+' * n)

    print()
    print(make_row([''] + [('.%d ' % x if isinstance(x, int) else x + ' ') for x in all_maps['box'].keys()]))
    print(make_sep(len(all_maps['box']) + 1))
    for iou_type in ('box', 'mask'):
        print(make_row([iou_type] + ['%.2f' % x if x < 100 else '%.1f' % x for x in all_maps[iou_type].values()]))
    print(make_sep(len(all_maps['box']) + 1))
    print()
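To make the aggregation in calc_map concrete, here is a toy version with made-up AP values: per-class APs are averaged into a per-threshold mAP, and the 'all' entry is the mean over thresholds (the 'all' key is inserted first only so it prints first; its placeholder zero is excluded by the len - 1):

from collections import OrderedDict

# Toy illustration of calc_map's aggregation; thresholds and APs are made up.
iou_thresholds = [0.50, 0.55, 0.60]
aps_per_threshold = [[0.80, 0.60], [0.70, 0.50], [0.60, 0.40]]  # per-class APs

all_maps = OrderedDict()
all_maps['all'] = 0  # placeholder so 'all' is first in the OrderedDict
for thr, aps in zip(iou_thresholds, aps_per_threshold):
    all_maps[int(thr * 100)] = sum(aps) / len(aps) * 100
all_maps['all'] = sum(all_maps.values()) / (len(all_maps) - 1)
print(all_maps)  # OrderedDict([('all', 60.0), (50, 70.0), (55, 60.0), (60, 50.0)])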
if __name__ == '__main__':
    parse_args()

    if args.config is not None:
        set_cfg(args.config)

    if args.trained_model == 'interrupt':
        args.trained_model = SavePath.get_interrupt('weights/')
    elif args.trained_model == 'latest':
        args.trained_model = SavePath.get_latest('weights/', cfg.name)

    if args.config is None:
        model_path = SavePath.from_str(args.trained_model)
        # TODO: Bad practice? Probably want to do a name lookup instead.
        args.config = model_path.model_name + '_config'
        print('Config not specified. Parsed %s from the file name.\n' % args.config)
        set_cfg(args.config)

    if args.detect:
        cfg.eval_mask_branch = False

    if args.dataset is not None:
        set_dataset(args.dataset)

    with torch.no_grad():
        if not os.path.exists('results'):
            os.makedirs('results')

        if args.cuda:
            cudnn.fastest = True
            torch.set_default_tensor_type('torch.cuda.FloatTensor')
        else:
            torch.set_default_tensor_type('torch.FloatTensor')

        if args.resume and not args.display:
            with open(args.ap_data_file, 'rb') as f:
                ap_data = pickle.load(f)
            calc_map(ap_data)
            exit()

        if args.image is None and args.video is None and args.images is None:
            dataset = COCODetection(cfg.dataset.valid_images, cfg.dataset.valid_info,
                                    transform=BaseTransform(), has_gt=cfg.dataset.has_gt)
            prep_coco_cats()
        else:
            dataset = None

        print('Loading model...', end='')
        net = Yolact()
        net.load_weights(args.trained_model)
        net.eval()
        print(' Done.')

        if args.cuda:
            net = net.cuda()

        evaluate(net, dataset)

With the hard-coded paths used as the split separator (instead of ':'), running the script produced:

Traceback (most recent call last):
  File "eval.py", line 1105, in <module>
    evaluate(net, dataset)
  File "eval.py", line 884, in evaluate
    inp, out = args.images.split('E:/yolact-master/coco/images/train2017: E:/yolact-master/results/output')
ValueError: not enough values to unpack (expected 2, got 1)
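The ValueError is exactly what str.split does when the separator never occurs in the string: it returns a one-element list, which cannot be unpacked into inp and out. A minimal reproduction, with made-up strings:

# Minimal reproduction of the traceback's ValueError.
spec = 'in_folder:out_folder'
try:
    inp, out = spec.split('some/long/path: another/path')  # wrong: paths used as separator
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)

inp, out = spec.split(':')  # upstream behavior: split on the colon itself
print(inp, out)             # in_folder out_folder

The fix is to leave the upstream split(':') in place and pass the two paths on the command line instead, along the lines of python eval.py --trained_model=weights/yolact_base_54_800000.pth --images=inputs:outputs (the weight-file name here is illustrative).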