Sunday, March 17, 2019

AI Algorithms Are Now Shockingly Good at Doing Science

No human, or team of humans, could possibly keep up with the avalanche of information produced by many of today’s physics and astronomy experiments. Some of them record terabytes of data every day, and the torrent is only increasing. The Square Kilometer Array, a radio telescope slated to switch on in the mid-2020s, will generate about as much data traffic each year as the entire internet.

The deluge has many scientists turning to artificial intelligence for help. With minimal human input, AI systems such as artificial neural networks (computer-simulated networks of neurons that mimic the function of brains) can plow through mountains of data, highlighting anomalies and detecting patterns that humans could never have spotted.

Of course, the use of computers to aid in scientific research goes back about 75 years, and the method of manually poring over data in search of meaningful patterns originated millennia earlier. But some scientists are arguing that the latest techniques in machine learning and AI represent a fundamentally new way of doing science. One such approach, known as generative modeling, can help identify the most plausible theory among competing explanations for observational data, based solely on the data, and, importantly, without any preprogrammed knowledge of what physical processes might be at work in the system under study. Proponents of generative modeling see it as novel enough to be considered a potential “third way” of learning about the universe.

Traditionally, we’ve learned about nature through observation. Think of Johannes Kepler poring over Tycho Brahe’s tables of planetary positions and trying to discern the underlying pattern; he eventually deduced that planets move in elliptical orbits. Science has also advanced through simulation. An astronomer might model the movement of the Milky Way and its neighboring galaxy, Andromeda, and predict that they’ll collide in a few billion years. Both observation and simulation help scientists generate hypotheses that can then be tested with further observations. Generative modeling differs from both of these approaches.

“It’s basically a third approach, between observation and simulation,” says Kevin Schawinski, an astrophysicist and one of generative modeling’s most enthusiastic proponents, who worked until recently at the Swiss Federal Institute of Technology Zurich (ETH Zurich). “It’s a different way to attack a problem.”

Some scientists see generative modeling and other new techniques simply as power tools for doing traditional science. But most agree that AI is having an enormous impact, and that its role in science will only grow. Brian Nord, an astrophysicist at Fermi National Accelerator Laboratory who uses artificial neural networks to study the cosmos, is among those who fear there’s nothing a human scientist does that will be impossible to automate. “It’s a bit of a chilling thought,” he said.

Discovery by Generation

Ever since graduate school, Schawinski has been making a name for himself in data-driven science. While working on his doctorate, he faced the task of classifying thousands of galaxies based on their appearance.
Because no readily available software existed for the job, he decided to crowdsource it, and so the Galaxy Zoo citizen science project was born. Beginning in 2007, ordinary computer users helped astronomers by logging their best guesses as to which galaxy belonged in which category, with majority rule typically leading to correct classifications. The project was a success, but, as Schawinski notes, AI has made it obsolete: “Today, a talented scientist with a background in machine learning and access to cloud computing could do the whole thing in an afternoon.”

Schawinski turned to the powerful new tool of generative modeling in 2016. Essentially, generative modeling asks how likely it is, given condition X, that you’ll observe outcome Y. The approach has proved incredibly potent and versatile. As an example, suppose you feed a generative model a set of images of human faces, with each face labeled with the person’s age. As the computer program combs through these “training data,” it begins to draw a connection between older faces and an increased likelihood of wrinkles. Eventually it can “age” any face that it’s given; that is, it can predict what physical changes a given face of any age is likely to undergo.

None of these faces is real. The faces in the top row (A) and left-hand column (B) were constructed by a generative adversarial network (GAN) using building-block elements of real faces. The GAN then combined basic features of the faces in A, including their gender, age and face shape, with finer features of faces in B, such as hair color and eye color, to create all the faces in the rest of the grid.

The best-known generative modeling systems are “generative adversarial networks” (GANs). After adequate exposure to training data, a GAN can repair images that have damaged or missing pixels, or it can make blurry photographs sharp. GANs learn to infer the missing information by means of a competition (hence the term “adversarial”): One part of the network, known as the generator, generates fake data, while a second part, the discriminator, tries to distinguish fake data from real data. As the program runs, both halves get progressively better. You may have seen some of the hyper-realistic, GAN-produced “faces” that have circulated recently: images of “freakishly realistic people who don’t actually exist,” as one headline put it.

More broadly, generative modeling takes sets of data (typically images, but not always) and breaks each of them down into a set of basic, abstract building blocks; scientists refer to this as the data’s “latent space.” The algorithm manipulates elements of the latent space to see how this affects the original data, and this helps uncover physical processes that are at work in the system.

The idea of a latent space is abstract and hard to visualize, but as a rough analogy, think of what your brain might be doing when you try to determine the gender of a human face. Perhaps you notice hairstyle, nose shape, and so on, as well as patterns you can’t easily put into words. The computer program is similarly looking for salient features among data: Though it has no idea what a mustache is or what gender is, if it’s been trained on data sets in which some images are tagged “man” or “woman,” and in which some have a “mustache” tag, it will quickly deduce a connection.
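
To make the generator-versus-discriminator competition concrete, here is a minimal GAN sketch in PyTorch. It is purely illustrative: the “real data” is a one-dimensional Gaussian rather than images, and every architecture and hyperparameter choice below is an assumption of this sketch, not drawn from any system described in the article.

```python
# A minimal GAN sketch: a generator learns to mimic a 1-D Gaussian
# ("real data"), while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8  # size of the latent space the generator samples from

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 4.0 + 1.5 * torch.randn(64, 1)          # stand-in for "real images"
    fake = generator(torch.randn(64, latent_dim))  # the generator's forgeries

    # Discriminator update: push real toward 1, fake toward 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator call fakes real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should roughly match the real distribution.
with torch.no_grad():
    samples = generator(torch.randn(1000, latent_dim))
print(f"generated mean ≈ {samples.mean().item():.2f}, std ≈ {samples.std().item():.2f}")
```

As both halves improve, the generator’s output becomes statistically indistinguishable from the real data, which is the same dynamic the article describes at the scale of photorealistic faces.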

Kevin Schawinski, an astrophysicist who runs an AI company called Modulos, argues that a technique called generative modeling offers a third way of learning about the universe.

In a paper published in December in Astronomy & Astrophysics, Schawinski and his ETH Zurich colleagues Dennis Turp and Ce Zhang used generative modeling to investigate the physical changes that galaxies undergo as they evolve. (The software they used treats the latent space somewhat differently from the way a generative adversarial network treats it, so it is not technically a GAN, though similar.) Their model created artificial data sets as a way of testing hypotheses about physical processes. They asked, for instance, how the “quenching” of star formation (a sharp reduction in formation rates) is related to the increasing density of a galaxy’s environment.

For Schawinski, the key question is how much information about stellar and galactic processes could be teased out of the data alone. “Let’s erase everything we know about astrophysics,” he said. “To what degree could we rediscover that knowledge, just using the data itself?”

First, the galaxy images were reduced to their latent space; then, Schawinski could tweak one element of that space in a way that corresponded to a particular change in the galaxy’s environment, such as the density of its surroundings. Then he could re-generate the galaxy and see what differences turned up. “So now I have a hypothesis-generation machine,” he explained. “I can take a whole bunch of galaxies that are originally in a low-density environment and make them look like they’re in a high-density environment, by this process.”

Schawinski, Turp and Zhang saw that, as galaxies go from low- to high-density environments, they become redder in color, and their stars become more centrally concentrated. This matches existing observations about galaxies, Schawinski said. The question is why this is so.

The next step, Schawinski says, has not yet been automated: “I have to come in as a human, and say, ‘OK, what kind of physics could explain this effect?’” For the process in question, there are two plausible explanations: Perhaps galaxies become redder in high-density environments because they contain more dust, or perhaps they become redder because of a decline in star formation (in other words, their stars tend to be older). With a generative model, both ideas can be put to the test: Elements in the latent space related to dustiness and star formation rates are changed to see how this affects galaxies’ color. “And the answer is clear,” Schawinski said. Redder galaxies are “where the star formation had dropped, not the ones where the dust changed. So we should favor that explanation.”

Using generative modeling, astrophysicists could investigate how galaxies change when they go from low-density regions of the cosmos to high-density regions, and what physical processes are responsible for these changes.

The approach is related to traditional simulation, but with critical differences. A simulation is “essentially assumption-driven,” Schawinski said. “The approach is to say, ‘I think I know what the underlying physical laws are that give rise to everything that I see in the system.’ So I have a recipe for star formation, I have a recipe for how dark matter behaves, and so on. I put all of my hypotheses in there, and I let the simulation run. And then I ask: Does that look like reality?” What he’s done with generative modeling, he said, is “in some sense, exactly the opposite of a simulation. We don’t know anything; we don’t want to assume anything. We want the data itself to tell us what might be going on.”
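
The encode, nudge-one-latent-dimension, decode, compare loop behind this “hypothesis-generation machine” can be sketched in a few lines. Everything below is hypothetical: the encoder and decoder are random stand-ins for whatever trained generative model is actually used, and `ENV_DENSITY_DIM` is an invented name for the latent coordinate tied to environmental density.

```python
import numpy as np

# Hypothetical stand-ins for a trained generative model's two halves.
# In practice these would be learned networks, not a random projection.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(16, 64))   # "image" (64 values) -> latent (16 dims)
W_dec = np.linalg.pinv(W_enc)       # latent -> image, here just a pseudo-inverse

ENV_DENSITY_DIM = 3  # invented: the latent axis correlated with environment density

def encode(image):
    return W_enc @ image

def decode(z):
    return W_dec @ z

galaxy = rng.normal(size=64)        # placeholder for a real galaxy image

z = encode(galaxy)
z_dense = z.copy()
z_dense[ENV_DENSITY_DIM] += 2.0     # "move" the galaxy to a denser environment

# Compare original and counterfactual galaxies; in the real study, the
# systematic differences (color, concentration) are what suggest hypotheses.
delta = decode(z_dense) - decode(z)
print("mean pixel change:", delta.mean())
```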

The apparent success of generative modeling in a study like this obviously doesn’t mean that astronomers and graduate students have been made redundant, but it appears to represent a shift in the degree to which learning about astrophysical objects and processes can be achieved by an artificial system that has little more at its electronic fingertips than a vast pool of data. “It’s not fully automated science, but it demonstrates that we’re capable of, at least in part, building the tools that make the process of science automatic,” Schawinski said.

Generative modeling is clearly powerful, but whether it truly represents a new approach to science is open to debate. For David Hogg, a cosmologist at New York University and the Flatiron Institute (which, like Quanta, is funded by the Simons Foundation), the technique is impressive but ultimately just a very sophisticated way of extracting patterns from data, which is what astronomers have been doing for centuries. In other words, it’s an advanced form of observation plus analysis. Hogg’s own work, like Schawinski’s, leans heavily on AI; he’s been using neural networks to classify stars according to their spectra and to infer other physical attributes of stars using data-driven models. But he sees his work, as well as Schawinski’s, as tried-and-true science. “I don’t think it’s a third way,” he said recently. “I just think we as a community are becoming far more sophisticated about how we use the data. In particular, we are getting much better at comparing data to data. But in my view, my work is still squarely in the observational mode.”

Hardworking Assistants

Whether they’re conceptually novel or not, it’s clear that AI and neural networks have come to play a critical role in contemporary astronomy and physics research. At the Heidelberg Institute for Theoretical Studies, the physicist Kai Polsterer heads the astroinformatics group, a team of researchers focused on new, data-centered methods of doing astrophysics. Recently, they’ve been using a machine-learning algorithm to extract redshift information from galaxy data sets, a previously arduous task.

Polsterer sees these new AI-based systems as “hardworking assistants” that can comb through data for hours on end without getting bored or complaining about the working conditions. These systems can do all the tedious grunt work, he said, leaving you “to do the cool, interesting science on your own.”

But they’re not perfect. In particular, Polsterer cautions, the algorithms can only do what they’ve been trained to do. The system is “agnostic” regarding the input. Give it a galaxy, and the software can estimate its redshift and its age. But feed that same system a selfie, or a picture of a rotting fish, and it will output a very wrong age for that, too. In the end, oversight by a human scientist remains essential, he said. “It comes back to you, the researcher. You’re the one in charge of doing the interpretation.”

For his part, Nord, at Fermilab, cautions that it’s crucial that neural networks deliver not only results, but also error bars to go along with them, as every undergraduate is trained to do. In science, if you make a measurement and don’t report an estimate of the associated error, no one will take the results seriously, he said.
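
One common way to get the error bars Nord asks for (though not necessarily the one his group uses) is an ensemble: refit the model many times on resampled data and report the spread of its predictions as the uncertainty. A minimal sketch, with a toy one-feature “redshift” relation standing in for real spectra:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "observable -> redshift" data: one feature, noisy linear relation.
x = rng.uniform(0, 1, 200)
y = 2.5 * x + 0.1 * rng.normal(size=200)

# Bootstrap ensemble: refit on resampled data, collect predictions.
preds = []
for _ in range(50):
    idx = rng.integers(0, len(x), len(x))       # resample with replacement
    coeffs = np.polyfit(x[idx], y[idx], deg=1)  # toy "network": a line fit
    preds.append(np.polyval(coeffs, 0.7))       # predict for a new object

mean, err = np.mean(preds), np.std(preds)
print(f"predicted redshift: {mean:.3f} ± {err:.3f}")  # value with an error bar
```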

Like many AI researchers, Nord is also concerned about the impenetrability of results produced by neural networks; often, a system delivers an answer without offering a clear picture of how that result was obtained. Yet not everyone feels that a lack of transparency is necessarily a problem. Lenka Zdeborová, a researcher at the Institute of Theoretical Physics at CEA Saclay in France, points out that human intuitions are often equally impenetrable. You look at a photograph and instantly recognize a cat, “but you don’t know how you know,” she said. “Your own brain is in some sense a black box.”

It’s not only astrophysicists and cosmologists who are migrating toward AI-fueled, data-driven science. Quantum physicists like Roger Melko of the Perimeter Institute for Theoretical Physics and the University of Waterloo in Ontario have used neural networks to solve some of the toughest and most important problems in that field, such as how to represent the mathematical “wave function” describing a many-particle system. AI is essential because of what Melko calls “the exponential curse of dimensionality.” That is, the possibilities for the form of a wave function grow exponentially with the number of particles in the system it describes. The difficulty is similar to trying to work out the best move in a game like chess or Go: You try to peer ahead to the next move, imagining what your opponent will play, and then choose the best response, but with each move, the number of possibilities proliferates. Of course, AI systems have mastered both of these games: chess, decades ago, and Go in 2016, when an AI system called AlphaGo defeated a top human player. They are similarly suited to problems in quantum physics, Melko says.
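
The “exponential curse” is easy to see numerically: a general wave function of n two-state particles (qubits) needs 2^n complex amplitudes, so brute-force storage becomes hopeless almost immediately. A quick back-of-the-envelope illustration:

```python
# A general n-particle (qubit) wave function needs 2**n complex amplitudes.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2**30   # 16 bytes per complex128 amplitude
    print(f"{n} particles: {amplitudes:.3e} amplitudes ≈ {gib:,.0f} GiB")

# 50 particles already needs ~16 million GiB. Neural-network approaches
# sidestep this by learning a compact approximation to the wave function.
```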

The Mind of the Machine

Whether Schawinski is right in claiming that he’s found a “third way” of doing science, or whether, as Hogg says, it’s merely traditional observation and data analysis “on steroids,” it’s clear AI is changing the flavor of scientific discovery, and it’s certainly accelerating it. How far will the AI revolution go in science?

Occasionally, grand claims are made regarding the achievements of a “robo-scientist.” A decade ago, an AI robot chemist named Adam investigated the genome of baker’s yeast and worked out which genes are responsible for making certain amino acids. Adam did this by observing strains of yeast that had certain genes missing, and comparing the results to the behavior of strains that had the genes. Wired’s headline read, “Robot Makes Scientific Discovery All by Itself.”

More recently, Lee Cronin, a chemist at the University of Glasgow, has been using a robot to randomly mix chemicals, to see what sorts of new compounds are formed. Monitoring the reactions in real time with a mass spectrometer, a nuclear magnetic resonance machine, and an infrared spectrometer, the system eventually learned to predict which combinations would be the most reactive. Even if it doesn’t lead to further discoveries, Cronin has said, the robotic system could allow chemists to speed up their research by about 90 percent.

Last year, another team of scientists at ETH Zurich used neural networks to deduce physical laws from sets of data. Their system, a sort of robo-Kepler, rediscovered the heliocentric model of the solar system from records of the position of the sun and Mars in the sky, as seen from Earth, and figured out the law of conservation of momentum by observing colliding balls. Since physical laws can often be expressed in more than one way, the researchers wonder if the system might offer new ways, perhaps simpler ways, of thinking about known laws.

These are all examples of AI kick-starting the process of scientific discovery, though in every case, we can debate just how revolutionary the new approach is. Perhaps most controversial is the question of how much information can be gleaned from data alone, a pressing question in the age of stupendously large and growing piles of it. In The Book of Why (2018), the computer scientist Judea Pearl and the science writer Dana Mackenzie assert that data are “profoundly dumb.” Questions about causality “can never be answered from data alone,” they write. “Anytime you see a paper or a study that analyzes the data in a model-free way, you can be certain that the output of the study will merely summarize, and perhaps transform, but not interpret the data.” Schawinski sympathizes with Pearl’s position, but he described the idea of working with “data alone” as “a bit of a straw man.” He’s never claimed to deduce cause and effect that way, he said. “I’m merely saying we can do more with data than we often conventionally do.”

Another oft-heard argument is that science requires creativity, and that, at least so far, we have no idea how to program that into a machine. Simply trying everything, like Cronin’s robo-chemist, doesn’t seem especially creative. “Coming up with a theory, with reasoning, I think demands creativity,” Polsterer said. “Every time you need creativity, you will need a human.” And where does creativity come from? Polsterer suspects it is related to boredom, something that, he says, a machine cannot experience. “To be creative, you have to dislike being bored. And I don’t think a computer will ever feel bored.” On the other hand, words like “creative” and “inspired” have often been used to describe programs like Deep Blue and AlphaGo. And the struggle to describe what goes on inside the “mind” of a machine is mirrored by the difficulty we have in probing our own thought processes.

Schawinski recently left academia for the private sector; he now runs a startup called Modulos, which employs a number of ETH scientists and, according to its website, works “in the eye of the storm of developments in AI and machine learning.” Whatever obstacles may lie between current AI technology and full-fledged artificial minds, he and other experts feel that machines are poised to do more and more of the work of human scientists. Whether there is a limit remains to be seen.

“Will it be possible, in the foreseeable future, to build a machine that can discover physics or mathematics that the brightest humans alive are not able to do on their own, using biological hardware?” Schawinski wonders. “Will the future of science eventually necessarily be driven by machines that operate on a level that we can never reach? I don’t know. It’s a good question.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


PagerDuty IPO: Is AI The Secret Sauce? 

Because of the government shutdown earlier in the year, there was a delay with IPOs, as the SEC could not evaluate the filings. But now it looks like the market is getting ready for a flood of deals. One of the first will be PagerDuty, which was actually founded during the financial crisis of 2009. The core mission of the company is “to connect teams to real-time opportunity and elevate work to the outcomes that matter.”

Interestingly enough, PagerDuty refers to itself as the central nervous system of a digital enterprise. This means continuously analyzing systems to detect risks, but also to find opportunities to improve operations, increase revenues and promote more innovation. Keep in mind that this is far from easy. After all, most data is just useless noise. But then again, in today’s world where people expect quick action and standout customer experiences, it is important to truly understand data. The PagerDuty S-1 highlights this with some of the following findings:

- The abandon rate is 53% for mobile website visitors if the site takes longer than three seconds to load.
- A major online retailer can lose up to $500,000 in revenue for every minute of downtime.
- A survey from PricewaterhouseCoopers shows that 32% of customers say they would ditch a brand after one bad experience.

As for PagerDuty, it has built a massive data set from over 10,000 customers. Consider that this has allowed the company to leverage cutting-edge AI (artificial intelligence) models that supercharge the insights. Here’s how PagerDuty describes it in the S-1 filing: “We apply machine learning to data collected by our platform to help our customers identify incidents from the billions of digital signals they collect each day. We do this by automatically converting data from virtually any software-enabled system or device into a common format and applying machine-learning algorithms to find patterns and correlations across that data in real time. We provide teams with visibility into similar incidents and human context, based on data related to past actions that we have collected over time, enabling them to accelerate time to resolution.”
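
The pipeline that quote describes (normalize heterogeneous signals into a common format, then mine them for patterns in real time) can be illustrated with a toy sketch. To be clear, this is not PagerDuty’s actual system: the event schema, the z-score anomaly rule and all names below are invented for illustration.

```python
from dataclasses import dataclass
from statistics import mean, stdev

# Invented common event schema; real systems normalize far richer payloads.
@dataclass
class Event:
    source: str
    severity: float  # 0.0 (info) .. 1.0 (critical)

def normalize(raw: dict) -> Event:
    """Convert a vendor-specific payload into the common format."""
    return Event(source=raw.get("host", "unknown"),
                 severity=float(raw.get("sev", 0)) / 10.0)

def is_incident(history: list[float], current: float, z: float = 3.0) -> bool:
    """Toy pattern rule: flag severities more than z std devs above the mean."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (current - mu) / sigma > z

history: list[float] = []
raw_feed = [{"host": "db1", "sev": 1}, {"host": "db1", "sev": 2},
            {"host": "db1", "sev": 1}, {"host": "db1", "sev": 2},
            {"host": "db1", "sev": 9}]  # the spike a human would page on

for raw in raw_feed:
    ev = normalize(raw)
    if is_incident(history, ev.severity):
        print(f"incident detected on {ev.source} (severity {ev.severity:.1f})")
    history.append(ev.severity)
```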

The result is a myriad of powerful use cases. For example, the AI helps GoodEggs monitor warehouses to make sure food is fresh. Then there is the case of Slack, which uses the technology to remove friction in dealing with the incident response process.

For PagerDuty, the result has been durable growth on the top line, with revenues jumping 48% during the past year. The company also has a 139% net retention rate and counts 33% of the Fortune 500 companies as customers. Yet PagerDuty is still in the nascent stages of the opportunity. Note that the company estimates the total addressable market at over $25 billion, which is based on an estimated 85 million users.

Data + AI

But again, when looking at the IPO, it’s really about the data mixed with AI models. This is a powerful combination and should allow for strong barriers to entry that will be difficult for rivals to replicate. There is a virtuous cycle as the systems get smarter and smarter.

Granted, there are certainly risk factors. If the AI fails to effectively detect some of the threats or gives off false positives, then PagerDuty’s business would likely be greatly impacted. But so far, it seems that the company has been able to build a robust infrastructure.

Now the PagerDuty IPO, which will likely hit the market in the next couple of weeks, will be just one of many AI-related offerings to come. Basically, get ready for a lot more, and fast.

Tom serves on the advisory boards of tech startups and can be reached at his site.


CoParenter helps divorced parents settle disputes using AI and human mediation 

A former judge and family law educator has teamed up with tech entrepreneurs to launch an app they hope will help divorced parents better manage their co-parenting disputes, communications, shared calendar and other decisions within a single platform. The app, called coParenter, aims to be more comprehensive than its competitors, while also leveraging a combination of AI technology and on-demand human interaction to help co-parents navigate high-conflict situations.

The idea for coParenter emerged from the personal experiences of its co-founders, Hon. Sherrill A. Ellsworth and entrepreneur Jonathan Verk. Ellsworth had been a presiding judge of the Superior Court in Riverside County, California for 20 years and a family law educator for 10. During this time, she saw firsthand how families were destroyed by today’s legal system.

“I witnessed countless families torn apart as they slogged through the family law system. I saw how families would battle over the simplest of disagreements, like where their child will go to school, what doctor they should see and what their diet should be: all matters that belong at home, not in a courtroom,” she says.

Ellsworth also notes that 80 percent of the disagreements presented in the courtroom didn’t even require legal intervention, yet most of the cases she presided over involved parents asking the judge to make the co-parenting decision. As she came to the end of her career, she began to realize the legal system just wasn’t built for these sorts of situations.

She then met Verk, previously EVP of Strategic Partnerships at Shazam and now coParenter’s CEO. Verk had just divorced and had an idea about how technology could help make the co-parenting process easier. He already had on board his longtime friend, serial entrepreneur Eric Weiss, now COO, to help build the system. But he needed someone with legal expertise. That’s how coParenter was born.

The app, also built by CTO Niels Hansen, today exists alongside a whole host of other tools built for different aspects of the co-parenting process. That includes apps designed to document communication, like OurFamilyWizard, Talking Parents, AppClose and Divvito Messenger; apps for sharing calendars, like Custody Connection, Custody X Change and Alimentor; and apps that offer a combination of features, like WeParent, 2houses, SmartCoparent and Fayr, among others.

But the team at coParenter argues that their app covers all aspects of co-parenting, including communication, documentation, calendar and schedule sharing, location-based tools for pickup and drop-off logging, expense tracking and reimbursements, schedule change requests, and tools for making decisions on day-to-day parenting choices like haircuts, diet, allowance and use of media. Notably, coParenter also offers a “solo mode,” meaning you can use the app even if the other co-parent refuses to do the same. This is a key feature that many rival apps lack.

However, the biggest differentiator is how coParenter puts a mediator of sorts in your pocket. The app begins by using AI, machine learning and sentiment-analysis technology to keep conversations civil. The tech will jump in to flag curse words, inflammatory phrases and offensive names to keep a heated conversation from escalating, much like a human mediator would do when trying to calm two warring parties.
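
The behavior described here amounts to screening each outgoing message and interrupting the sender before an inflammatory one goes out. Below is a hedged sketch of that idea; the word list, scoring rule and function names are invented for illustration, not coParenter’s actual model (which, per the company, uses machine learning and sentiment analysis rather than a fixed list).

```python
# Toy message screen: flag inflammatory language before a message is sent.
# A production system would use a trained sentiment model, not a word list.
INFLAMMATORY = {"idiot", "liar", "hate", "stupid", "worthless"}

def flag_terms(message: str) -> list[str]:
    """Return any terms in the message that match the watch list."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return sorted(words & INFLAMMATORY)

def screen(message: str) -> str:
    flagged = flag_terms(message)
    if flagged:
        # Mirror the app's behavior: pause and ask before sending.
        return f"Are you sure you want to send this? Flagged: {', '.join(flagged)}"
    return "Message sent."

print(screen("You were late for pickup again."))
print(screen("You are such a liar, I hate dealing with you!"))
```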

When conversations take a bad turn, the app will pop up a warning message that asks the parent if they’re sure they want to use that term, allowing them time to pause and think. If only social media platforms had built features like this!

When parents need more assistance, they can opt to use the app instead of turning to lawyers. The company offers on-demand access to professionals as either a monthly subscription ($12.99 per month for 20 credits, enough for two mediations) or a yearly one ($119.99 per year for 240 credits). Both parents can subscribe together for $199.99 per year, with each receiving 240 credits. “Comparatively, an average hour with a lawyer costs between $250 and upwards of $500, just to file a single motion,” Ellsworth says.

These professionals are not mediators, but are licensed in their respective fields: typically family law attorneys, therapists, social workers or other retired bench officers with strong conflict-resolution backgrounds. Ellsworth oversees the professionals to ensure they have the proper guidance. All communication between the parent and the professional is considered confidential and not subject to admission as evidence, as the goal is to stay out of the courts. However, all the history and documentation elsewhere in the app can be used in court, if the parents do end up there.

The app has been in beta for nearly a year and officially launched this January. To date, coParenter claims it has already helped resolve more than 4,000 disputes, and more than 2,000 co-parents have used it for scheduling. Indeed, 81 percent of the disputing parents resolved all their issues in the app without needing a professional mediator or legal professional, the company says.

CoParenter is available on both iOS and Android.


ProBeat: AI and quantum computing continue to collide 

Depending on who you ask, quantum computing is here, not here, and both. A couple of things this week reminded me that it doesn’t really matter whether you believe quantum-mechanical phenomena are going to change everything; the mere research into the field is already impacting technology across the board.

Binary digits (bits) are the basic units of information in classical computing, while quantum bits (qubits) make up quantum computing. Bits are always in a state of 0 or 1, while qubits can be in a state of 0, 1, or a superposition of the two. Quantum computing leverages qubits to perform computations that would be much more difficult for a classical computer. But today’s physical quantum computers are very noisy, and there are still no commercially useful algorithms published for them. In short, a true quantum computer is still years, if not decades, away. When has that ever stopped researchers?

Last month, Mobileye cofounder Amnon Shashua and a team from Hebrew University in Israel published a paper in Physical Review Letters titled “Quantum Entanglement in Deep Learning Architectures.” (Intel acquired the computer vision firm Mobileye for $15.3 billion in March 2017.) The paper argues that the latest advancements in deep neural networks could help physicists better understand the quantum behavior of nature. This week, Shashua discussed his computer science research group’s findings at the Science of Deep Learning conference in Washington, DC. He declared that they had mathematically proven that AI can help us understand quantum physics phenomena. It’s a question of when, not if.

That’s the argument for AI helping quantum physics. Now let’s go the other way. Also this week, IBM Research, MIT and Oxford scientists published a paper in Nature titled “Supervised learning with quantum-enhanced feature spaces.” The paper describes how, as quantum computers become more powerful, they will be able to perform feature mapping on highly complex data structures that classical computers cannot. Feature mapping is a component of machine learning that disassembles data into non-redundant “features.” The authors argue they can use quantum computers to create new classifiers that generate more sophisticated data maps. Researchers would then be able to develop more effective AI that can, for example, identify patterns in data that are invisible to classical computers.

IBM did more than just publish a paper, though. The company offered the feature-mapping algorithms to IBM Q Experience users and IBM Q Network organizations through Qiskit Aqua, its quantum information science kit. The company even provided an online demo.
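
Feature mapping is easier to grasp with a classical toy example: data that no straight line can separate in its raw form often becomes linearly separable once mapped into a higher-dimensional feature space. The IBM paper builds such maps with quantum circuits; the sketch below is a purely classical illustration with an invented feature map, not the paper’s method.

```python
import numpy as np

# XOR-style data: no line through the plane separates the two classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

def feature_map(x):
    """Invented map to 3-D: append the product of the two coordinates."""
    return np.array([x[0], x[1], x[0] * x[1]])

Z = np.apply_along_axis(feature_map, 1, X)

# In the mapped 3-D space, a single plane now separates the classes,
# so a simple linear rule classifies all four points correctly.
w, b = np.array([1.0, 1.0, -2.0]), -0.5   # hand-picked separating plane
pred = (Z @ w + b > 0).astype(int)
print("predictions:", pred, "labels:", y)  # predictions match the labels
```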

Neither of these papers necessarily means that AI will solve our quantum problems or that machine learning will benefit from quantum advancements. The point at which quantum computers surpass classical computers is still out of reach. What did become increasingly clear this week, however, is that the two fields are on a collision course.

ProBeat is a column in which Emil rants about whatever crosses him that week.

Facebook used pictures of weed and broccoli to showcase the breakneck speed of its AI in spotting harmful content 

Facebook’s CTO Mike Schroepfer showed off the company’s AI software by making it distinguish between cannabis and broccoli. Fortune journalist Michal Lev-Ram spoke to Schroepfer as part of a lengthy piece about Facebook’s attempts to move on after a year riddled with scandal.

Schroepfer showed Lev-Ram two photographs and asked her to determine which was broccoli and which was weed. According to Lev-Ram, it was not obvious which was which. “Both pictures looked convincingly cannabis-like: dense, leafy-green buds that are coated with miniature, hair-like growths, or perhaps mold,” she wrote.

Lev-Ram eventually guessed correctly, but the challenge was Schroepfer’s way of talking up Facebook’s AI tools for spotting harmful content. He told Lev-Ram that the AI is more accurate than humans, and that Facebook’s system had concluded which picture was marijuana and which was broccoli with 93.77% and 88.39% certainty, respectively. He also said it was way faster than a human: Lev-Ram had taken more than a second to guess, while the system can do so in “hundredths of milliseconds, billions of times a day.”

Facebook CTO Mike Schroepfer. Greg Sandoval/Business Insider
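
Certainty figures like the 93.77% Schroepfer quoted are typically the output of a softmax layer, which turns a classifier’s raw scores into probabilities over the candidate labels. A generic sketch, with made-up scores and labels (this is not Facebook’s model):

```python
import numpy as np

def softmax(scores):
    """Convert raw classifier scores (logits) into probabilities."""
    exps = np.exp(scores - np.max(scores))  # subtract max for numerical stability
    return exps / exps.sum()

labels = ["marijuana", "broccoli", "other"]
logits = np.array([4.1, 1.2, 0.3])  # made-up raw scores for one image

probs = softmax(logits)
best = int(np.argmax(probs))
print(f"prediction: {labels[best]} ({probs[best]:.2%} certainty)")
```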

Since critics and politicians have been calling out Facebook for the proliferation of harmful content on its platform, the social network has been keen to point to the AI tools it’s developing to help combat problems as diverse as illegal drugs, suicide, and hate speech.

Facebook’s automated systems are not foolproof, though. In July, its systems automatically picked up and blocked a post containing a line from the US Declaration of Independence, because the section in question contained racist language referring to “Indian Savages.”
