Engineering Spotlight: Robotics and Computer Vision with Rich LeGrand, President of Charmed Labs 

Learn about roboticist Rich LeGrand's work with Charmed Labs and the CMU CREATE Lab, what makes robotics so cool, and his thoughts on what makes being an engineer so exciting in today's day and age.

Rich LeGrand is President of Charmed Labs, a company that strives to make advanced technology available to a wider audience through low cost and ease of use. This past summer, Charmed Labs worked with Carnegie Mellon University's CREATE Lab to release the Pixy 2 camera, an "all-in-one" computer vision device. The Pixy 2 is the second generation of the Pixy cam and, ultimately, the fifth generation of the CMU Cam. Image courtesy of Rich LeGrand.

LeGrand's passion for robotics began at a young age. AAC's Chantelle Dubois had a chance to speak to LeGrand about his inspiration to become a roboticist and his work with Charmed Labs and the CMU CREATE Lab.

AAC: Tell me a bit about yourself: What did you study and where did you go to school?

RL: I’ve always thought of myself as a robotics person. I’ve always been attracted to the problem of robotics, how to make autonomous systems, that kind of thing, and that’s the basis of my background. I went to school and studied computer science and electrical engineering with robotics in mind. Eighth grade is about the time I really became interested in robotics, and I’ve been trying to build my skill set ever since. I did my undergrad at Rice University in Houston and went on to grad school at North Carolina State, and that’s where I really focused on robotics.

AAC: What was it about robotics that piqued your interest?

RL: Good question. I’ve often wondered that.
If you have something that you really enjoy, it’s hard to figure out what it is about that thing that makes it so attractive. I’ve always been interested in computers and mechanical systems, and robotics just, I don’t know, when I first was exposed to it, it just really clicked. I thought, oh wow, this is amazing, I can’t imagine anything cooler than that.

I’ll tell you a quick story: Back in 1982, one of my brother’s friends, who was about five years older than me, was always doing really strange things with electricity. Back then, you would call kids who were good with computers whiz kids, so he was the quintessential whiz kid from the early 80s. One thing that was cool was that whenever I showed up at his house he would show me what he was working on, and I really appreciated that. One day, I went over there and he had taken one of those Big Trak toys, a popular toy with tank-like treads that you could program, taken it apart, and hooked it up to his Atari 800's joystick port. He would move his joystick and the Big Trak would move in the direction he indicated. Relays were clicking, lights were flashing, and after it was all done he would hit the return key and it would repeat what he had just done. I thought, "Whoa, that is crazy. I never knew a computer could do that." The 1979 Big Trak. Image courtesy of Toys You Had. That was a moment of inspiration, and it was something I wanted to learn more about.

AAC: How did you get involved with the Pixy team?

RL: I got involved through Carnegie Mellon University, where I’ve been on a couple of projects with the CREATE Lab at CMU, including a project called GigaPan and the Telepresence Robotics Kit (TeRK). Image courtesy of GigaPan. Illah Nourbakhsh, the leader of the lab, has all these great ideas for getting technology in front of different audiences, usually with an educational spin. Charmed Labs has been the device design house for his lab.
The CMU Cam goes back to its first release around 2000-2001, and the Pixy cam is actually CMU Cam number five, because it’s gone through several revisions over the years. It’s now more capable and less expensive. I became involved on a bit of a lark; I had found this processor in the tech news and thought it would make a good processor for a low-cost camera, so I sent it off to Illah. I guess at that particular time the CMU Cam was in need of an update, so he asked if I wanted to work on the next version. I didn’t really have anything in mind, but it sounded like fun. That’s how I got into Pixy. It was a different project for me. We had a crowdfunding campaign that launched in late 2013, and it was successful, so I’ve been keeping the project up to date. This spring we came out with Pixy 2, and it’s followed the trend of becoming smaller, lower cost, and better performing. It’s been a fun project.

AAC: What has been the most interesting challenge while developing the Pixy cam?

RL: I think, for me, it’s always how do you find a way for it to be used by the largest number of people? I found that the way you can get it in front of the most people is cost, which is a huge driver. In addition to cost, there's ease of use: how much knowledge do you need to get this thing working with whatever you want to use it with? Those two problems, cost and ease of use, can be really challenging from the engineering and product design perspective. I would say those two things are definitely the most challenging parts of Pixy.

AAC: What have been some of the most interesting applications of the Pixy cam?

RL: There’s a gentleman in Georgia who started his own Kickstarter that uses Pixy as a way to aid in drone navigation.
Microsoft used Pixy in one of their Windows 10 demonstrations, where Pixy helps a computer play air hockey. There’s also Ben Heck, whose YouTube show did a couple of episodes featuring Pixy. It’s been fun; all these projects are unique and they definitely have a fun factor.

AAC: What do you think has been the most interesting development in robotics over the past few years?

RL: There are so many different areas of robotics. I’ve always been in a robotics niche that I call educational robotics: using robots to teach science, engineering, and technology concepts. That’s my focus, and I think it’s fun to interact with that audience. There’s a lot of overlap with the Maker movement. I would say the most exciting thing for me is this explosion of the Maker movement, where people are being inspired to make stuff. The Maker movement is similar to robotics in that it has so many facets; it’s so huge no one can really define its boundaries. I think it’s cool that a lot more people are paying attention to making things and, in my field in particular, making cool robots.

AAC: Speaking of your specialty in educational robotics, what do you think is the biggest misconception when it comes to learning about robotics?

RL: I would say it’s really easy to underestimate the complexity of a robotics problem, because we, as humans, find the everyday tasks that robots do to be really simple. A computer has no problem solving a 10th-order differential equation, which is a really hard problem for a person. But folding a towel is a really hard problem for a robot, and it’s really easy to fall into the trap of thinking it’s an easy one. So when you try to design a robotic system to do that task, it can be disappointing or discouraging when it turns out to be a lot harder than you thought it would be. As humans, we perceive these things that robots are doing, or trying to do, as really easy to accomplish.
AAC: Perhaps you could walk us through how a roboticist might break down the towel-folding task piece by piece. Could you describe how you might approach that problem at a high level?

RL: The towel-folding problem, I should say, has been solved by a couple of companies. Willow Garage made a robot that can do this a couple of years ago, and it only took the robot about 30 minutes. It’s funny, because when I watch the video I think about how someone spent a lot of time breaking down the problem of folding a towel. Willow Garage's PR2 folding a towel in 2013. Screenshot used courtesy of Johan Voets. The technical challenges are interesting even if the actual task of folding a towel isn’t. What that person had to do was break the problem down into a bunch of really specific perceptual and actuator steps.

First, you need to find the towel and estimate its position and angle relative to the robot's manipulators. Then you’ve got to manipulate the towel, maybe finding a corner and picking it up. Even that first task, locating the corner of the towel and grasping it, is a challenging, graduate-level problem. Then there's the rest of the task: you’ve got to pick up another corner, hold both corners with one gripper, then move the other gripper over and find the place where you can hold the towel so that you can pull it tight into a nice rectangle. When you break that seemingly simple problem down into perceptual and actuation tasks, it looks really complex. It’s a really hard problem.

AAC: What's something that inspires you most about the work that you do?
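The decomposition LeGrand describes can be sketched in code. The sketch below is purely illustrative and is not Willow Garage's actual software; every function name is invented, and each one-line stub stands in for what is, in reality, a hard perception or actuation subproblem.

```python
# Illustrative breakdown of the towel-folding task into perceptual and
# actuation steps. All names are hypothetical; each stub hides a hard
# robotics subproblem (vision, pose estimation, grasp planning).

STEPS = []  # record of the executed subtasks

def locate_towel():
    # Perception: find the towel and estimate its pose relative to the grippers.
    STEPS.append("locate_towel")
    return {"position": (0.4, 0.1), "angle_deg": 12.0}

def find_and_grasp_corner(pose, gripper):
    # Perception + actuation: identify a corner and close a gripper on it.
    # This step alone is described above as a graduate-level problem.
    STEPS.append(f"grasp_corner_{gripper}")
    return f"corner_in_{gripper}"

def stretch_flat(left_corner, right_corner):
    # Pull both held corners apart until the towel hangs as a flat rectangle.
    STEPS.append("stretch_flat")

def fold_towel():
    pose = locate_towel()
    left = find_and_grasp_corner(pose, "left")
    right = find_and_grasp_corner(pose, "right")
    stretch_flat(left, right)
    STEPS.append("fold")

fold_towel()
print(STEPS)
```

Even this toy version makes the point of the interview: what reads as one verb ("fold") expands into a pipeline of distinct perception and manipulation stages.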
RL: One thing that’s cropped up for me lately: I think it’s an amazing time in engineering, based on the fact that you can buy a $35 computer that just 20 years ago would have cost several million dollars. Technology moves so fast, and as engineers sometimes it’s just fun to stop, look around, and think, wow, we’ve come a long way. The technological world is a lot more interesting than it was even a couple of years ago.

Another quick story: I had the privilege to program a Cray-2 back in the 80s. I had a poster of a Cray-2 on my bedroom wall. The Cray-2. Image courtesy of Cray Super Computers. Most kids would have a poster of a Lamborghini; I had a poster of a Cray-2. I remember being told the Cray-2 could do so many billion floating-point operations per second and thinking, wow, that is amazing. Recently, I looked into how a Raspberry Pi compares to a Cray-2. The Cray-2 can do two billion floating-point operations per second, which was pretty amazing back in the 80s. The Raspberry Pi can do six billion floating-point operations per second, which is pretty remarkable. The Cray-2 cost $40 million, and each Cray-2 (there were only a handful) had hundreds of scientists working on it. Today, we can buy a $35 Raspberry Pi and we might use it to control some LEDs. That’s what progress is. You’re not really concerned about wasting computing power, because it’s so available.

AAC: Thank you for sharing your thoughts with us, Rich!
DevOps: Principles, Practices, and the DevOps Engineer Role 

For a long time, development and operations were isolated modules. Developers wrote code; system administrators were responsible for its deployment and integration. As there was limited communication between these two silos, specialists worked mostly separately within a project. That was fine when Waterfall development dominated. But since Agile and continuous workflows have taken over the world of software development, this model is out of the game. Short sprints and frequent releases, occurring every two weeks or even every day, require a new approach and new team roles.

Today, DevOps is one of the most discussed software development approaches. It is applied at Netflix, Amazon, Etsy, and many other industry-leading companies. So, if you are considering embracing DevOps for the sake of better performance, business success, and competitiveness, the first step is to hire a DevOps engineer. But first, let’s look at what DevOps is all about and how it helps improve product delivery.

What Is DevOps?

DevOps stands for development and operations. It’s a practice that aims to merge development, quality assurance, and operations (deployment and integration) into a single, continuous set of processes. This methodology is a natural extension of the Agile and continuous delivery approaches. What DevOps looks like.

But DevOps isn’t merely a set of actions. It’s more of a culture, or even a philosophy, that fosters cross-functional team communication. One of the main benefits of DevOps is that it doesn’t require substantial technical changes, being oriented instead toward changing the way a team works. Teamwork is a crucial part of DevOps culture: the whole success of a process depends on it, and there are principles and practices that DevOps teams use.

DevOps Principles

In short, the main principles of DevOps are automation, continuous delivery, and fast reaction to feedback.
You can find a more detailed explanation of the DevOps pillars in the CAMS acronym:

Culture, represented by human communication, technical processes, and tools
Automation of processes
Measurement of KPIs
Sharing feedback, best practices, and knowledge

Adherence to these principles is achieved through a number of DevOps practices that include continuous delivery, frequent deployments, QA automation, validating ideas as early as possible, and in-team collaboration.

DevOps Model and Practices

DevOps requires a delivery cycle that comprises planning, development, testing, deployment, release, and monitoring, with active cooperation between the different members of a team. A DevOps lifecycle. Atlassian.

To break down the process even more, let’s have a look at the core practices that constitute DevOps:

Agile Planning

In contrast to traditional approaches to project management, Agile planning organizes work in short iterations (e.g., sprints) to increase the number of releases. This means that the team outlines only high-level objectives, while making detailed plans two iterations in advance. This allows for flexibility and pivots once the ideas are tested on an early product increment. Check our Agile infographics to learn more about the different methods applied.

Continuous Delivery and Automation

Continuous delivery, detailed in our dedicated article, is an approach that merges development, testing, and deployment operations into a streamlined process, as it heavily relies on automation.

Development. Engineers commit code in small chunks multiple times a day so that it can be easily tested.

Continuous automated testing and integration. A quality assurance team sets up committed-code testing using automation tools like Selenium, Ranorex, UFT, etc. If bugs and vulnerabilities are revealed, the code is sent back to the engineering team. This stage also entails version control to detect integration problems in advance.
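As a minimal sketch of what this automated-testing gate looks like in practice (browser tools like Selenium wrap the same idea for web UIs), here is a small unit-test suite of the kind a CI server runs against every commit. The `apply_discount` function and its rules are invented for the example; the point is only that the build passes or fails on the assertions.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical piece of business logic under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # A CI server (Jenkins, GitLab CI, etc.) runs these on every commit
    # and rejects the build if any assertion fails.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite the way a pipeline step would, reporting pass/fail.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
)
print("CI gate:", "pass" if result.wasSuccessful() else "fail")
```

In a real pipeline, the same commands run unattended: a failing assertion stops the merge, which is exactly how "bugs are sent back to the engineering team" is enforced.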
A Version Control System (VCS) allows developers to record changes in files and share them with other members of the team, regardless of their location. Code that passes the automated tests is integrated into a single, shared repository on a server. Frequent code submissions prevent so-called "integration hell," when the differences between individual code branches and the mainline code become so drastic over time that integration takes longer than the actual coding. The most popular tools for continuous integration are Jenkins, GitLab CI, Bamboo, and TeamCity.

Continuous deployment. At this stage, the code is deployed to run in production on a public server. Code must be deployed in a way that doesn’t affect already-functioning features and can be made available to a large number of users. Frequent deployment allows for a "fail fast" approach, meaning that new features are tested and verified early. There are various automated tools that help engineers deploy a product increment. The most popular are Chef, Puppet, Azure Resource Manager, and Google Cloud Deployment Manager.

Continuous monitoring. The final stage of the DevOps lifecycle is oriented toward assessing the whole cycle. The goal of monitoring is to detect the problematic areas of a process and analyze feedback from the team and users in order to report existing inaccuracies and improve the product’s functioning.

Infrastructure as Code

Infrastructure as code (IaC) is an infrastructure management approach that makes continuous delivery and DevOps possible. It entails using scripts to automatically set the deployment environment (networks, virtual machines, etc.) to the needed configuration, regardless of its initial state. Without IaC, engineers would have to treat each target environment individually, which becomes a tedious task as you may have many different environments for development, testing, and production use.
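The core IaC idea, declaring the desired environment once and converging any starting state onto it, can be illustrated with a toy script. Real tools such as Chef, Puppet, or Azure Resource Manager do this for machines and networks; the environment dictionaries and `apply` function below are stand-ins invented for the sketch.

```python
# Toy illustration of infrastructure as code: the desired environment is
# declared as data, and an idempotent "apply" step converges whatever the
# current state is onto that declaration.

DESIRED = {
    "web": {"instances": 2, "port": 8080},
    "db":  {"instances": 1, "port": 5432},
}

def apply(current, desired):
    """Converge `current` onto `desired`, regardless of its initial state."""
    actions = []
    for name, spec in desired.items():
        if current.get(name) != spec:            # missing or drifted component
            actions.append(f"configure {name} -> {spec}")
            current[name] = dict(spec)
    for name in list(current):
        if name not in desired:                  # component not declared: remove it
            actions.append(f"remove {name}")
            del current[name]
    return actions

# A drifted environment: one undersized service, one leftover component.
state = {"web": {"instances": 1, "port": 8080}, "legacy": {}}
print(apply(state, DESIRED))   # first run converges the environment
print(apply(state, DESIRED))   # second run finds nothing to do: idempotent
```

The second call returning no actions is the property that matters: the same script can be run against any number of development, testing, and production environments and leave them all consistent.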
Having the environment configured as code, you can (1) test it the way you test the source code itself and (2) use a virtual machine that behaves like a production environment to test early. Once the need to scale arises, the script can automatically set up the needed number of environments, all consistent with each other.

Containerization

The next evolutionary stage after virtual machines is containerization. Virtual machines emulate hardware behavior to share the computing resources of a physical machine, which enables running multiple application environments or operating systems (Linux and Windows Server) on a single physical server, or distributing an application across multiple physical machines. Containers, on the other hand, are more lightweight: they are packaged with all runtime components (files, libraries, etc.) but don’t include whole operating systems, only the minimum required resources. Containers are used within DevOps to instantly deploy applications across various environments, and they combine well with the IaC approach described above. A container can be tested as a unit before deployment. Currently, Docker provides the most popular container toolset.

Microservices

The microservice architectural approach entails building one application as a set of independent services that communicate with each other but are configured individually. Building an application this way, you can isolate any arising problems, ensuring that a failure in one service doesn’t break the rest of the application's functions. With a high rate of deployment, microservices allow for keeping the whole system stable while fixing problems in isolation. Learn more about microservices and modernizing legacy monolithic architectures in our article.

Cloud Infrastructure

Today most organizations use hybrid clouds, a combination of public and private ones. But the shift towards fully public clouds (i.e., those managed by an external provider such as AWS or Microsoft Azure) continues.
While cloud infrastructure isn’t a must for DevOps adoption, it provides flexibility, toolsets, and scalability to applications. With the recent introduction of serverless architectures in clouds, DevOps-driven teams can dramatically reduce their effort by essentially eliminating server-management operations. An important part of these processes is the set of automation tools that facilitate the workflow. Below we explain why and how this is done.

DevOps Tools

The main reason to implement DevOps is to improve the delivery pipeline and integration process by automating these activities. As a result, the product gets a shorter time to market. To achieve this automated release pipeline, the team should acquire specific tools instead of building them from scratch. Currently, existing DevOps tools cover almost all stages of continuous delivery, from continuous integration environments to containerization and deployment. While some processes are still automated with custom scripts, DevOps engineers mostly use various open source products. Let’s have a look at the most popular ones:

Jenkins is a tool to use either as a server for continuous integration or as a continuous delivery hub, and it comes with lots of plugins to tweak a continuous delivery workflow.

Selenium is an automated browser that allows QA teams to write scripts and test web products. It’s compatible with eight popular programming languages. Learn more about Selenium in our article on QA automation tools.

Git is a Version Control System with a repository for source code management that enables working online and offline.

Chef is a tool for infrastructure-as-code management that runs both on cloud and hardware servers. Another popular tool in this category is Ansible, which automates configuration management, cloud provisioning, and application deployment.

Docker is an instrument that helps with packaging code into self-contained units, i.e., containers.
Nagios is an infrastructure monitoring tool that presents analytics in visual reports. While a DevOps engineer (we’ll discuss this role in more detail below) must operate these tools, the rest of the team also uses them with a DevOps engineer’s facilitation.

A DevOps Engineer: Role and Responsibilities

In the book Effective DevOps by Ryn Daniels and Jennifer Davis, the existence of a dedicated DevOps person is questioned: “It doesn’t usually make much sense to have a director of DevOps or some other position that puts one person in charge of DevOps. DevOps is at its core a cultural movement, and its ideas and principles need to be used throughout entire organizations in order to be effective.”

Some other DevOps experts partly disagree with this statement. They also believe that a team is key to effectiveness, but in their interpretation, a team, including developers, a quality assurance leader, a code release manager, and an automation architect, works under the supervision of a DevOps engineer. So, the title of DevOps engineer is a debatable one. Nonetheless, DevOps engineers are still in demand on the IT labor market. Some consider this person to be either a system administrator who knows how to code or a developer with a system administrator's skills.

DevOps Engineer Responsibilities

In a way, both definitions are fair. The main function of a DevOps engineer is to introduce continuous delivery and continuous integration workflows, which requires an understanding of the tools mentioned above and knowledge of several programming languages.

Depending on the organization, job descriptions differ. Smaller businesses look for engineers with broader skill sets and responsibilities; for example, the job description may require building the product alongside the developers. Larger companies may look for an engineer for a specific stage of the DevOps lifecycle who will work with a certain automation tool. DevOps Engineer Role and Requirements.
The basic and widely accepted responsibilities of a DevOps engineer are:

Writing specifications and documentation for server-side features
Managing continuous deployment and continuous integration (CI/CD)
CI/CD script writing
Performance assessment and monitoring

Additionally, a DevOps engineer can be responsible for IT infrastructure maintenance and management, which comprises hardware, software, networks, storage, virtual and remote assets, and control over cloud data storage. Scheme of IT infrastructure management. Smartsheet. This expert participates in IT infrastructure building, works with automation platforms, and collaborates with developers, operations managers, and system administrators, facilitating the processes they are responsible for.

DevOps Engineer Skillset

While this title doesn’t require a candidate to be a system administrator or a developer, this person must have experience in both fields. When hiring a DevOps engineer, pay attention to the following characteristics:

Tech background. A DevOps engineer should hold a degree in computer science, engineering, or a related field, and have more than two years of work experience. This includes work as a developer, a system administrator, or a member of a DevOps-driven team; an understanding of all IT operations is equally important.

Automation tool experience. Knowledge of open source solutions for testing and deployment is a must for a DevOps engineer. If you use a cloud server, make sure that your candidate has experience with tools such as GitHub, Chef, Puppet, Jenkins, Ansible, Nagios, and Docker. A candidate for this job should also have experience with public clouds such as Amazon AWS, Microsoft Azure, and Google Cloud.

Programming skills. An engineer not only has to know off-the-shelf tools but also must have programming experience to cover scripting and coding.
Scripting skills usually entail knowledge of Bash or PowerShell, while coding skills may include Java, C#, C++, Python, PHP, Ruby, etc., or at least some of these languages.

Knowledge of database systems. At the deployment stage, an engineer works with data processing, which requires experience with both SQL and NoSQL database models.

Communication and interpersonal skills. Although a good candidate must be well-versed in the technical aspects, a DevOps expert must also have strong communication talents. He or she must ensure that the team functions effectively and receives and shares feedback to support continuous delivery. The outcome, a product, depends on his or her ability to communicate effectively with all team members.

When you hire a DevOps specialist, you need to define the main requirements and responsibilities that this person will bring to your team. Here are several components of a complete job posting:

Base the requirements for a candidate on the automation tools and programming languages you already use in development.
Define the technical knowledge and professional experience he or she must have to meet the requirements of the job.
Understand whether you need a DevOps specialist to work on a particular stage of the cycle, or whether he or she should be involved in every stage of the process, product development included.

And remember that the DevOps culture is about communication and collaboration, so find a candidate who can be a team player and a team leader at the same time.

The Benefits of DevOps and Thoughts on Hiring a DevOps Specialist

The core advantages of DevOps adoption cover the technical, business, and cultural aspects of development:

Speed and quality. DevOps speeds up product release by introducing continuous delivery, encouraging faster feedback, and allowing developers to fix bugs in the system in the early stages. Practicing DevOps, the team can focus on the quality of the product and automate a number of processes.

Business benefits.
With DevOps, a team can react to change requests from customers faster, adding new features and updating existing ones. As a result, time to market shortens and the rate of value delivery increases.

Better internal culture. DevOps principles and practices lead to better communication between team members and to increased productivity and agility. Teams that practice DevOps are considered to be more productive and cross-skilled. Members of a DevOps team, both those who develop and those who operate, act in concert.

While merely having a person with the DevOps engineer title doesn’t mean you’ll be immediately steeped in the practice, this hire can become the crucial first step towards it. A DevOps engineer is largely considered a leadership position. This person may help you build a cross-functional team that works in compliance with DevOps principles.

Originally published at the AltexSoft Tech Blog as "DevOps: Principles, Practices, and DevOps Engineer Role"
This company uses AI to test unlikely engineers for their potential 

In 2015, Tim Reed was a 24-year-old unemployed college dropout. Money troubles had forced him to put his electrical engineering degree at Morgan State University on hold, before a failing grade at the Community College of Baltimore County put another halt to his education plans. Reed had no choice but to move back into his parents' home. Needless to say, he needed to make some moves.

It was around this time that he came across a vague Craigslist advertisement that read "Get paid to become a software developer," with mention that 20 weeks of training would be provided at no cost to the jobseeker and a guaranteed job would be waiting upon completion. Reed thought the ad was a scam and moved on with his job search. That was until he saw the same ad on the job search site Indeed. He still wasn’t convinced. “It took me a week to fully, in my spirit, to say ‘just go for it,'” he says.

One day, when the house was empty and his siblings were at school, Reed completed the required assessment, which took nearly three hours. He says he thought he wouldn’t get an offer because he was stumped by a few calculus questions. What Reed didn’t know at the time was that the calculus questions were meant to induce anxiety, screening test-takers for cognitive agility and problem-solving ability under pressure. In short, while Reed was taking the assessment, the technology was watching how he was taking the test.

When the test isn’t about the questions

Exactly what is being measured here? Keystrokes, weighed against the thousands of variables that the AI and predictive analytics system will later use to produce a probability score for potential. For instance, to test for grit, the technology watches how an individual reacts in stressful situations. Do they open multiple browsers when they’re stuck on a problem, which signals that they know where to go even if they don’t initially know the answer?
Or, when the test-taker is provided more information in a later question, do they go back to an earlier question to change their previous answer? The company says that if they do, this shows cognitive agility. The purpose of the assessment is to identify potentially high-performing engineers who are often overlooked for opportunities because of cultural markers and outdated hiring practices.

An “onshoring” model to improve chances of upward mobility

Catalyte is the company behind the proprietary artificial intelligence and predictive analytics. The idea was planted in Michael Rosenbaum’s head almost two decades ago, while he was an advisor in the Clinton White House. At the time, the Clinton administration was encouraging the private sector to grow in low-income communities to tap unrealized market potential. Rosenbaum, a Harvard economics and law fellow, argued that the bigger opportunity was that underserved urban populations held untapped labor markets. In the late 1990s, manufacturing jobs were going overseas, yet the internet boom meant there was an influx of opportunities for people who could do simple HTML coding. Rosenbaum had an idea: What if we could move people from the side of the economy that’s contracting to the side that’s growing? That is, the tech industry, particularly high-demand software engineering jobs. By 2020, the U.S. Department of Labor projects, there will be 1.4 million job openings for computer and technology specialists, yet only enough graduates to fill 29% of those openings.

These realities, along with the recent push to outsource domestically so that software teams are closely tied to the business and its customers, resulted in an “onshoring” model that is quickly displacing the long-standing IT services outsourcing model. Rosenbaum believes that enabling people in this way can change the trajectory of entire families through the transformative opportunity of upward socioeconomic mobility.
If there were a way to test for potential, then someone who grew up amid generations of poverty could break the chains and make a six-figure salary within a handful of years (the national average salary for a software engineer is $115,462, according to Glassdoor). If the model is successful, the wage gap narrows, and desolate post-manufacturing cities, forgotten after their industries collapsed, can start rebuilding themselves.

Fast-forward 18 years from Catalyte’s inception in 2000, and the company now has development centers in Baltimore, Portland, and Chicago. Catalyte calls it a hyperlocal model, meaning the workforce in each development center reflects the demographics of the area. For example, the Baltimore metro area is 28% African-American, and the development center in that city employs 26% African-American engineers. According to the company, its model is scalable, and it plans to have 20 centers up and running by 2020, including Denver and Dallas centers projected to be open for business by the end of the year. “One of our biggest missions is improving social mobility, so we look at places that have big socioeconomic inequalities,” says CEO Jacob Hsu. Other factors include metro areas where there’s a large pool of local talent and “a big anchor client” that commits to a number of people it wants to bring on locally.

Ending resume pedigree with predictive analytics

About a month after Reed completed Catalyte’s assessment, he was asked to come in for an interview at the development center located in Baltimore’s Federal Reserve Building. Next, Reed was invited to complete 20 weeks of in-person training covering the ins and outs of coding required of a full-stack enterprise software developer, while also being groomed in the soft skills needed to be a successful consultant for Catalyte’s Fortune 500 clients, including Under Armour, Aetna, Microsoft, and AT&T.
As the ad promised, there is no cost for the training bootcamp, but full-time attendance, Monday through Friday, 9 a.m. to 4 p.m., is required. A stipend is offered only in the last six weeks of the program. Needless to say, additional financial support is needed for aspiring developers to complete Catalyte’s program, which may be one of the reasons why 25% of the people who start do not finish. The vast majority drop out early on. Reed worked odd jobs in recycling and truck driving to make ends meet and graduated alongside 12 of the 20 people he started the program with. Upon completion of the program, graduates receive a salaried full-time offer from Catalyte with benefits, health insurance, and an annual stipend. The condition is that they stay with the company for two years and, to cover training costs, salaries during this time are around $40,000, less than half the salary of an entry-level software engineer. If the individual breaks the two-year contract, there is a monetary penalty, which according to one Glassdoor review from February 2018 and another from September 2014 is $25,000. If hires decide to stay with Catalyte after completing their two-year contract, a market adjustment is made in the third year, taking into account factors like individual skill sets and performance in client work. At this point, “most people go from right around $40,000 to the average $75,000 salary,” says Paige Cox Lisk, Catalyte’s chief people officer. According to Rosenbaum, not everyone can be a great software developer, but great software developers can come from anywhere. Graduates of the program come from a spectrum of industries: construction workers, baristas, even PhD and MBA holders looking for a career change. The youngest test-taker to pass was a 17-year-old high school senior; the oldest was a 72-year-old retired civil engineer.
At Catalyte, developers look like Reed, the first engineer in a family working mostly in skilled trades. Or Nastassia Pishchykava, who immigrated from Belarus and couldn’t get an entry-level engineering job, despite having the degree for it, because she needed experience first. Or Rob Winkler, who previously worked at the United Postal Service before leaving to make a career change. In Winkler’s third year at Catalyte, his salary had increased by 60% from when he started. Today, the company employs about 300 full-time people, with 60% coming from its training program. In the last three years, a lot has changed at Catalyte. The company survived its first major layoffs, which hit Reed in the middle of his first year with the company in 2016. “I was hurt, disdained, angry,” Reed says of the experience. “There was no real explanation behind the layoff except for performance…both the company’s and mine.” About six months later, Catalyte rehired most of the people it laid off, including Reed. Inside company walls, the layoff is referred to as “the Red Monday.” In January 2017, Silicon Valley executive Hsu joined the company as CEO. Shortly after, Catalyte was among the first investments for the Rise of the Rest Seed Fund raised by AOL cofounder Steve Case’s D.C. venture firm Revolution. The $150 million seed fund was created to invest in local companies outside the major tech havens, on the premise that without access to funding these communities are forgotten and sometimes left to die a slow and painful death; 75% of VC dollars in 2016 were invested in companies in only three states: California, Massachusetts, and New York. Earlier this year, Catalyte used some of the $27 million in funding it received to acquire Surge, a Seattle-based software development company that provides senior-level engineers, who will now work on a 1099 basis for Catalyte and handle specialized projects.
Recently, the company hired Tom Iler, a 20-plus-year veteran in IT, as its chief product officer. Iler was attracted to Catalyte’s model because he has long hired people for aptitude and attitude instead of skills and experience. In fact, he tells Fast Company this approach is his “secret to success” because “you can train them to do just about anything” if they have the right aptitude and attitude. Under Iler’s guidance, Catalyte’s assessment, aside from testing for an individual’s innate ability to grasp technical concepts and cognitive agility, will also measure an individual’s emotional quotient, or EQ. “Think about how powerful it would be if we could identify the personality traits in our most successful developers,” says Cox Lisk, “and weave it back into the assessment.” The company is currently working on an alumni study to track how the program has fared for all Catalytes, the 1,700 people who have graduated from the program, whether they stay or leave the company after completing their training and apprenticeship. In three to five years, if Catalyte is known as “the largest software development service company in the world,” Hsu says it would’ve failed: “We would’ve missed the boat because our ultimate ambition is to change the way we connect opportunity to potential. We want to kill pedigree. We’re proving it now with software developers. Our goal is to do this across industries so that we can make a much more diverse, egalitarian workforce, where everybody gets a shot if they can prove that they have the potential.” Aside from convincing financial decision-makers that success can come from anywhere and look like anybody, Catalyte’s model, if it scales successfully, could also prove this to those at the base of the economic pyramid. Reed, now an application developer who has since received his associate’s degree, paid for by Catalyte, says he has friends who have witnessed his success but are too afraid to take the plunge themselves.
When asked why, he says: “I think it’s the image of a software engineer. The positive is Bill Gates, Microsoft, and Google. The negative image is of a hacker. But both of those pictures are a form of higher intelligence…some super-smart people do that. My friends can’t imagine themselves being those people.” Correction: A previous version of this article misstated the number of participants who complete the Catalyte program. 
USC scientists find a way to enhance quantum computer performance 

USC scientists have demonstrated a theoretical method to enhance the performance of quantum computers, an important step toward scaling a technology with the potential to solve some of society’s biggest challenges. The method addresses a weakness that bedevils the performance of the next-generation computers by suppressing erroneous calculations while increasing the fidelity of results, a critical step before the machines can outperform classical computers as intended. Called “dynamical decoupling,” it worked on two quantum computers, proved easier and more reliable than other remedies, and could be accessed via the cloud, a first for dynamical decoupling. The technique administers staccato bursts of tiny, focused energy pulses to offset ambient disturbances that muck up sensitive computations. The researchers report they were able to sustain a quantum state up to three times longer than would otherwise occur in an uncontrolled state. “This is a step forward,” said Daniel Lidar, a professor of electrical engineering, chemistry and physics at USC and director of the USC Center for Quantum Information Science and Technology (CQIST). “Without error suppression, there’s no way quantum computing can overtake classical computing.” The results were published in the journal Physical Review Letters. Lidar is the Viterbi Professor of Engineering at USC and corresponding author of the study; he led a team of researchers at CQIST, a collaboration between the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences. IBM and Bay Area startup Rigetti Computing provided cloud access to their quantum computers.

Quantum computers are fast but fragile

Quantum computers have the potential to render today’s supercomputers obsolete and propel breakthroughs in medicine, finance and defense capabilities. They harness the speed and behavior of atoms, which function radically differently than silicon computer chips, to perform seemingly impossible calculations.
Quantum computing has the potential to optimize new drug therapies, models for climate change and designs for new machines. It could achieve faster delivery of products, lower costs for manufactured goods and more efficient transportation. Quantum computers are powered by qubits, the subatomic workhorses and building blocks of quantum computing. But qubits are as temperamental as high-performance race cars: fast and high-tech, but prone to error and in need of stability to sustain a computation. When they don’t operate correctly, they produce poor results, which limits their capabilities relative to traditional computers. Scientists worldwide have yet to achieve a “quantum advantage” – the point where a quantum computer outperforms a conventional computer on any task. The problem is “noise,” a catch-all descriptor for perturbations such as sound, temperature and vibration. Noise can destabilize qubits, creating “decoherence,” a breakdown of the quantum state that reduces the time a quantum computer can perform a task while achieving accurate results. “Noise and decoherence have a large impact and ruin computations, and a quantum computer with too much noise is useless,” Lidar explained. “But if you can knock down the problems associated with noise, then you start to approach the point where quantum computers become more useful than classical computers.”

USC research spans multiple quantum computing platforms

USC is the only university in the world with a quantum computer; its 1,098-qubit D-Wave quantum annealer specializes in solving optimization problems. Part of the USC-Lockheed Martin Center for Quantum Computing, it’s located at USC’s Information Sciences Institute.
However, the latest research findings were achieved not on the D-Wave machine, but on smaller-scale, general-purpose quantum computers: IBM’s 16-qubit QX5 and Rigetti’s 19-qubit Acorn. (Photo caption: Quantum computer chips must be encased in a cryogenic environment. Photo: Justin Fantl, Rigetti Computing.) To achieve dynamical decoupling (DD), the researchers bathed the superconducting qubits in tightly focused, timed pulses of minute electromagnetic energy. By manipulating the pulses, the scientists were able to envelop the qubits in a microenvironment sequestered – or decoupled – from the surrounding ambient noise, thus perpetuating the quantum state. “We tried a simple mechanism to reduce error in the machines that turned out to be effective,” said Bibek Pokharel, an electrical engineering doctoral student at USC Viterbi and first author of the study. The time sequences for the experiments were exceedingly small, with up to 200 pulses spanning up to 600 nanoseconds. (A nanosecond, one-billionth of a second, is about how long it takes light to travel one foot.) For the IBM quantum computers, final fidelity improved threefold, from 28.9 percent to 88.4 percent. For the Rigetti quantum computer, the improvement was more modest, from 59.8 percent to 77.1 percent, according to the study. The scientists tested how long the fidelity improvement could be sustained and found that more pulses always improved matters for the Rigetti computer, while there was a limit of about 100 pulses for the IBM computer. Overall, the findings show the DD method works better than other quantum error correction methods that have been attempted so far, Lidar said.
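The intuition behind dynamical decoupling can be illustrated with a toy numerical model. This is my own sketch, not the study's code or data: each simulated run of a qubit sees a random but static frequency detuning (a crude stand-in for slow ambient noise), and ideal, instantaneous pi-pulses at equal intervals flip the sign of the phase the qubit accumulates, so that for an even number of free-evolution segments the noise contributions cancel.

```python
import numpy as np

rng = np.random.default_rng(0)

def coherence(total_time, n_pulses, shots=5000):
    """Ensemble-averaged coherence <cos(phase)> of a toy qubit.

    Each shot draws a random but static frequency detuning. Ideal
    pi-pulses at equal intervals flip the sign of the accumulated
    phase, so with an even number of free-evolution segments the
    segment contributions cancel exactly.
    """
    delta = rng.normal(0.0, 1.0, shots)        # per-shot detuning (noise)
    n_segments = n_pulses + 1
    seg = total_time / n_segments              # time between pulses
    signs = (-1.0) ** np.arange(n_segments)    # sign flips from pi-pulses
    phase = delta * seg * signs.sum()          # net accumulated phase
    return float(np.mean(np.cos(phase)))

free = coherence(5.0, n_pulses=0)  # no pulses: coherence collapses toward 0
dd   = coherence(5.0, n_pulses=3)  # static noise cancels: coherence stays near 1
```

In this idealized limit the pulses cancel static noise completely; on real hardware the pulses themselves are imperfect and the noise fluctuates within a sequence, which is why the reported fidelities improve substantially but remain below 100 percent.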
“To the best of our knowledge,” the researchers wrote, “this amounts to the first unequivocal demonstration of successful decoherence mitigation in cloud-based superconducting qubit platforms … we expect that the lessons drawn will have wide applicability.”

High stakes in the race for quantum supremacy

The quest for quantum computing supremacy is a geopolitical priority for Europe, China, Canada, Australia and the United States. The advantage gained by acquiring the first computer that renders all other computers obsolete would be enormous, bestowing economic, military and public health benefits on the winner. Congress is considering two new bills to establish the United States as a leader in quantum computing. In September, the House of Representatives passed the National Quantum Initiative Act to allocate $1.3 billion over five years to spur research and development. It would create a National Quantum Coordination Office in the White House to supervise research nationwide. A separate bill, the Quantum Computing Research Act by Sen. Kamala Harris, D-Calif., directs the Department of Defense to lead a quantum computing effort. “Quantum computing is the next technological frontier that will change the world and we cannot afford to fall behind,” Harris said in prepared remarks. “It could create jobs for the next generation, cure diseases and above all else make our nation stronger and safer. … Without adequate research and coordination in quantum computing, we risk falling behind our global competition in the cyberspace race, which leaves us vulnerable to attacks from our adversaries,” she said. The study’s authors include Lidar and three graduate students — Pokharel, Namit Anand and Benjamin Fortman — who co-authored the paper. The project grew out of a course on quantum error correction taught by Lidar. The research was supported in part by Oracle Labs, part of Oracle America, Inc., and the Lockheed Martin Corp.
Tech Ranks Among Top Universities Globally in Computer Science, Engineering 

Campus and Community
November 29, 2018 • Atlanta, GA

Georgia Tech has made the Top 10 in global rankings in the key subject areas of computer science and engineering. In the latest Times Higher Education subject rankings for computer science, the Institute is ranked seventh internationally, making it the top American public university on the list. Ranked just behind Stanford University, MIT, and the United Kingdom’s University of Oxford and University of Cambridge, its computer science program landed ahead of both Harvard and Princeton. In the engineering rankings, Georgia Tech also appeared in the Top 10 and was again the top-rated American public university. The London-based Times Higher Education has been providing ranked data on international universities since 2004. Its 2018-19 research includes more than 1,250 higher education institutions from 86 nations.

Georgia Tech’s full rankings:
Computer Science: 7
Engineering: 10
Physical Science: 44
Business and Economics: 51
Social Sciences: 72
Psychology: 100
Project 5-100 universities climb the ranks in computer science, engineering and technology 

ITMO University – a Project 5-100 participant – has improved its position among the world's top 100 universities in computer science. Times Higher Education (THE) has released its World University Rankings by subject for Computer Science and for Engineering and Technology. Russia has 16 universities listed in the Computer Science table, 11 of them Project 5-100 participants. ITMO University has risen five places to joint 71st, which puts it on the podium in the regional table and strengthens its position in the top 100. The Moscow Institute of Physics and Technology (MIPT, 101-125 band) – another Project 5-100 university – has been ranked among the world's 125 best universities in this subject. The National Research Nuclear University MEPhI (201-250 band) held its position among the top 250 universities in the world for a second year in a row. Eight Project 5-100 universities have entered the rankings for the first time: NSU, HSE, Samara University, ETU “LETI”, SPbPU, TSU, TPU, and UrFU. Lomonosov Moscow State University (MSU) has been ranked 78th, featuring among the 100 best universities in Computer Science. In the THE World University Rankings by subject for Engineering and Technology, Russia is represented by 28 universities, 15 of them Project 5-100 participants. The Project's leader in this ranking is TPU (201-250 band). It is followed by MIPT, TSU and ITMO University, which have landed in the 301-400 band. The Project's third-best performers are MEPhI, Samara University and SPbPU (401-500 band). HSE, ETU “LETI”, FEFU, Lobachevsky University, SIBFU and UrFU have been listed in the THE subject rankings for engineering and technology for the first time. MSU (151-157 band) is Russia's leader in this subject ranking.
Overall, Times Higher Education has published 11 subject rankings: Social Sciences; Business and Economics; Education; Law; Arts and Humanities; Life Sciences; Physical Sciences; Psychology; Clinical, Pre-Clinical and Health; Engineering and Technology; and Computer Science. Eight of the lists include more than 500 universities for the first time. Project 5-100 universities have shown positive momentum in the subject rankings published earlier this year by THE. KFU is Russia's leader in the Times Higher Education World University Rankings table for education, joining the ranking in the 101-125 band, which makes it the highest-ranked Russian newcomer in the table. HSE has become the first Russian university to feature among the top 150 universities for the social sciences. HSE also shares the 101-125 band with MSU in the Business and Economics table this year. Two Project 5-100 institutions are in the global top 100 for physical sciences: MIPT has retained a top-50 position in this subject and MEPhI has risen to 78th, while MSU has reached joint 96th place. The subject tables employ the same range of 13 performance indicators used in the overall World University Rankings, brought together with scores provided under five categories. However, the methodology is carefully recalibrated for each subject, with the weightings changed to suit the individual fields.

