
Customizing Your Keyboard and Mouse

Drawing 3D models in SketchUp requires a lot of back and forth between your keyboard and mouse. As you become a more experienced SketchUp modeler, you develop a sense of what commands and tools you use most often and what you do and don’t like about the default keyboard and mouse settings.

Tip: Keyboard shortcuts are one of the most flexible ways you can tailor SketchUp to your unique modeling quirks and desires. If you’ve ever wished you could open a specific feature with a single keystroke, get ready to fall in love with the Shortcuts preferences panel. It’ll be one of the easiest relationships you’ve ever had.

Because SketchUp relies so heavily on mouse and keystroke combinations already, the mouse customizations aren’t quite as flexible as the keyboard shortcuts. However, you can change the scroll wheel zooming and the way the mouse and Line tool interact. The following sections explain all the details.

Creating keyboard shortcuts

In SketchUp, you can assign keyboard shortcuts to the commands you use most often, so that the commands are literally at your fingertips.

For the most part, you can customize the keyboard shortcuts however you like, but here are a few guidelines to help you understand what you can and can’t do as you assign shortcuts:

    • You can’t start a shortcut with a number because that would conflict with the functionality of SketchUp’s Measurements box, and you can’t use a few other reserved keys.
    • You can add modifier keys, such as the Shift key.
    • You can’t use shortcuts that your operating system has reserved. If a shortcut is unavailable, SketchUp lets you know.
    • You can reassign a keyboard shortcut that already exists in SketchUp. For example, by default, the O key is the shortcut for the Orbit tool, but you can reassign the O key to the Open command if you like.

To create your own keyboard shortcuts, follow these steps:

    1. Select Window > Preferences.
    2. In the Preferences dialog box that appears, select Shortcuts in the sidebar on the left.
    3. In the Function list box, select the command to which you want to assign a keyboard shortcut. If your selection already has a keyboard shortcut assigned to it, that shortcut appears in the Assigned box.

Tip: When you type all or part of a command’s name in the Filter text box, the Function list is filtered to only those options that include the characters you type. For example, typing mater filters the list down to three commands related to materials.

    4. In the Add Shortcut text box, type the keyboard shortcut that you want to assign to the command and click the + button. The shortcut you type moves to the Assigned box. If the shortcut you chose is already assigned to another command, SketchUp asks whether you want to reassign the shortcut to the command you selected in Step 3.

    5. Repeat Steps 3 and 4 until you’ve created all your desired shortcuts. When you’re done, click OK.

Tip: If a shortcut is getting in your way, you can remove it. Simply select the command with the offending shortcut in the Function list box. Then select its shortcut in the Assigned box and click the minus sign button. The shortcut vanishes from the Assigned box — nay, from your copy of SketchUp.

If you ever want to reset all your keyboard shortcuts to the defaults, click the Reset All button on the Shortcuts preference panel. If you want to load your keyboard shortcuts onto another copy of SketchUp, find out how to export and import preferences in Customizing Your Workspace.

Inverting the scroll wheel

If you use SketchUp with a scroll wheel mouse — which makes drawing in SketchUp much easier, by the way — by default, you roll the scroll wheel up to zoom in and roll down to zoom out.

On Microsoft Windows, you can flip this behavior by following these steps:

    1. Select Window > Preferences.
    2. In the sidebar on the left, select Compatibility.
    3. In the Mouse Wheel Style area, select the Invert checkbox.
    4. Click OK and take your inverted scroll wheel for a test drive.

Remapping mouse buttons

Remapping your mouse buttons refers to customizing the way the buttons work. If you’ve used your operating system preferences to flip the right and left mouse buttons because you’re left-handed, your remapped mouse should work fine in SketchUp.

However, if you’ve used a special utility to assign commands to your mouse buttons, you may experience unpredictable behavior or lose functionality in SketchUp.

Warning: Because SketchUp makes extensive use of the mouse buttons in combination with various modifier keys (Ctrl, Alt, Shift), you can easily lose functionality by remapping the mouse buttons.

Choosing mouse-clicking preferences for the Line tool

If you want to customize how the Line tool cursor responds to your clicks, you find a few options on the Drawing preferences panel. Here’s a quick look at how you can customize the Line tool’s behavior:

    • Click-Drag-Release radio button: Select this option if you want the Line tool to draw a line only if you click and hold the mouse button to define the line’s start point, drag to extend the line, and release the mouse to set the line’s end point.
    • Auto Detect radio button: When this option is selected (it’s the default), you can either click-drag-release or click-move-click as necessary.
    • Click-Move-Click radio button: Select this option to force the Line tool to draw by clicking to define the line’s start point, moving the mouse to extend the line, and clicking again to establish the line’s end point.
    • Continue Line Drawing check box: When either Auto Detect or Click-Move-Click is selected, you can choose whether to select or deselect this checkbox. (It’s selected by default.) When the checkbox is selected, the Line tool treats an end point as the start of a new line, saving you the extra click required to set a new start point. If that behavior isn’t your cup of tea, deselect the checkbox. Then go enjoy a cup of tea, knowing that the Line tool now works the way you always wanted.


Embedded Whitelisting Meets Demand for Cost Effective, Low-Maintenance, and Secure Solutions

McAfee® Embedded Control frees Hitachi KE Systems’ customers to focus on production, not security
Hitachi KE Systems, a subsidiary of Hitachi Industrial Equipment Systems, part of the global Hitachi Group, develops and markets network systems, computers, consumer products, and industrial equipment for a wide variety of industries. Hitachi KE meets the needs of customers who seek high quality yet cost-effective, low-maintenance systems for their operational technology (OT) environments—they don’t want to have to think about security at all.

In addition to the custom tablet and touch panel terminals and other hardware and software Hitachi KE sells, the Narashino, Japan-based company also offers a one-stop shop for its solutions—from solution construction (hardware and software development) to operation and integration to maintenance and replacement. To provide the best solutions across this wide spectrum of offerings, the company often turns to partners to augment its technology.

“To expand our Internet of Things [IoT] solutions and operational features and functionality, we enhance our own products and systems with the latest digital and network technologies,” says Takahide Kume, an engineer in the Terminal Group at Hitachi KE. “We strive to provide the technologically optimal as well as most cost-effective solution for our customers.”

Highest Customer Concern: Production

Although the risk of a zero-day attack in their OT environments has increased dramatically as IoT has become commonplace, most of Hitachi KE’s customers do not have information security personnel on staff. For them, the only thing that counts is production. Does the technology solution enable faster, higher-quality, or more cost-effective production?

“Despite many malware-related incidents in the news, many of our customers honestly don’t care as much as they should about cybersecurity,” acknowledges Kume. “We have to educate their management that lack of security, if malware strikes, could seriously hurt production and business in general. Thankfully, making that point is becoming easier and easier with malware incidents on the rise.”

“We decided that embedded whitelisting was the best solution for reduced operating cost and high security in an OT environment… We felt McAfee offered the best long-term support and the highest quality technical support.”
—Takahide Kume, Engineer, Hitachi KE Systems

Best Solution for Minimal Overhead Yet High Security

Even before its customers began to catch on to the need for secure solutions, Hitachi KE began looking for a way to build security into its systems that have Microsoft Windows, Linux, and Google Android operating systems and often multiple versions within the customer’s environment. “Because our customers often lack security personnel, security must be extremely easy and basically run itself,” explains Kume. “When a system is infected in the field, the person on the front line usually can’t do anything about it.”

“We decided that embedded whitelisting was the best solution for reduced operating cost and high security in an OT environment,” adds Kume. After examining leading whitelisting solutions, Hitachi KE chose McAfee® Embedded Control software.

“We felt McAfee offered the best long-term support and the highest quality technical support along with robust security,” he continues. “With McAfee Embedded Control installed, no one has to take care of the system in the field… Industrial systems are often set and left alone for a long time—they can be overtaken by malware without anyone realizing it. For such systems, McAfee Embedded Control is the best solution.”

McAfee Embedded Control maintains the integrity of Hitachi KE systems by only allowing authorized code to run and only authorized changes to be made. It automatically creates a dynamic whitelist of the authorized code on the system on which it resides. Once the whitelist is created and enabled, the system is locked down to the “known good” baseline, thereby blocking execution of any unauthorized applications or zero-day malware attacks.
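
To make the whitelisting concept more concrete, here is a minimal Python sketch of hash-based allowlisting: executables are identified by a cryptographic digest of their contents, and anything outside the “known good” baseline is refused. This is only a conceptual illustration with assumed directory paths, not how McAfee Embedded Control is actually implemented.

    # Conceptual sketch of hash-based application whitelisting (illustration only;
    # not McAfee Embedded Control's actual implementation).
    import hashlib
    from pathlib import Path

    def file_hash(path: Path) -> str:
        """Return the SHA-256 digest of a file's contents."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def build_whitelist(directories):
        """Scan the given directories once and record the 'known good' baseline."""
        whitelist = set()
        for directory in directories:
            for candidate in Path(directory).rglob("*"):
                if candidate.is_file():
                    whitelist.add(file_hash(candidate))
        return whitelist

    def is_execution_allowed(executable: Path, whitelist) -> bool:
        """Allow execution only if the binary's hash is in the baseline."""
        return file_hash(executable) in whitelist

    # Example usage with hypothetical paths:
    # baseline = build_whitelist(["/opt/terminal_app/bin"])
    # if not is_execution_allowed(Path("/tmp/unknown_binary"), baseline):
    #     print("Blocked: not on the whitelist")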

“Almost Maintenance-Free” Solution Reduces TCO

Users of Hitachi KE Systems with McAfee Embedded Control can easily configure the machines, specifying exactly which applications and actions will be allowed to run and who has authority to make modifications in the future. The minimal impact of the McAfee software on performance also means fewer problems to troubleshoot.

“McAfee Embedded Control is an almost maintenance-free solution,” says Kume. “It is extremely easy to update when needed and doesn’t require our customers to have a security expert on staff. Minimal maintenance lowers the total cost of ownership for our customers.”

Even if security hasn’t been their top priority, Hitachi KE customers have been very pleased with the addition of McAfee Embedded Control to their solutions. “Having McAfee security built in gives our customers and end users peace of mind that they can connect our systems to the Internet,” says Kume. “McAfee has had many success stories within the Hitachi Group, and this is just one of them.”

“Having McAfee security built in gives our customers and end users peace of mind that they can connect our systems to the Internet.”
—Takahide Kume, Engineer, Hitachi KE Systems


Creating Japanese Mountain Shrine with 3ds Max

Manuel Fuentes, architect and aspiring games artist, breaks down his process for creating his Japanese Mountain Shrine. Turn up your audio and press play; we hope you enjoy this charming, Zen-like scene as much as we do.

Hi, my name is Manuel and I am an architect and aspiring games environment artist from Mexico. In the beginning I started working with 3ds Max doing mostly architectural visualization. Over the years, as I got more familiar with it, I’ve used it for a variety of tasks, such as rapid prototyping of buildings, rendering realistic architectural scenes, and more recently creating game-ready environments. The scene in this article was created as my entry for the ArtStation Feudal Japan Challenge in the real-time environment category.

All the architectural elements, the rocks, and the small shrubs were modelled in 3ds Max. The detail sculpting of trees and rocks was done in ZBrush, and the texturing with Substance Painter/Designer. Later, the meshes were adjusted in 3ds Max for final optimization and UV adjustments before exporting to UE4 for the final rendering of the scene.

How to build the scene

The initial blockout of the scene was done using boxes with very low subdivisions to easily adjust the proportions and properly balance the scene. After this was completed, 3ds Max’s Modifier Stack let me add more complexity to the models without destroying the original geometry. This allowed me to quickly adjust general proportions as the scene grew more complex by dropping down to the first levels of the Modifier Stack, then returning to the higher levels to continue refining the higher-poly details.

Adding in the elements

The roof and wood details around the scene were created using a basic spline with a Sweep Modifier and then some Edit Poly modifiers to create the desired final shape. Again, this non-destructive approach allowed me to duplicate an element and reuse it somewhere else in the scene: I would simply go to the lower levels of the Modifier Stack, adjust the spline to fit the new building, and then use Edit Poly to modify it and rotate it into place.

I used V-Ray to render some previews of my scene during the workflow and before exporting the elements. All the modular terrain elements were first modelled and dimensioned in 3ds Max to make sure they fit together to shape the mountain and landscape scene. They were modelled using basic boxes with Edit Poly modifiers in 3ds Max, and later the detail sculpt was done in ZBrush.

Character animation

Once the scene was complete, the final step was to do an animation with a ghost dragon flying around the scene. This was a first for me, as I had never animated a character before, but the CAT rig was very easy to understand. After applying a Skin modifier to a model I imported from ZBrush and modifying a basic motion animation using curves, I changed the default walk into something that resembled a flying motion. The model and animation were then ready to export as an FBX and integrate into the scene.


7 Tips to Help Choose an SMS Service Provider

You’ve done your legwork and have now decided to leverage the powerful benefits of using SMS technology to engage with your customers more effectively. The ubiquitous SMS (text) can help companies improve their communications flow, internally as well as with customers. It is one of the most cost-effective broadcast media, with one of the highest open and read rates.

So how does an organization choose the right SMS provider? A simple Google search will give you endless options. With the plethora of options in an increasingly complex market, it is a daunting task to choose the right one. There are simply too many SMS vendors in the market offering a myriad of solutions, and often they all seem to fulfill your project requirements. Apart from pricing, here are the other key factors to take into consideration in making the best choice for your business.

1. Cost: Pricing is a key consideration, especially for SMBs or for companies who need to reach out to thousands of customers regularly. Do confirm with the SMS vendor that the quotation for the SMS service explicitly reflects all fees, such as the setup fee, monthly hosting fee, and per-SMS fee, and that there are no hidden costs.

2. SMS API for ease of integration: Make sure your vendor’s SMS API documentation is comprehensive and uncomplicated. The API should be able to integrate easily with all your company’s existing network applications, including mobile apps, open source software, CRM systems, social messengers, and collaboration tools. TalariaX supports formats such as SMTP email, SNMP traps, Syslog, and HTTP POST across all IT equipment and devices. Furthermore, sendQuick (the flagship mobile messaging product of TalariaX) integrates with any existing applications to send messages via SMS, email, social messengers (WhatsApp Business, Facebook Messenger, LINE, WeChat, Viber, Telegram) and collaboration tools (Microsoft Teams, Slack, Cisco WebEx). A minimal integration sketch follows this list.

3. Reliable Message Delivery: Cheap pricing does not necessarily mean good delivery. A reliable SMS provider should deliver messages quickly and efficiently at competitive rates. They should have direct and strong partnerships with local and global aggregators and telecom network providers to ensure messages are delivered with minimal delay and few bounce-backs.

4. Support: Is there a local account manager attending to your project requirements responsibly and proactively? If so, he or she needs to listen to your project requirements and limitations, then propose the appropriate solutions or methodology to fulfill your requirements and allow room for scalability in the future. Furthermore, he or she needs to be able to walk through the evaluation, purchasing, and post-purchase processes closely with your team. Also, do check if they provide other means of support in addition to email, such as phone, web chat, or 24/7 accessibility, or anything else that is relevant for you.

5. Global reach: The SMS vendor’s network coverage and reach are important factors to consider. With globalisation and the evolution of e-commerce, more businesses are expanding their operations outside of their home country. It is important that the SMS provider has global connectivity and can send SMS texts to different countries across multiple mobile networks. TalariaX SMS gateways have been deployed across multiple industry verticals in over 50 countries across the globe.

6. Scalability and Testing: An important item on the checklist is scalability and testing of the system. Is there a proof-of-concept or trial account during the user acceptance testing (UAT) stage to confirm whether you can send and receive messages from your chosen mobile operators or mobile phone numbers through the SMS vendor? This will ensure minimal hiccups when initiating a campaign.

7. 2-way messaging: If you are looking for interactive responses to your SMS texts, you should ask the SMS gateway provider if they offer 2-way SMS messaging. Many companies are moving towards 2-way messaging, as it allows them to interact with their consumers more closely and can be used for various job functions like job dispatch, appointment reminders, promotional messaging, security alerts, and notifications. sendQuick can send and receive 2-way alerts from IP-addressable infrastructure, third-party applications, and users across the enterprise.
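
To make the API-integration point in tip 2 concrete, here is a minimal Python sketch of submitting a message to an SMS gateway over HTTP POST. The endpoint URL, parameter names, and authentication scheme are hypothetical placeholders rather than sendQuick’s or any vendor’s documented API, so always follow your provider’s actual API reference.

    # Minimal sketch of submitting an SMS through a gateway's HTTP POST interface.
    # The URL and parameter names below are hypothetical placeholders, not a real API.
    import requests

    GATEWAY_URL = "https://sms-gateway.example.com/api/send"   # placeholder endpoint

    def send_sms(api_key: str, recipient: str, message: str) -> bool:
        """Post one message to the gateway and report whether it was accepted."""
        response = requests.post(
            GATEWAY_URL,
            data={"to": recipient, "text": message},
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        return response.status_code == 200

    # Example usage:
    # if send_sms("YOUR_API_KEY", "+6591234567", "Server CPU above 90%"):
    #     print("Message accepted by the gateway")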


Why Artificial Intelligence Will Make Work More Human

What does the rise of artificial intelligence mean for the world of work?

First, it’s clear that it’s a huge opportunity for increased productivity. Gartner believes that this year alone, half a billion users will save two hours a day using artificial intelligence—that’s up to half a million years of improved efficiency!

McKinsey has estimated the percentage of various work tasks and sectors that could now be automated using new technology. After predictable physical work (which can increasingly be done by robots) the biggest opportunities are in mainstream business tasks such as data collection and data processing. McKinsey believes that over 60% of these tasks could now be automated.

Given these efficiencies, it’s only natural that some worry about the effects on employment. The good news is that, so far at least, these technologies have been displacing work rather than replacing workers.

In other words, machine learning excels at replacing the more boring and repetitive aspects of knowledge work—freeing workers to spend time on more rewarding and empowering tasks.

The Payoffs of Machine Learning for Workers Made Simple

An analogy can help illustrate the point. Remember when you were a child, and you had to spend months learning to do long division? After a while, to your relief, you were allowed to use a calculator. Far from slowing your ability to do mathematics, it freed you to move on to more complex and challenging tasks. Machine learning is doing the same for daily work in enterprises worldwide.

For example, machine learning has proved successful at automating repetitive finance tasks such as the automatic matching of invoices and payments, increasing match rates from 70% to 94% in just a few weeks and resulting in massive savings in time and effort.
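
As a rough illustration of what such automated matching can look like, the sketch below trains a tiny classifier to decide whether an invoice and a payment belong together, using only the amount difference and the days between their dates as features. The data and features are made up for demonstration; this is not the production approach described above.

    # Toy sketch of ML-assisted invoice/payment matching (illustrative only).
    # Each candidate pair is described by two features: absolute amount difference
    # and days between invoice date and payment date; the label says if they match.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training pairs: [amount_difference, days_apart]
    X_train = [
        [0.00, 1],    # exact amount, paid next day        -> match
        [0.02, 3],    # rounding difference, a few days    -> match
        [5.00, 2],    # small discount taken               -> match
        [120.0, 30],  # wrong amount, a month apart        -> no match
        [300.0, 60],  # unrelated                          -> no match
        [45.0, 90],   # unrelated                          -> no match
    ]
    y_train = [1, 1, 1, 0, 0, 0]

    model = LogisticRegression()
    model.fit(X_train, y_train)

    # Score a new candidate pair: invoice 1,000.00 vs. payment 999.98, two days later.
    candidate = [[0.02, 2]]
    probability = model.predict_proba(candidate)[0][1]
    print(f"Probability this payment clears this invoice: {probability:.2f}")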

Machine learning also helps augment human intelligence. For example, a salesperson can now receive more intelligent lists of potential prospects: based on historic patterns, algorithms can automatically provide information about which prospects are most likely to buy, what products they are most likely to purchase, how long the deal is likely to take, and so on. The end result is that every salesperson gets closer to the best in the organization.

Indeed, we may be at the dawn of a new golden age for knowledge workers. Just as the invention of tractors multiplied physical labor, allowing a single farmer to plough many fields in a fraction of the time, these new technologies will do the same thing for knowledge workers, allowing them to multiply efforts in ways it is hard for us to currently imagine.

A Net Increase in Jobs—for Everyone?

But what about non-knowledge workers? Since the dawn of time, new technologies have been greeted with skepticism (Socrates famously feared that books were a bad idea because students would no longer have to use their memory). But the result has consistently been richer societies. For example, thanks to mechanization, the share of the US workforce employed in farming has fallen from 83% in 1800 to less than 2% today, yet few of us would like to return to that era.

Clearly, some workers and jobs will be affected. But there’s reason to believe that we shouldn’t be too pessimistic. Generally, we’re very good at thinking about jobs that will be lost to automation, but it’s much harder for us to imagine the new jobs that will be created thanks to the new opportunities.

A study by Gartner shows that machine learning will result in a net increase in jobs from the year 2020. And in fact, it may be earlier: another study shows that in companies using AI today, 26% report job increases, compared to just 16% saying that it has reduced jobs.

There’s a tendency to think that the new jobs created will inevitably go only to the highly skilled; for example, increased use of machine learning has led to increased demand for data scientists.

But history gives a rosier view. New technology also enables people to do jobs that they wouldn’t previously have been qualified for. For example, to work in a general store a century ago, you would have had to be able to do fast mental arithmetic, in order to calculate the amount of the bill. The advent of cash registers meant that stores could hire people for their customer service skills, rather than their mathematics prowess.

Machine learning is making computers easier to use in many different ways. For example, new enterprise digital assistants let us access the information we need to do our jobs faster and more easily than ever before (think of how Jarvis in “Iron Man” helps Tony Stark do his job faster). This will enable workers to do more with less effort and fewer resources than in the past.

This process has happened many times in the past. For example, the spinning jenny was introduced in the UK in 1760. It automated the process of spinning, drawing, and twisting cotton. But lower costs and higher demand for cloth meant that, far from reducing employment, the number of workers exploded from around 8,000 skilled artisans to more than 300,000 less-skilled workers a few decades later.

In the End…

Ultimately, the rise of artificial intelligence will raise the premium on tasks that only humans can do. Because repetitive intellectual tasks can increasingly be automated, skills like leadership, adaptability, creativity, and caring will become relatively more scarce and more important.

Instead of forcing people to spend time and effort on tasks that we find hard but computers find easy (such as mental arithmetic), we will be rewarded for doing what humans do best—and artificial intelligence will help make us all more human.

Posted By Timo Elliott, November 1, 2018

Source: https://blog-sap.com/analytics/2018/11/01/why-artificial-intelligence-will-make-work-more-human/


Why Data Encryption and Tokenization Need to be on Your Company’s Agenda

As children we all enjoyed those puzzles where words had their letters scrambled and we had to figure out the secret to make the words or sentences legible. This simple example of encryption is deployed in vastly more complex forms across many of the services we use every day, working to protect sensitive information. In recent years the financial services industry has added a related layer of protection called tokenization. This concept works by taking your real information and generating a one-time code, or token, that is transmitted across networks instead. The benefit is that if the communication is intercepted, your real details are not compromised.
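
To make the tokenization idea concrete, here is a minimal Python sketch of a token vault: the real card number stays inside the vault, and only a random, meaningless token travels across the network. This is a simplified illustration of the concept, not a production payment system; real schemes often add constraints such as single-use or format-preserving tokens.

    # Minimal sketch of tokenization: a random token stands in for the real value,
    # and only the vault can map the token back. Illustrative, not production code.
    import secrets

    class TokenVault:
        def __init__(self):
            self._vault = {}  # token -> real value, kept only inside the vault

        def tokenize(self, card_number: str) -> str:
            """Issue a random token that carries no information about the card."""
            token = secrets.token_urlsafe(16)
            self._vault[token] = card_number
            return token

        def detokenize(self, token: str) -> str:
            """Resolve a token back to the real value (vault-side only)."""
            return self._vault[token]

    vault = TokenVault()
    token = vault.tokenize("4111 1111 1111 1111")
    print("Sent across the network:", token)             # no card data exposed
    print("Recovered inside the vault:", vault.detokenize(token))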

According to our Breach Level Index, there were 1,765 breaches in 2017. And these breaches are getting faster and larger in scope; over two billion records were lost last year. The fallout for companies is significant, so it is in their interests to do whatever they can to protect their customers’ data.

Of course, encryption is a very complicated field of research, and one shouldn’t expect board level executives to understand how the cryptographic algorithms work. But they must understand just how vitally important it is that data is secure, whether at rest or in motion.

Those working on encryption face a challenge to ensure that access to applications, databases, and files is unimpeded by the need to encrypt and decrypt data. There is a performance issue here, and so companies need to evaluate and test while deciding what data should be encrypted, and when, how, and where.

The worrying thing is that despite the clear need for such work, there is a distinct lack of cyber security professionals worldwide—and especially in encryption. Indeed, you’ll often see job postings for security positions where experience of encryption isn’t even mentioned.

As the statistics show, this is having a huge effect on companies. In 2017, less than 3% of data breaches involved encrypted data. If we accept that companies are going to get hacked it is imperative that any data that is stolen is rendered useless through encryption.

Encryption would have mitigated the damage to brand image and reputation, reduced financial losses, government fines, and falls in stock prices, and limited the damage to executives’ images and reputations. It is also a major disincentive to criminals, as the effort needed to crack the algorithms makes the attack entirely unprofitable while there are so many other available targets.

So if the problem is so clear, and the solution so obvious, why are companies delaying investing in encrypting data?

Well, many executives I speak to daily in Latin America tell me that the security of their Big Data is handled by their cloud service provider. And if there was a leak, it would be the supplier’s responsibility.

This completely overlooks that customers, authorities, investors and the wider public do not care about this distinction. They will all associate any breach with the company, never a supplier of services. So, while ultimately liability may fall at the feet of the cloud service provider, the immediate and potentially catastrophic impact will be felt by the breached company.

It is therefore crucial that companies start taking serious responsibility for the data of their customers. Whether internal staff or cloud provider, conversations need to be had about how data is encrypted. This includes:

• Checking that the cryptographic algorithms used are certified by international bodies.
• Checking that your cryptographic keys are stored in an environment fully segregated from where you store your encrypted information, whether held by third parties or in your own systems, files, or databases. A minimal sketch of this separation follows below.
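
That separation can be illustrated with a short Python sketch of envelope encryption using the cryptography library: the record is encrypted with a data key, the data key is wrapped with a master key, and the master key lives in a separate system (an HSM or cloud KMS in practice). This is a conceptual sketch of key segregation, not a complete key-management design.

    # Sketch of keeping keys segregated from the data they protect (envelope
    # encryption). The master key would live in an HSM or KMS, never beside the data.
    from cryptography.fernet import Fernet

    # 1. Master key: stored in a separate, access-controlled system (assumed here).
    master_key = Fernet.generate_key()
    master = Fernet(master_key)

    # 2. Data key: encrypts the sensitive record, then is itself wrapped (encrypted)
    #    with the master key before being stored next to the ciphertext.
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(b"customer record: Maria Lopez, account 12345")
    wrapped_data_key = master.encrypt(data_key)

    # What the database stores: ciphertext + wrapped key. Neither is useful alone.
    stored = {"ciphertext": ciphertext, "wrapped_key": wrapped_data_key}

    # 3. Decryption requires asking the separate key system to unwrap the data key.
    recovered_key = master.decrypt(stored["wrapped_key"])
    plaintext = Fernet(recovered_key).decrypt(stored["ciphertext"])
    print(plaintext.decode())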

PwC suggests that a cyber-attack is one of the threats CEOs fear most. Given the severity of the threat, we must recognize that we are all responsible for promoting data security. And that means adopting best practices for data protection, deploying encryption, and optimizing management of cryptographic keys.


Creating the Mind Flayer in Stranger Things’ Season 2 Finale

We initially got a bid for just two episodes of “Stranger Things” Season 2. In the end, we worked on every single episode. The shot counts and the amount of work just grew and grew. It was exciting to have them come back to us – it went from being a fairly small project to being really large, especially for our studio at the time.

I was FX Lead on this project. As far as my experience goes, the Shadow Monster is probably the thing I’m most proud of from the show.

DESIGNING THE SHADOW MONSTER (AKA MIND FLAYER)

Before production, there were some stills and references that were drawn up for us. The client had an idea of what they wanted, but as far as the end result, they felt that they’d know it when they saw it.

There was one point when the Method VFX supervisor was on location with the client and he called me up and said, “I have to ask a favor. We’ve got to redo the look of the Shadow Monster. They want it done while I’m still here. Take these suggestions, take these notes, and redo the look – send me something as soon as you can, and we’ll try and get something approved before I leave.”

Within 24 hours, we turned around a brand-new look for the Shadow Monster in the final episode and the client loved it. It was a challenging and scary thing! You often can’t get it right on your first try, but having that ability to do a quick back-and-forth and be more creatively involved was satisfying. It was also scary!

UPPING THE FEAR FACTOR

The creators wanted the Shadow Monster to feel more solid. Kubrick was a huge influence since the primary reference for this was the wall of blood from The Shining. You get your first look at the “original” Shadow Monster in episode 3, where Will confronts him unsuccessfully. Season one of Stranger Things ends with Eleven making the Demogorgon burst into particles and disappear. That served as a springboard for the Shadow Monster.

Initially, we started with a reimagining of the Smoke Monster in Lost. We wanted something that was smoky and not quite there but still had wispy particles. We took some inspiration just from Lost’s season one ending where the Big Bad disperses into a cloud of particles that then vanish. We also liked the way the pseudopod in The Abyss reached forward, and the Symbiote from Spider-Man 3 also served as an inspiration in episode 3, where you have these little arms reaching out and then pulling back in.

Our Shadow Monster started off a lot less concrete than what made the final cut; it initially wasn’t looking scary enough. In the final episode, when the arm was reaching out toward Eleven and Hopper, it needed to feel like a solid and substantial threat, nearly tactile in nature, so you felt the strength of the fight.

We created a string of static particles that weren’t built through a simulation, but rather we built up noise patterns and modeled points into a line, which we then deformed and weaved four lines together to form a cord of particles – an arm. We then took that arm and weaved those cords, spiraling them around each other, and that’s how we achieved that twisting, reaching limb. This made it go from being a mass of smoke to something amorphous; you could see the claws coming out, it had pointy tips, it felt crunchy in the middle, yet still had a wispy, smoky, ethereal quality to it.
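
The build described above is procedural rather than simulated, and the following Python sketch reproduces the basic idea with NumPy: sample points along the arm, offset several copies onto spiralling helices, and add a little noise so the combined strands read as a twisted, wispy cord. It is a stand-in illustration under assumed parameters, not the studio’s actual Houdini or Maya setup.

    # Procedural "cord of particles": four point strands spiralled around a common
    # axis with a touch of noise. Illustrative stand-in for the Houdini setup.
    import numpy as np

    def cord_points(n_points=400, n_strands=4, radius=0.15, twists=6.0, noise=0.02):
        t = np.linspace(0.0, 1.0, n_points)          # parameter along the arm
        strands = []
        for s in range(n_strands):
            phase = 2.0 * np.pi * s / n_strands      # spread strands around the axis
            angle = 2.0 * np.pi * twists * t + phase
            x = t                                    # arm reaches along +X
            y = radius * np.cos(angle)
            z = radius * np.sin(angle)
            pts = np.stack([x, y, z], axis=1)
            pts += np.random.normal(scale=noise, size=pts.shape)  # wispy irregularity
            strands.append(pts)
        return np.concatenate(strands)               # one point cloud, ready to export

    points = cord_points()
    print(points.shape)   # (1600, 3) points forming a twisted, reaching limb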

The tentacle animation was done in Maya and then brought into Houdini where we created a procedural particle system. That was finally brought into Katana and it was rendered with RenderMan.

The key to maintaining that air of threat is that the bulk of the Shadow Monster is still behind the curtain of membrane that it’s reaching through.

TYING IT ALL TOGETHER

For the filming of the scene in the rift chamber at the end, Eleven, played by Millie Bobby Brown, and Hopper, played by David Harbour, were hanging from a cherry picker surrounded by only green screens. Our VFX Supervisor, Seth Hill, was telling us about how the crew would be hanging off the bottom of the cherry picker and shaking it to try and make it dynamic while the actors were trying to be serious and fight the monster.

There were lots of considerations that we needed to take into account as the VFX team. We needed to make the shots work with Eleven’s eye line. From different perspectives, it became a little challenging to match her eye line and make sure that everything felt consistent, all the while maintaining this connection between her, Hopper, and this CG thing that we’re making.

BEING CREATIVE IN VFX

A big takeaway is that this wasn’t a traditional VFX relationship; our studio was allowed more responsibility with creative decisions. The production welcomed ideas and gave us a voice to share creative thoughts.

It’s gratifying to have that creative relationship and have more creative freedom than on a lot of the projects that come through here. For me, that was the most rewarding part of Stranger Things – how great that creative collaboration was between the client and us.


OpenStack—The Next Generation Software-defined Infrastructure for Service Providers

Many service providers face the challenge of competing with the pace of innovation and investments made by hypercloud vendors. You constantly need to enable new services (e.g., containers, platform as a service, IoT, etc.) while remaining cost competitive. The proprietary cloud platforms used in the past are expensive and struggle to keep up with emerging technologies. It’s time to start planning your future with an open source solution that enables a software defined infrastructure for rapid innovation.

A growing number of service providers have selected OpenStack due to its low cost and its rapid pace of innovation. Many new technologies are introduced early in their development in OpenStack prior to making their way to proprietary and hyper-cloud platforms. Well known examples include containers, platform as a service and network function virtualization. Why not leverage the work of a growing community of thousands of open source developers to gain a competitive edge?

For those service providers unfamiliar with OpenStack, SUSE recently published a paper entitled, “Service Providers: Future-Proof Your Cloud Infrastructure,” to highlight some of the architectural choices you will need to make when implementing an OpenStack environment. While the concepts are not new, several decisions will need to be made up-front based on the data center footprint you wish to address.

While OpenStack may seem a bit complex at first, the installation and operations of vendor supplied distributions have greatly improved over the years. Support is available from the vendors themselves as well as from a large community of developers. Most service providers start with a relatively small cloud and build from there. Since OpenStack is widely supported by most hardware and software vendors, you can even repurpose your existing investments. The upfront cost to begin your OpenStack journey is low. When you’re ready to get started, SUSE offers a free 60-day evaluation trial of our solution (www.suse.com/cloud).

Now is the time to map out the future of your software-defined infrastructure. Take advantage of the most rapidly evolving cloud platform with no vendor lock-in. Build your offering on some of the best operations automation available today. OpenStack is the best way to control your own destiny. For more information, please visit our site dedicated to cloud service providers at www.suse.com/csp.


Three Key Best Practices for DevOps Teams to Ensure Compliance

Driving Compliance with Greater Visibility, Monitoring and Audits

Ensuring Compliance in DevOps

DevOps has fundamentally changed the way software developers, QA, and IT operations professionals work. Businesses are increasingly adopting a DevOps approach and culture because of its power to virtually eliminate organizational silos by improving collaboration and communication. The DevOps approach establishes an environment where there is continuous integration and continuous deployment of the latest software with integrated application lifecycle management, leading to more frequent and reliable service delivery. Ultimately, adopting a DevOps model increases agility and enables the business to rapidly respond to changing customer demands and competitive pressures.

While many companies aspire to adopt DevOps, it requires an open and flexible infrastructure. However, many organizations are finding that their IT infrastructure is becoming more complex. Not only are they trying to manage their internal systems, but they are now also trying to get a handle on the use of public cloud infrastructure, creating additional layers of complexity. This complexity potentially limits the agility that organizations are attempting to achieve when adopting DevOps and significantly complicates compliance efforts.

Ensuring compliance with a complex infrastructure is a difficult endeavor. Furthermore, in today’s digital enterprise, IT innovation is a growing priority, yet many IT organizations still spend a great deal of time and money merely maintaining the existing IT infrastructure. To ensure compliance and enable innovation, this trend must shift.

With a future that requires innovation and an immediate need for compliance today, the question remains: How can IT streamline infrastructure management and reduce complexity to better allocate resources and allow more time for innovation while ensuring strict compliance?

Infrastructure management tools play a vital role in priming the IT organization’s infrastructure for innovation and compliance. By automating management, streamlining operations, and improving visibility, these tools help IT reduce infrastructure complexity and ensure compliance across multiple dimensions, ultimately mitigating risk throughout the enterprise.

Adopting a Three-Dimensional Approach to Compliance

For most IT organizations, the need for compliance goes without saying. Internal corporate policies and external regulations like HIPAA and Sarbanes-Oxley require compliance. Businesses in heavily regulated industries like healthcare, financial services, and public service are among those with the greatest need for strong compliance programs.

However, businesses in every industry need to consider compliance, whether that means staying current with OS patch levels to avoid the latest security threats or complying with software licensing agreements to avoid contract breaches. Without compliance, the business puts itself at risk of lost customer trust, financial penalties, and even jail time for those involved.

When examining potential vulnerabilities in IT, there are three dimensions that guide an effective compliance program: security compliance, system standards, and licensing or subscription management.

Security compliance typically involves a dedicated department that performs audits to monitor and detect security vulnerabilities. Whether a threat is noted in the press or identified through network monitoring software, it must be quickly remediated. With new threats cropping up daily, protecting the business and its sensitive data is critical.

For system standards compliance, most IT departments define an optimal standard for how systems should operate (e.g., operating system level, patch level, network settings, etc.). In the normal course of business, systems often move away from this standard due to systems updates, software patches, and other changes. The IT organization must identify which systems no longer meet the defined standards and bring them back into compliance.
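
As a simple illustration of this kind of standards check (not a feature of any particular product), the sketch below compares what each system reports about itself against a defined baseline and lists the settings that have drifted; the baseline values are hypothetical.

    # Minimal sketch of system-standards compliance checking: compare each system's
    # reported state against a defined baseline and report drift. Illustrative only.

    BASELINE = {
        "os_release": "15-SP4",          # hypothetical standard values
        "kernel_patch_level": "150400.24.46",
        "ntp_enabled": True,
        "firewall_enabled": True,
    }

    def compliance_report(hostname, reported_state):
        """Return the settings on this system that no longer match the baseline."""
        drift = {
            key: (expected, reported_state.get(key))
            for key, expected in BASELINE.items()
            if reported_state.get(key) != expected
        }
        status = "compliant" if not drift else "OUT OF COMPLIANCE"
        print(f"{hostname}: {status}")
        for key, (expected, actual) in drift.items():
            print(f"  {key}: expected {expected!r}, found {actual!r}")
        return drift

    # Example: one healthy system, one that has drifted from the standard.
    compliance_report("web01", {"os_release": "15-SP4",
                                "kernel_patch_level": "150400.24.46",
                                "ntp_enabled": True, "firewall_enabled": True})
    compliance_report("db02",  {"os_release": "15-SP3",
                                "kernel_patch_level": "150300.59.87",
                                "ntp_enabled": True, "firewall_enabled": False})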

The third dimension of compliance involves licensing or subscription management which reduces software license compliance concerns and unexpected licensing costs. Compliance in this area involves gaining better visibility into licensing agreements to manage all subscriptions and ensure control across the enterprise.

To mitigate risk across the business in all three dimensions of compliance, the IT organization needs infrastructure management tools that offer greater visibility, automation, and monitoring. According to Gartner’s Neil MacDonald, vice president and distinguished analyst, “Information security teams and infrastructure must adapt to support emerging digital business requirements, and simultaneously deal with the increasingly advanced threat environment. Security and risk leaders need to fully engage with the latest technology trends if they are to define, achieve, and maintain effective security and risk management programs that simultaneously enable digital business opportunities and manage risk.”

Best Practice #1:

Optimize Operations and Infrastructure to Limit Shadow IT

With so many facets to an effective compliance program, the complexity of the IT infrastructure makes compliance a difficult endeavor. One of the most significant implications of a complex infrastructure is the delay and lack of agility from IT in meeting the needs of business users, ultimately driving an increase in risky shadow IT activities.

As business users feel pressure to quickly exceed customer expectations and respond to competitive pressures, they will circumvent the internal IT organization altogether to access services they need. They see that they can quickly provision an instance in the public cloud with the simple swipe of a credit card.

These activities pose a threat to the organization’s security protections, wreak havoc on subscription management, and take system standards compliance out of the purview of IT.

Optimizing IT operations and reducing infrastructure complexity go a long way toward reducing this shadow IT. With an efficient server, VM, and container infrastructure, the IT organization can improve speed and agility in service delivery for its business users. An infrastructure management solution offers the tools IT needs to drive greater infrastructure simplicity. It enables IT to optimize operations with a single tool that automates and manages container images across development, test, and production environments, ensuring streamlined management across all DevOps activities. Automated server provisioning, patching, and configuration enables faster, consistent, and repeatable server deployments. In addition, an infrastructure management solution enables IT to quickly build and deliver container images based on repositories and improve configuration management with parameter-driven updates. Altogether, these activities support a continuous integration/continuous deployment model that is a hallmark of DevOps environments.

When DevOps runs like a well-oiled machine in this way, IT provisions and delivers cloud resources and services to business users with speed and agility, making business users less likely to engage in shadow IT behaviors that pose risks to the business. As a result, compliance in all three dimensions—security, licensing, and system standards—is naturally improved.

Best Practice #2:

Closely Monitor Deployments for Internal Compliance

In addition to optimizing operations, improving compliance requires the ability to easily monitor deployments and ensure internal requirements are met. With a single infrastructure management tool, IT can easily track compliance to ensure the infrastructure complies with defined subscription and system standards.

License tracking capabilities enable IT to simplify, organize, and automate software licenses to maintain long-term compliance and enforce software usage policies that guarantee security. With global monitoring, licensing can be based on actual usage data, which creates opportunities for cost savings.

Monitoring compliance with defined system standards is also important to meeting internal requirements and mitigating risk across the business. By automating infrastructure management and improving monitoring, the IT organization can ensure system compliance through automated patch management and daily notifications of systems that are not compliant with the current patch level.

Easy and efficient monitoring enables oversight into container and cloud VM compliance across DevOps environments. With greater visibility into workloads in hybrid cloud and container infrastructures, IT can ensure compliance with expanded management capabilities and internal system standards. By managing configuration changes with a single tool, the IT organization can increase control and validate compliance across the infrastructure and DevOps environments.

Best Practice #3:

Audit Deployments to Gain Visibility into Vulnerabilities

The fundamental goal of any IT compliance effort is to remedy any security vulnerabilities that pose a risk to the business. Before that can be done, however, IT must audit deployments and gain visibility into those vulnerabilities.

An infrastructure management tool offers graphical visualization of systems and their relationship to each other. This enables quick identification of systems deployed in hybrid cloud and container infrastructures that are out of compliance.

This visibility also offers detailed compliance auditing and reporting with the ability to track all hardware and software changes made to the infrastructure. In this way, IT can gain an additional understanding of infrastructure dependencies and reduce any complexities associated with those dependencies. Ultimately, IT regains control of assets by drilling down into system details to quickly identify and resolve any health or patch issues.


The Future of Data Protection

Enterprises to spend 56% more of their IT budgets on cloud technologies by 2019.
The cloud momentum

As I meet with customers, most of whom are large global enterprises, the topic of the cloud continues to come up. Getting cloud right means new ways to stay competitive and stand out in their respective markets. For example, moving test/dev operations to the cloud has allowed many organizations to reap the benefits of increased productivity, rapid product delivery, and accelerated innovation. Another benefit the cloud provides is on-demand infrastructure, which can be used as a landing zone for business operations in the event of a disaster.

No longer do IT staff have to spend countless hours installing a set of SQL, DB2, or Oracle servers to run in-house databases, CRM, or analytics platforms. Databases are offered as services that are ready for the largest, most intense data warehouse needs, and the ability to add analytics capabilities on top gives organizations more opportunities to gain insights from their data. Additionally, companies have choice: subscribing to multiple services from multiple cloud vendors simultaneously to test products or services in real time, paying only for the resources that are used or consumed, is hugely beneficial.

It’s this increased agility that companies are after, and it’s what allows them to grow faster and better meet the needs of their customers.

Persisting concerns

But of course, there’s still quite a bit of uncertainty when it comes to cloud, which causes concern. Some of the most common concerns I hear about are related to data protection and service interruptions. There’s a fear of accidentally deleting critical data, being held hostage to ransomware, and the risk of application or resource failure. There’s also a general misunderstanding regarding how much of the responsibility for addressing these concerns sits with customers versus cloud providers.

Traditionally, the perception was that because servers and data were ‘tucked away’ safe and sound within the confines of the on-premises data center, those concerns were more easily addressed. But in the cloud, that’s not the case. When the data center moves to the cloud, rows and rows of 42U racks filled with blades and towers transform into on-demand cloud instances that can be spun up or down at will. This creates a sense of ‘losing control’ for many.

Some argue that the risks actually increase when you move to the cloud and no longer own the resources, but we believe those risks can be minimized, without sacrificing the rewards.

The trick here is to keep things simple, especially for IT teams that are responsible for protecting company data, wherever that data is stored. And that’s an important point, because it’s not an either/or conversation. According to RightScale’s 2018 State of the Cloud survey, 51% of enterprises operate with a hybrid strategy and 81% are multi-cloud. This further supports the view that, for most large enterprise customers, cloud exists alongside an existing on-premises data center strategy. Adding more point solutions that create silos is a losing strategy. Equally so are platform-specific technologies that are inflexible and do not account for the persistently heterogeneous, hybrid nature of enterprise IT environments.

Veritas has you covered

In the midst of this cloud evolution, Veritas has taken its years of data management expertise and leadership and developed Veritas CloudPoint, a data protection technology that is cloud-native, lightweight, and flexible, yet robust, with core enterprise-grade data protection capabilities that can be extended to protect workloads in public, private, and hybrid cloud infrastructures. Veritas CloudPoint can easily be introduced to your AWS, Google Cloud, Microsoft Azure, or data center environments. Utilizing the available cloud infrastructure APIs, CloudPoint delivers an automated and unified snapshot-based data protection experience with a simple, intuitive, and modern UI. Figure 1 below shows the basics of how it works.

Figure 1 
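
To give a flavor of the snapshot APIs that cloud-native protection builds on, here is a short Python sketch that uses boto3 to snapshot an AWS EBS volume and tag it for later policy-driven cleanup. The volume ID, region, and tags are placeholders, and the sketch only illustrates the underlying cloud API, not how CloudPoint itself is implemented.

    # Sketch of snapshot-based protection using the cloud provider's own API (AWS
    # EBS via boto3). Illustrative only; not CloudPoint's internal implementation.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # assumes configured credentials

    def protect_volume(volume_id: str, retention_days: int = 7) -> str:
        """Create a crash-consistent snapshot of the volume and tag it for cleanup."""
        snapshot = ec2.create_snapshot(
            VolumeId=volume_id,
            Description=f"scheduled backup of {volume_id}",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [{"Key": "retention-days", "Value": str(retention_days)}],
            }],
        )
        return snapshot["SnapshotId"]

    # Example usage with a placeholder volume ID:
    # snapshot_id = protect_volume("vol-0123456789abcdef0")
    # print("Created snapshot:", snapshot_id)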

But that is just the tip of the iceberg…

With the recent Microsoft and Google press releases announcing version 2.0 of Veritas CloudPoint, we have expanded the reach of CloudPoint to VMware environments as well as support for high-performance, on-premises databases such as MongoDB.

We are already working on our next release of CloudPoint, targeted for availability in the coming quarters, where we plan to add cloud support for VMware Cloud on AWS and IBM. For private cloud environments, we plan to offer VM-level and application-level support for Microsoft’s private cloud platform Azure Stack. We already announced in-guest support for Azure Stack with Veritas NetBackup earlier this year.

And, staying consistent with my comment above regarding point solutions and platform-specific solutions being a losing strategy, we plan to integrate CloudPoint with the next release of Veritas NetBackup (see Figure 2 below). This should be welcome news for NetBackup customers in particular, as they will have an integrated way to address data protection requirements in the most optimized way possible, without adding more silos, no matter where their workloads run. But I’ll save the details and specifics on that for my next blog!

Figure 2 

Be on the lookout for more news in the coming months.

[1] Forward-looking Statement: Any forward-looking indication of plans for products is preliminary and all future release dates are tentative and are subject to change at the sole discretion of Veritas. Any future release of the product or planned modifications to product capability, functionality, or feature are subject to ongoing evaluation by Veritas, may or may not be implemented, should not be considered firm commitments by Veritas, should not be relied upon in making purchasing decisions, and may not be incorporated into any contract. The information is provided without warranty of any kind, express or implied.