Monday, 31 October 2016
Best Help Desk Certifications For 2017
from Tom's IT Pro
via CERTIVIEW
ITIL® Foundation Question of the Week: Service Transition
Which of the following responses contain processes that are all defined in the Service Transition stage of the Service Lifecycle?
A. Service Asset and Configuration Management, Service Catalogue Management, Request Fulfillment
B. Change Management, Request Fulfillment, Knowledge Management
C. Service Asset and Configuration Management, Change Management, Knowledge Management
D. Change Management, Request Fulfillment, Service Catalogue Management
Answer: C.
The processes in the Service Transition stage of the Service Lifecycle are: Transition Planning and Support, Change Management, Service Asset and Configuration Management, Release and Deployment Management and Knowledge Management. (Note: Service Validation and Testing and Change Evaluation are also processes introduced in the Service Transition Stage, but they are not included in the ITIL Foundation syllabus). Service Catalogue Management is part of Service Design and Request Fulfillment is part of Service Operation.
Related Course
ITIL Foundation
Related Certification
ITIL Foundation
ITIL Foundation Question of the Week Series
- ITIL® Foundation Question of the Week: Service Operation Stage Functions
- ITIL® Foundation Question of the Week: Service No Longer Used
- ITIL® Foundation Question of the Week: Types of Services
- ITIL® Foundation Question of the Week: Categorizing Incidents
- ITIL® Foundation Question of the Week: Value of a Service
- ITIL® Foundation Question of the Week: Service Transition
from
CERTIVIEW
Friday, 28 October 2016
Amazon Adds Career Component to AWS Educate
from Tom's IT Pro
via CERTIVIEW
CCNP Collaboration Question of the Week: CLI Command
You are troubleshooting video quality issues on a Cisco TelePresence system. Which CLI command shows the number of lost video packets and jitter during a call in progress?
A. Show call statistics video
B. Show call statistics all
C. Show call statistics detail
D. Show call statistics video detail
E. Show call statistics all video detail
Answer: D.
With the detail keyword, the show call statistics video command shows additional information about packet loss and jitter, as well as other call details.
Related Courses
CIPTV1 – Implementing Cisco IP Telephony and Video Part 1 v1.0
CIPTV2 – Implementing Cisco IP Telephony and Video Part 2 v1.0
CTCOLLAB – Troubleshooting Cisco IP Telephony and Video
CAPPS – Implementing Cisco Collaboration Applications v1.0
Related Certification
CCNP Collaboration
CCNP Collaboration Question of the Week Series
- CCNP Collaboration Question of the Week: DSP Farm Profile Configuration Mode
- CCNP Collaboration Question of the Week: Discard Digits Instruction
- CCNP Collaboration Question of the Week: CLI Command
- CCNP Collaboration Question of the Week: H.323 Endpoints
from
CERTIVIEW
Best Endpoint Protection: Android Antivirus and Data Security
from Tom's IT Pro
via CERTIVIEW
Thursday, 27 October 2016
Apple's New MacBook Pro Could Kill the Mouse, Almost
from Tom's IT Pro
via CERTIVIEW
How Drones Will Reshape Your Company
from Tom's IT Pro
via CERTIVIEW
Wednesday, 26 October 2016
Windows 10 For IT Pros: Tutorials, Tips & Tricks
from Tom's IT Pro
via CERTIVIEW
Microsoft Reveals New Surface Book and Surface Studio AIO
from Tom's IT Pro
via CERTIVIEW
Best Subreddits for IT Pros
from Tom's IT Pro
via CERTIVIEW
Tuesday, 25 October 2016
How the First Email Message was Born
Back in October 1971, programmer Ray Tomlinson had no idea what he was about to start when he essentially sent the first email. That email consisted of something resembling “QWERTYUIOP.” It was a test email and had absolutely no importance to him at the time, so it was not preserved for posterity.
While email had been sent before on networks such as PLATO and AUTODIN, those messages were sent to users on the same computer. Yes, there was a time when not everyone had their own computer. What made Tomlinson’s email unique and revolutionary was that he was able to send it to a user on a different host connected to ARPANET. This is how the first email message was born.
Born in Amsterdam, New York, in 1941, Tomlinson received a Bachelor of Science degree in electrical engineering from Rensselaer Polytechnic Institute, where he participated in RPI’s co-op program with IBM. He went on to receive a Master of Science degree in electrical engineering in the Massachusetts Institute of Technology’s Speech Communication Department. At MIT, he developed an analog-digital hybrid speech synthesizer, which was also the subject of his master’s thesis.
Tomlinson joined the research and development firm BBN Technologies in 1967. There he had a hand in developing the TENEX operating system and the TELNET network protocol. He was also assigned to the development of ARPANET, the US military’s communications network and an early form of the Internet.
One of his duties included adapting a program called SNDMSG for use on TENEX. The SNDMSG program allowed different users on a shared computer to leave messages for each other. Using code from CPYNET, Tomlinson devised a way to send messages to users on other computers, which resulted in the first email. That first email was sent between two Digital Equipment Corporation DEC-10 computers that happened to sit beside each other in his lab.
Amazingly, Tomlinson is also responsible for choosing the “@” sign to designate users from different hosts, thus establishing the convention for the modern email address. At first he didn’t make a big deal about his technological breakthrough and didn’t realize the significance his side project would have decades later.
Tomlinson would later show colleague Jerry Burchfiel his email-messaging system, not thinking it was a big deal. Burchfiel knew better. Tomlinson’s discovery was quickly adopted across the ARPANET, which significantly increased the popularity of email.
Earlier this year, on March 4, 2016, Tomlinson passed away from a heart attack at his home in Lincoln, Massachusetts, at age 74. Today, nearly 45 years after that first message, billions of emails are sent every day, and we have Ray Tomlinson to thank for it.
from
CERTIVIEW
Google Jamboard: What You Need to Know
from Tom's IT Pro
via CERTIVIEW
Best System Administrator Certifications For 2017
from Tom's IT Pro
via CERTIVIEW
Best Rugged iPhone 7 Cases for BYOD Deployment
from Tom's IT Pro
via CERTIVIEW
Monday, 24 October 2016
Starting Points to Jumpstart a T-SQL Career
Technology is ever changing, and very few areas are stable. This makes information technology interesting, but challenging. T-SQL, or Transact-SQL, is one area that is stable. It is also widely used and offers many career options.
T-SQL is the combination of standard SQL and Microsoft’s proprietary add-ons, which include functions, stored procedures and other elements of the language. SQL, or Structured Query Language, is the programming language used to query, create, control and manipulate objects in a relational database. It is also used for administering the database. SQL is both an American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standard.
SQL statements are used to perform tasks such as adding data, making data modifications, creating objects, performing maintenance tasks and retrieving data from a database. Most analysis and business decisions are made as a result of querying and understanding data in the databases.
Databases hold the core information that allows businesses to function. Relational database management systems use SQL; these include platforms such as Oracle, DB2, Sybase, Microsoft SQL Server and Access, among others. Typically, each database platform follows the ANSI/ISO standards and then adds proprietary language features. This is true of Microsoft SQL Server, where the full language is called T-SQL.
The place to begin learning T-SQL depends on the task at hand. Here are some example roles and starting points to jump start your career in each role.
Report Writer or Analyst
- Focus on the Select statement. Report writers and analysts must know how to ask questions (i.e., query the database). The Global Knowledge courses Querying Microsoft SQL Server 2014 (M20461) and Querying Data with Transact-SQL (M20761) are good starting points.
- Learn how to turn your queries into stored procedures. For example, if you have a query “Select last, first from customer”, you can turn it into a stored procedure with “Create procedure usp_GetCustInfo as Select last, first from customer”. Then you only need to call the procedure from the report: “Execute usp_GetCustInfo”. Stored procedures provide performance and security benefits when querying the database. A short sketch of both steps follows this list.
- Spend time learning SQL Server Reporting Services. This tool allows you to produce reports graphically and to utilize the Select statements and stored procedures you create in T-SQL. The Global Knowledge course Implementing Data Models and Reports with Microsoft SQL Server 2014 (M20466) is an excellent course covering Reporting Services as well as Analysis Services.
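One convenient way to try both steps is from PowerShell with the SqlServer module’s Invoke-Sqlcmd cmdlet. This is only a minimal sketch, assuming a reachable instance; the server, database, table and procedure names are illustrative:

    # Assumes the SqlServer module is installed (Install-Module SqlServer)
    $params = @{ ServerInstance = 'SQL01'; Database = 'SalesDb' }   # hypothetical names

    # The ad hoc query a report writer might run
    Invoke-Sqlcmd @params -Query "SELECT last, first FROM customer ORDER BY last;"

    # Wrap the same query in a stored procedure, then call the procedure from the report
    Invoke-Sqlcmd @params -Query "CREATE PROCEDURE usp_GetCustInfo AS SELECT last, first FROM customer;"
    Invoke-Sqlcmd @params -Query "EXECUTE usp_GetCustInfo;"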
Data Entry
- Learn these T-SQL statements: Insert, Update and Delete. These allow data entry and modification (a brief sketch follows this list).
- Focus on the Select statement. Although the primary focus is data entry, querying the data supports data verification.
- Learn the interface that supports data entry. This could be a web page or a Microsoft Windows application. Each interface will have its own unique design.
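A brief sketch of those statements, again run through Invoke-Sqlcmd against hypothetical names:

    # Hypothetical server, database and table; requires the SqlServer module
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'SalesDb' -Query "
        INSERT INTO customer (last, first) VALUES ('Doe', 'Jane');   -- add a row
        UPDATE customer SET first = 'Janet' WHERE last = 'Doe';      -- modify it
        DELETE FROM customer WHERE last = 'Doe';                     -- remove it
        SELECT last, first FROM customer;                            -- verify the result
    "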
Database Designer
- Focus on the Create, Alter and Drop statements. These statements give definition to the database objects. Examples include the creation of tables, stored procedures, views, functions and triggers. The Alter statement supports modification, and Drop removes an object. A short sketch follows this list.
- Take a relational database design course or read a relational database design book. The Global Knowledge courses Developing Microsoft SQL Server 2014 Databases (M20464), Introduction to SQL Databases (M10985) and Developing SQL Databases (M20762) are excellent options. “Database Design for Mere Mortals” by Michael Hernandez is a fabulous first book.
- Learn to map business questions to objects that need to exist. Take this question: Who are the best ten customers in terms of revenue and loyalty? Tables need to exist for customers, orders, time and perhaps customer satisfaction. This single question represents many topics – customers, orders, loyalty (probably over time), and revenue.
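A short sketch of that object lifecycle, with hypothetical object names:

    # Create a table, evolve its design, and (commented out) remove it
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'SalesDb' -Query "
        CREATE TABLE customer (id INT IDENTITY PRIMARY KEY, last NVARCHAR(50), first NVARCHAR(50));
        ALTER TABLE customer ADD loyalty_score INT NULL;   -- modify the object
        -- DROP TABLE customer;                            -- remove the object entirely
    "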
Database Administrator
- Start small. Two ways to do this are to join a team as a junior database administrator (DBA) and learn from more senior people, or to volunteer at a nonprofit or small business to help with their database. Spend time learning from the staff about their business and database platform.
- Focus on T-SQL statements such as Grant, Revoke and Deny. These all deal with security, which is a primary responsibility of a DBA. These statements control access to the objects created by the database designer: whether a person is allowed to see data, modify data, create tables, drop tables or exercise any other privilege. A short sketch follows this list.
- Focus on performance and metadata. Learn dynamic management views, system stored procedures, and system functions that deal with metadata. Administering Microsoft SQL Server 2014 Databases (M20462) is a good starting point for those pursuing a database administration role.
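A short sketch of the security statements, using a hypothetical database principal:

    # Grant, Deny and Revoke control access to objects the database designer created
    Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'SalesDb' -Query "
        GRANT SELECT ON dbo.customer TO ReportingUser;     -- allow reads
        DENY DELETE ON dbo.customer TO ReportingUser;      -- explicitly block deletes
        REVOKE SELECT ON dbo.customer FROM ReportingUser;  -- remove the earlier grant
    "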
Although each role has its specialty, every role needs the ability to query the database or the database metadata (i.e., the objects within the database). Every person needs an understanding of the Select statement. Many roles need an understanding of statements such as Create, Alter, Drop, Insert, Update, Delete, Grant, Revoke and Deny. Regardless of role, the place to begin in T-SQL is with the Select statement.
Related Courses
Querying Microsoft SQL Server 2014 (M20461)
Administering Microsoft SQL Server 2014 Databases (M20462)
Developing Microsoft SQL Server 2014 Databases (M20464)
Implementing Data Models and Reports with Microsoft SQL Server 2014 (M20466)
Querying Data with Transact-SQL (M20761)
Developing SQL Databases (M20762)
from
CERTIVIEW
How to Enable Remote Desktop in Windows Server 2016
from Tom's IT Pro
via CERTIVIEW
Best Online Fax Services 2016
from Tom's IT Pro
via CERTIVIEW
ITIL® Foundation Question of the Week: Value of a Service
What are the three (3) main aspects used by a customer to determine the value of a service?
A. Functionality, Cost, Warranty
B. Outcomes, Functionality, Preferences
C. Preferences, Features, Performance
D. Outcomes, Preferences, Perceptions
Answer: D.
Customers have a preference regarding the service provider and what the service should offer, and they have a perception of what they should receive for the money spent on the service. Once they receive the outcome, as long as the perceived outcome is equal to or greater than what they thought they should receive, they’ve received value.
Related Course
ITIL Foundation
Related Certification
ITIL Foundation
ITIL Foundation Question of the Week Series
- ITIL® Foundation Question of the Week: Service Operation Stage Functions
- ITIL® Foundation Question of the Week: Service No Longer Used
- ITIL® Foundation Question of the Week: Types of Services
- ITIL® Foundation Question of the Week: Categorizing Incidents
- ITIL® Foundation Question of the Week: Value of a Service
from
CERTIVIEW
Friday, 21 October 2016
CCNP Collaboration Question of the Week: H.323 Endpoints
Which parameter should be set to prevent H.323 endpoints from registering to Cisco TelePresence Video Communication Server (VCS) automatically?
A. On the VCS, go to Configuration, Protocols, H.323, and set Auto Discover to off.
B. On the VCS, go to Configuration, Protocols, H.323, and set Auto Registration to off.
C. On the VCS, go to Configuration, Registration, Allow List, and set Auto Discovery to off.
D. On the VCS, go to Configuration, Registration, Configuration, and set Auto Discovery to off.
Answer: A.
You can prevent H.323 endpoints from registering automatically with the VCS by disabling Auto Discovery on the VCS (VCS configuration > Protocols > H.323).
Related Courses
CIPTV1 – Implementing Cisco IP Telephony and Video Part 1 v1.0
CIPTV2 – Implementing Cisco IP Telephony and Video Part 2 v1.0
CTCOLLAB – Troubleshooting Cisco IP Telephony and Video
CAPPS – Implementing Cisco Collaboration Applications v1.0
Related Certification
CCNP Collaboration
from
CERTIVIEW
Amazon's Alexa May be the Next Essential Business Tool
from Tom's IT Pro
via CERTIVIEW
Thursday, 20 October 2016
Best Computer Hardware Certifications For 2017
from Tom's IT Pro
via CERTIVIEW
As Old MCSEs Wane the MVA Recertify Option Remain
from Tom's IT Pro
via CERTIVIEW
The Unending Challenge of Managing “The Database”
If you’re a large organization, somewhere in the bowels of the IT department is “The Database”. It doesn’t matter what kind of database server it runs on, maybe it’s SQL Server, maybe it’s MySQL, but The Database has something big in it – the output of that exhaustive customer survey you did, or the company’s complete sales history back to 1986. But whatever it is, it has gobs of data, and you likely have no idea what’s in it.
The IT department probably even bugs you about it from time to time. “Hey, do you still need this database? It hasn’t been read from in three years, and it’s taking up half a terabyte of storage!” You can’t delete it, of course – that’s irreplaceable company history, or it’s something the company spent a lot of time or effort to assemble.
There could be extremely valuable insights to be gleaned by analyzing the information in The Database. But who’s going to do it? Do you have somebody who knows SQL and can query a database? That’s good, but is he or she also trained sufficiently in statistics to spot meaningful trends and patterns in the data that comes out of the query? (Don’t look at me, stats class in college was some of the most expensive naps I ever took.)
Do you have a programmer on staff? Somebody who can build GUI interfaces? Excellent – that’s a good start. But will he or she be able to take the data coming from the database professional to correctly forecast the next quarter or two based on that existing data? How are their data visualization skills?
Do you have any employees skilled in the fast-growing art of machine learning? These people are gifted in the ability to build programs that get progressively more sophisticated the more they analyze your company’s data. If you do, you’re in the minority today. And, do your machine learning pros really understand relational databases?
You see the problem, perhaps. Increasingly, companies need someone who brings all these skills to the table: delivering an understanding of database design, savvy querying skills, statistical acumen, data visualization artistry, and machine learning wizardry, all in one package. Do you have that consummate data analysis professional on your staff? Maybe not – and that’s a problem that Microsoft wants to fix.
Solution: The Data Science Degree
If no name of a current employee with all these skills pops into your head, you aren’t alone. As early as 2012, Gartner, the IT industry analysis organization, had identified this critical shortage of experts who can mold and shape a company’s raw data into actionable business intelligence. “There is not enough talent in the industry,” said Gartner’s Peter Sondergaard, adding, “Data experts will be a scarce, valuable commodity.”
It’s not hard to see why. The IT industry is currently generating a quantity of new data each year that is measurable in zettabytes of storage. A zettabyte is 1,024 exabytes, an exabyte is 1,024 petabytes, and a petabyte is 1,024 terabytes. Each of those terabytes is a bit more than the storage capacity of 200 DVD discs. So a zettabyte is somewhere around 200 billion DVDs’ worth of data. Whoa.
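The rough arithmetic behind that estimate, as a quick PowerShell sketch (assuming a 4.7GB single-layer DVD):

    # ZB -> EB -> PB -> TB is three steps of 1,024 each
    $tbPerZb   = [math]::Pow(1024, 3)
    $dvdsPerTb = 1024 / 4.7          # a bit more than 200 DVDs per terabyte
    '{0:N0} DVDs per zettabyte' -f ($tbPerZb * $dvdsPerTb)   # roughly 234 billion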
And that’s just the new data being added each year to the zettabytes that have come before. The skills needed to manage that quantity of information are not widely spread. The potential for amazing new businesses and growth potential in existing businesses is in danger of being throttled by a lack of professionals with the complete set of skills to capitalize on new opportunities provided by big data.
What would it take to create a new generation of data analysts? Microsoft doesn’t claim it will be easy. Their curriculum of self-paced learning materials delivered through edX.com could consume 200 hours of effort to complete – or more! On top of all that is the final exam, a real-world project requiring another six to 10 hours to bring all the lessons of the program to bear. When you consider that a bachelor’s program at a university only requires 120 credit hours to complete, it becomes clear why this program is named the Microsoft Professional Degree Program.
What’s in the program?
The curriculum of the program is grouped into three units. The first unit contains four courses focused on the basics of data science: querying data and applying statistical analysis techniques to the results. After an initial orientation course, the meat of Unit 1 starts with a course on mastering Microsoft’s query language, Transact-SQL (T-SQL), for retrieving and modifying data. Students are then free to choose Excel or Power BI as their tool for data visualization and analysis and will complete a course in that chosen tool. The first unit concludes with an extensive section on statistics – a vitally important tool for determining when a blip in the data is the beginning of an important trend, or just a funny quirk of fate.
Unit 2 uses three courses to focus on some of the broader software development skills needed to begin building software solutions to data science problems. Students start with a course introducing them to their choice of Python, a popular general-purpose programming language, or R, a language created explicitly for statistical analysis situations.
This is followed by a course on the skills of data science proper such as learning to explore and visualize patterns in data, dealing with corrupt or incomplete data sets, and transforming that data into other forms that better support analysis. Unit 2 then concludes with a deep dive into the principles of machine learning, as students learn to write software that spots patterns in data and which gets better at understanding the data as time goes by.
Students who have made it this far are now prepared for the rigorous content of Unit 3. Only two courses are needed to complete Unit 3, but students get a number of choices along that path. Students revisit their Unit 2 programming language for a deeper explanation of their choice of R or Python. This time, basic syntax for the language is replaced with thorough explorations of how to use the chosen language to organize data for analysis.
Students then have a second big decision to make: choose one of three available courses to conclude their formal studies. Option one is a course in advanced methods in machine learning. Option two is a course on intelligent app development, building IoT and big data bots on the Azure Machine Learning system. The third option is a deep dive into the Azure HDInsight platform, which provides cloud-based data analysis services on demand.
All this prepares budding data scientists for their final challenge in Unit 4: a 12-hour-long capstone project demonstrating real-world skills as a data science professional. Students are challenged to create a project that utilizes the Cortana Intelligence Platform. This project is then scored by administrators of the Microsoft Professional Degree (MPD) Program to determine if the individual has indeed earned a degree in data science.
You’re probably wondering, how much is this going to set back my budget?
One of the most remarkable aspects of this program is how little is being charged for it. Attending the classes is, as of this writing, entirely free of charge. Students who wish to prove their completion of the various courses can purchase a verified completion certificate costing $49 for each course, but the information itself is being provided completely free.
Microsoft is placing some big bets on big data. From improvements in SQL Server, to programming language development, visualization tools and Azure-based compute clusters, Microsoft is demonstrating that they see huge potential for organizations to do business in bigger and better ways than ever before by leveraging what’s in The Database.
But that can’t happen if efforts to access an organization’s data are hampered by shoddy database design, inadequate data scrubbing capabilities, substandard data transformation techniques, uninspiring data visualization efforts or limitations imposed by the analysis capacity of on-premises hardware.
The market-crushing businesses of the future need full-spectrum analysts who understand a company’s data intimately, and who can efficiently produce innovative solutions that surface latent data insights to company stakeholders. They need people who have had a collegiate level of immersion in every aspect of data analysis. Tomorrow’s market leaders need data scientists, and Microsoft has inexpensively delivered an amazing wealth of knowledge to nurture those data experts in your organization to fill exactly that role.
Related Courses
Analytics and Data Management Training
from
CERTIVIEW
Wednesday, 19 October 2016
Confessions of an IT Pro: Precious Ideas Will Hold You Back
from Tom's IT Pro
via CERTIVIEW
Security+ Question of the Week: Risk of Collision
Which of the following has the highest risk of collision?
A. SHA‐1
B. HMAC
C. MD5
D. SHA‐2
The correct answer is C.
The risk of collision rises as the hash value output length gets shorter. From this list, MD5 has the shortest output, with a 128-bit hash value. SHA-1 produces a 160-bit hash value, and SHA-2 hash value lengths start at 224 bits and increase from there. HMAC is not a hashing algorithm; it is an implementation of hashing. HMAC can use any hashing algorithm, such as MD5 or SHA-1, and adds a symmetric key as a source of randomness in order to produce a more complex hash. It does not produce an encrypted hash. Since HMAC is not necessarily using MD5, and because of the added randomness, collisions are less common than with MD5 on its own.
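You can check those output lengths yourself; this is a sketch using the .NET crypto classes available from PowerShell, with an arbitrary message and key:

    $data = [System.Text.Encoding]::UTF8.GetBytes('example message')

    # Digest length in bits for each hashing algorithm in the question
    foreach ($name in 'MD5', 'SHA1', 'SHA256') {
        $algo = [System.Security.Cryptography.HashAlgorithm]::Create($name)
        '{0,-7} {1} bits' -f $name, ($algo.ComputeHash($data).Length * 8)
    }

    # HMAC is a keyed construction layered on a hash, not a hashing algorithm itself
    $hmac = [System.Security.Cryptography.HMACSHA1]::new([byte[]](1..16))
    'HMAC-SHA1: {0} bits' -f ($hmac.ComputeHash($data).Length * 8)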
Related Courses
Security+ Prep Course (SY0-401)
Security+ Certification Boot Camp (SY0-401)
Security+ Question of the Week (SY0-401) Series
- Security+ Question of the Week: Deploying a Firewall
- Security+ Question of the Week: Flood Guard
- Security+ Question of the Week: iSCSI
- Security+ Question of the Week: Wireless MAC Filtering
- Security+ Question of the Week: Quantitative Analysis
- Security+ Question of the Week: Contracts
- Security+ Question of the Week: System Clock
- Security+ Question of the Week: Security Breach Incident Response
- Security+ Question of the Week: Reduce Electrostatic Discharge
- Security+ Question of the Week: Planting Malware
- Security+ Question of the Week: Network Hardening
- Security+ Question of the Week: Fuzzing
- Security+ Question of the Week: Single Sign‐On
- Security+ Question of the Week: Digital Envelope
- Security+ Question of the Week: Confining Communications to a Subnet
- Security+ Question of the Week: DoS Tool
- Security+ Question of the Week: Intranet Defense
- Security+ Question of the Week: War Driving
- Security+ Question of the Week: User Rights and Permissions Checks
- Security+ Question of the Week: Third Party Partnerships
- Security+ Question of the Week: Indicator of Integrity
- Security+ Question of the Week: Incident Response Procedure
- Security+ Question of the Week: Good Password Behavior
- Security+ Question of the Week: Tailgating
- Security+ Question of the Week: Differential Backup
- Security+ Question of the Week: Government and Military
- Security+ Question of the Week: Backdoor
- Security+ Question of the Week: Wrong Name or Address
- Security+ Question of the Week: Increase in Email Hoaxes
- Security+ Question of the Week: Suspicious Location-Based Messages
- Security+ Question of the Week: Session Hijack
- Security+ Question of the Week: Definition of a Threat
- Security+ Question of the Week: Dismiss Alarms
- Security+ Question of the Week: NoSQL vs. SQL Database
- Security+ Question of the Week: BYOD Compliance
- Security+ Question of the Week: Missing Storage Devices
- Security+ Question of the Week: Data Processed by an Application
- Security+ Question of the Week: LDAP Port
- Security+ Question of the Week: Authentication System
- Security+ Question of the Week: Cryptographic Solution
- Security+ Question of the Week: Risk of Collision
from
CERTIVIEW
10 Best Mac Apps for IT Pros
from Tom's IT Pro
via CERTIVIEW
Tuesday, 18 October 2016
Best IT Jokes. Ever.
from Tom's IT Pro
via CERTIVIEW
Best Enterprise Architect Certifications For 2017
from Tom's IT Pro
via CERTIVIEW
Why Upgrade to Windows Server 2016?
On September 26, 2016, Microsoft announced the general availability of Windows Server 2016. We already have a taste of what the user interface looks like from Windows 10. In actuality, there are a few other features that Windows 10 picked up first, such as PowerShell 5.0 and the latest version of Hyper-V, that are also part of Windows Server 2016. Many other heavy-hitting server-only features will be here soon. Here are five compelling reasons to upgrade to the newest edition of Windows Server:
Nano Server
One of the most interesting new features of Windows Server 2016 is the advent of a new installation option called Nano Server. This is similar to the Windows Server Core option, which began with Windows Server 2008 R1. However, Nano is far smaller than Server Core: it requires less than 600MB of hard disk space and can run in as little as 128MB of RAM.
Server Core needs around 3GB of hard disk space and 512MB of RAM at a minimum. We have to go all the way back to Windows NT 4.0 to find a Windows Server operating system that used less hardware than Nano. Last year at Ignite, I saw a demonstration of Nano Server running on a stack of tiny devices, each about the size of a deck of cards. Nano Server has far fewer components running, which means a smaller attack surface and far less patching. That does mean that Nano will only be able to run a select few workloads, such as Hyper-V and File Services.
What will likely throw many Windows administrators for a loop is the fact that Nano is almost completely headless. It does have an Emergency Management Console from which you can change the machine’s IP address, shut down or restart the server, and configure Windows Firewall settings. Other than that, there is no command prompt or PowerShell prompt to work with as there is on Server Core. In addition, Nano cannot be upgraded to a full edition server, nor can a full edition be downgraded to Nano; Nano is a one-way installation choice. All management of Nano Server will be performed remotely with PowerShell or the graphical RSAT suite.
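In practice, day-to-day administration looks something like this from a management workstation (the computer name is hypothetical, and in a workgroup the Nano host may first need to be added to the workstation’s TrustedHosts list):

    # Remote PowerShell is the primary management channel for Nano Server
    $cred = Get-Credential
    Invoke-Command -ComputerName 'NANO01' -Credential $cred -ScriptBlock {
        Get-NetIPAddress | Select-Object InterfaceAlias, IPAddress   # query settings remotely
    }
    # For an interactive session instead: Enter-PSSession -ComputerName 'NANO01' -Credential $cred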
Containers
The Containers concept has been around for some time in other environments, but is now making its way to Windows. Containers allow services to be run in an isolated manner that makes them far more portable, less resource intensive and easier to deploy in large numbers. Windows Server Containers allow for one or more instances of a service to run on the same server. They can be spun up instantaneously and still provide isolation on the host. Hyper-V Containers run in a minimalistic virtual machine to provide even greater levels of isolation, albeit with more resource consumption. Microsoft has been working with Docker to bring the container management and libraries found in Linux over to Windows.
For those of you familiar with Microsoft’s App-V platform, Containers provide similar capabilities for the backend server environment. Where App-V is typically about providing a wrapper around an end-user application, Containers usually provide that isolation for services running on a server (although, there is certainly crossover between the two).
Storage Spaces Direct
The new Storage Spaces Direct feature is an outgrowth of the Storage Spaces capability introduced with Windows Server 2012. Storage Spaces isolates the underlying physical disks and aggregates them into storage pools on a single server. Storage Spaces Direct allows these storage pools to span multiple servers.
Not only can Storage Spaces Direct span disks on multiple servers, it can also be made redundant across those servers. This means the storage pool can survive the outage of a disk and also the outage of an entire server in the storage pool.
This technology is not likely to knock the SAN off of its perch at the pinnacle of our storage hierarchy, but it could be a very useful option for some environments where the cost of continuously adding storage to the SAN is becoming a management and cost nightmare.
PowerShell enhancements
Windows Server 2016 comes with PowerShell 5.0 which includes a large number of enhanced features. PowerShell 5.0 provides hundreds of new cmdlets addressing many of the new features of the Windows Server 2016 operating system and new capabilities for PowerShell in general.
For starters, the PowerShell prompt has been given a facelift, enhancing it with colorized commands, CTRL key copy and paste, and line-wrap selection of text. Next, PowerShell 5.0 includes package management functionality allowing for the download of PowerShell modules from public or private libraries. This opens up an enormous library of enhancements that can easily be taken advantage of directly from PowerShell.
From a programmability standpoint, classes can now be developed directly in PowerShell, much like in other object-oriented languages. Finally, if you missed the announcement, Microsoft has open sourced PowerShell, with Linux and Mac OS X versions already in alpha.
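Two of those additions, sketched briefly: pulling a module from the public PowerShell Gallery with the new package management cmdlets, and declaring a class directly in PowerShell (the class itself is only an illustration):

    # Package management: download a module from a public library
    Install-Module -Name Pester -Scope CurrentUser

    # Classes: PowerShell 5.0 supports type definitions much like other object-oriented languages
    class ServerInfo {
        [string]$Name
        [int]$MemoryGB
        ServerInfo([string]$name, [int]$memoryGB) {
            $this.Name = $name
            $this.MemoryGB = $memoryGB
        }
        [string] Describe() { return "$($this.Name): $($this.MemoryGB) GB RAM" }
    }
    [ServerInfo]::new('HV01', 64).Describe()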
Hyper-V enhancements
Hyper-V has been updated with many new features and enhanced capabilities in Windows Server 2016. The features are explained in greater detail in this blog, but here is the outline of the new features:
- Storage Replica – A block-level volume replication technology for clusters. This feature will primarily affect Hyper-V, but could be used for other workloads.
- Rolling Cluster Upgrades – Again, this is something that affects clustering in general, but will be most useful in a Hyper-V cluster scenario.
- Hot-add NICs and Memory – Previously, VMs had to be shut down to add more memory or virtual network cards.
- Software Defined Networking – Provides a new Network Controller role, along with new load balancing capabilities.
- PowerShell Direct – Allows PowerShell commands to be sent to VMs through the Hyper-V host.
- Shielded Virtual Machines – Allows for VMs to be protected from intrusion even by the Hyper-V administrator.
- Linux Secure Boot – Linux VMs can now boot using the Secure Boot feature of the UEFI firmware in Hyper-V.
- Nested Virtualization – Allows for Hyper-V hosts to run inside another Hyper-V host.
- Nano Server – Nano Server can be used as a Hyper-V host, providing the smallest footprint yet for a Windows-based Hypervisor.
Related Courses
Windows Server 2016 Training
Microsoft Training
from
CERTIVIEW
Microsoft Enterprise Mobility + Security (EM+S) Review
from Tom's IT Pro
via CERTIVIEW
Monday, 17 October 2016
Microsoft Dynamics 365: Everything You Need to Know
from Tom's IT Pro
via CERTIVIEW
ITIL® Foundation Question of the Week: Categorizing Incidents
What is the reason for categorizing Incidents?
A. To determine where to escalate the issue.
B. To understand the number of each type of incident.
C. For reporting and trending.
D. To determine which function is responsible for fixing the most incidents.
Answer: C.
Categorizing Incidents enables reporting on specific types of incidents so that trending analysis can be completed. This is typically done as one of the proactive activities carried out by Problem Management.
Related Course
ITIL Foundation
Related Certification
ITIL Foundation
ITIL Foundation Question of the Week Series
- ITIL® Foundation Question of the Week: Service Operation Stage Functions
- ITIL® Foundation Question of the Week: Service No Longer Used
- ITIL® Foundation Question of the Week: Types of Services
- ITIL® Foundation Question of the Week: Categorizing Incidents
from
CERTIVIEW
Workplace by Facebook: FAQ
from Tom's IT Pro
via CERTIVIEW
Friday, 14 October 2016
10 Best New Features in Windows Server 2016
from Tom's IT Pro
via CERTIVIEW
CCNP Collaboration Question of the Week: Discard Digits Instruction
Which discard digits instruction removes the access code from a number before sending the number onto an adjacent system?
A. PreDot
B. PreNot
C. NoDigits
D. PostDot
Answer: A.
A discard digits instruction (DDI) removes a portion of the dialed digit string before passing the number on to the adjacent system. PreDot instructs Cisco CallManager to remove the external access code.
Related Courses
CIPTV1 – Implementing Cisco IP Telephony and Video Part 1 v1.0
CIPTV2 – Implementing Cisco IP Telephony and Video Part 2 v1.0
CTCOLLAB – Troubleshooting Cisco IP Telephony and Video
CAPPS – Implementing Cisco Collaboration Applications v1.0
Related Certification
CCNP Collaboration
CCNP Collaboration Question of the Week Series
- CCNP Collaboration Question of the Week: DSP Farm Profile Configuration Mode
- CCNP Collaboration Question of the Week: Discard Digits Instruction
from
CERTIVIEW
Best Remote Access Software and Solutions 2017
from Tom's IT Pro
via CERTIVIEW
Thursday, 13 October 2016
Best Document Management Software and Systems 2016
from Tom's IT Pro
via CERTIVIEW
What’s New in Hyper-V for Windows Server 2016
Microsoft’s Hyper-V product got its start in 2008 with the release of Windows Server 2008. Over the next several generations of the Windows operating system, Hyper-V matured greatly to become a true competitor in the hypervisor marketplace. Hyper-V has been updated with many new features and enhanced capabilities in Windows Server 2016.
Some of these new features can be seen in the client Hyper-V that is included with Windows 10. (Most people don’t know that you can enable Hyper-V on a Windows 10 Pro or Enterprise edition machine.) Many of the more robust Hyper-V server-related features are now available in Windows Server 2016.
Here are some of the more notable features:
General Hyper-V Improvements
Nested virtualization allows for Hyper-V hosts to run inside another Hyper-V host. Previously, the only way to run Hyper-V was directly on hardware that supported hardware assisted virtualization. This new feature will be a great addition for lab and training environments allowing for the use of multiple Hyper-V hosts on a single piece of hardware.
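Enabling it is a per-VM processor setting on the outer host; a minimal sketch with a hypothetical VM name (the VM should be powered off when the setting is changed):

    # Expose the physical CPU's virtualization extensions to the guest,
    # then install the Hyper-V role inside that guest as usual
    Set-VMProcessor -VMName 'LabHost01' -ExposeVirtualizationExtensions $true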
Nano Server can be used as a Hyper-V host, providing the smallest footprint yet for a Windows-based Hypervisor. With Nano Server running as the host, the amount of patching required will be a mere fraction of a full edition server, and significantly less than even Server Core. This will result in far fewer reboots of the Hyper-V host throughout the course of a year.
PowerShell Direct allows PowerShell commands to be sent to virtual machines (VMs) directly through the Hyper-V host. The VM does not need network access or even a virtual NIC. Commands are sent to the Hyper-V host first, then from the host they are passed through to the VM.
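A short sketch, run on the Hyper-V host itself; the VM name and credential are hypothetical:

    # PowerShell Direct: -VMName routes the command over the VMBus, so no network is required
    $guestCred = Get-Credential
    Invoke-Command -VMName 'WebVM01' -Credential $guestCred -ScriptBlock {
        Get-Service | Where-Object Status -eq 'Running' | Select-Object -First 5
    }
    # Or interactively: Enter-PSSession -VMName 'WebVM01' -Credential $guestCred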
Storage Improvements
Storage Replica is a block-level volume replication technology for clusters. This feature will primarily affect Hyper-V, but could be used for other workloads. In previous versions of Windows, a cluster that spanned locations could only be replicated with third-party hardware or software. Now, the ability to replicate data from one location to another is included in Windows Server 2016.
Storage resiliency allows Hyper-V VMs to recover from storage access problems. Interruptions in storage access typically cause a VM to crash with a Stop Screen message. The new Storage Resiliency feature allows Hyper-V to intercept the failed request and place the VM in a critical paused state until the storage interruption has been resolved.
Clustering Improvements
Rolling cluster upgrades affect clustering in general, but will be most useful in a Hyper-V cluster scenario. This can be used to upgrade a cluster from Windows Server 2012 R2 to Windows Server 2016 in an incremental fashion. The cluster can continue to run while the individual members are being upgraded.
Shared vhdx files being used in a guest cluster can now be resized while the VMs using the shared vhdx are online. The shared vhdx files can also be protected against failure by using the Hyper-V Replica feature.
Start order priority can be used to determine the order in which clustered VMs will start. This is useful when VMs depend on other VMs for services that must be present first.
Hardware Improvements
Hot-add NICs and memory allow virtual network interface controllers (NICs) and memory to be added to a VM without downtime. Previously, VMs had to be shut down to add more memory or virtual network cards.
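Both operations use the ordinary Hyper-V cmdlets against a running VM; the VM and switch names below are hypothetical, and the memory change assumes the VM is configured with static memory:

    # Resize memory and add a second virtual NIC while 'WebVM01' stays online
    Set-VMMemory -VMName 'WebVM01' -StartupBytes 8GB
    Add-VMNetworkAdapter -VMName 'WebVM01' -SwitchName 'External' -Name 'Backup NIC'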
Discrete device assignment allows a VM to have direct access to some PCIe hardware. This can result in faster access since it is bypassing the Hyper-V stack and can result in a VM having access to hardware that would otherwise be only available to the host.
Networking Improvements
Software-defined networking provides a new network controller role, for centralized management and monitoring of the virtualized network infrastructure.
Network function virtualization allows services typically provided by hardware appliances to be shifted to software-based virtual appliances. The network function virtualization (NFV) technologies provided in Windows Server 2016 include the Datacenter Firewall, RAS Gateway, Software Load Balancer and NAT.
Switch embedded teaming (SET) allows for multiple network adapters to be configured as a network team providing similar capabilities to the NIC teaming feature in Windows Server 2012 R2. SET, however, is more tightly integrated with Hyper-V and provides better fault-tolerance and performance than traditional NIC teams.
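Creating a SET-enabled virtual switch is a one-liner; the adapter names are hypothetical:

    # Team two physical adapters directly in the Hyper-V virtual switch (no separate LBFO team needed)
    New-VMSwitch -Name 'SETswitch' -NetAdapterName 'NIC1', 'NIC2' -EnableEmbeddedTeaming $true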
Security Improvements
Shielded virtual machines allow VMs to be protected from intrusion, even by the Hyper-V administrator. This is accomplished by using a virtual trusted platform module (TPM) on the VM to encrypt the virtual disks. The VM can also be encrypted as it is transferred from one host to another via live migration, and the VM can be isolated so that its memory is inaccessible while it is either running or at rest.
Host resource protection helps to protect against attacks coming from other VMs in the infrastructure. VMs are monitored for the overconsumption of resources that accompanies such attacks, which helps prevent denial of service attacks from being successful.
Linux secure boot allows Linux VMs to start up using the Secure Boot feature of the UEFI firmware in Generation 2 VMs on Hyper-V. This feature is used to prevent modification to kernel mode code and was previously available only to Windows Server 2012 R2 VMs.
With all of these new features in Windows Server 2016, in addition to the capacity increases provided in Windows Server 2012 (R2), Hyper-V is now on par with other players in the virtualization space and provides a compelling option for virtualizing the modern datacenter.
Related Courses
Windows Server 2016 Training
from
CERTIVIEW
5 New Dropbox for iOS Productivity Features
from Tom's IT Pro
via CERTIVIEW
Wednesday, 12 October 2016
Skills in Demand in the Freelance Marketplace
from Tom's IT Pro
via CERTIVIEW
Security+ Question of the Week: Cryptographic Solution
How is integrity verified using a cryptographic solution?
A. Checking the pre and post hash values
B. Replacing the symmetric key often
C. Keeping the private key on a removable media
D. Use longer keys
The correct answer is A.
Integrity is verified with a cryptographic solution by checking the pre and post hash values against each other to see if they are exactly the same. This comparison is often performed with the binary XOR operator: if the two binary hash values are exactly the same, the result of the XOR is zero; if they differ, the result is a non-zero value. Replacing the symmetric key often is a good idea (it is the basis of ephemeral keying), but it does not relate to integrity. Keeping the private key on removable media is a secure storage mechanism, but it does not relate to integrity. Using longer keys is a good cryptographic practice, since longer keys make brute-force attacks more difficult, but it does not relate to integrity either.
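In practice the check is usually a simple equality comparison of the two digests; here is a sketch using PowerShell’s Get-FileHash with hypothetical file names:

    # Hash the file before it is transmitted or stored, and again afterward
    $before = (Get-FileHash -Path '.\payload.zip' -Algorithm SHA256).Hash
    $after  = (Get-FileHash -Path '.\payload-received.zip' -Algorithm SHA256).Hash

    if ($before -eq $after) { 'Integrity verified: hashes match' }
    else { 'Integrity violated: the file was altered' }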
Related Courses
Security+ Prep Course (SY0-401)
Security+ Certification Boot Camp (SY0-401)
Security+ Question of the Week (SY0-401) Series
- Security+ Question of the Week: Deploying a Firewall
- Security+ Question of the Week: Flood Guard
- Security+ Question of the Week: iSCSI
- Security+ Question of the Week: Wireless MAC Filtering
- Security+ Question of the Week: Quantitative Analysis
- Security+ Question of the Week: Contracts
- Security+ Question of the Week: System Clock
- Security+ Question of the Week: Security Breach Incident Response
- Security+ Question of the Week: Reduce Electrostatic Discharge
- Security+ Question of the Week: Planting Malware
- Security+ Question of the Week: Network Hardening
- Security+ Question of the Week: Fuzzing
- Security+ Question of the Week: Single Sign‐On
- Security+ Question of the Week: Digital Envelope
- Security+ Question of the Week: Confining Communications to a Subnet
- Security+ Question of the Week: DoS Tool
- Security+ Question of the Week: Intranet Defense
- Security+ Question of the Week: War Driving
- Security+ Question of the Week: User Rights and Permissions Checks
- Security+ Question of the Week: Third Party Partnerships
- Security+ Question of the Week: Indicator of Integrity
- Security+ Question of the Week: Incident Response Procedure
- Security+ Question of the Week: Good Password Behavior
- Security+ Question of the Week: Tailgating
- Security+ Question of the Week: Differential Backup
- Security+ Question of the Week: Government and Military
- Security+ Question of the Week: Backdoor
- Security+ Question of the Week: Wrong Name or Address
- Security+ Question of the Week: Increase in Email Hoaxes
- Security+ Question of the Week: Suspicious Location-Based Messages
- Security+ Question of the Week: Session Hijack
- Security+ Question of the Week: Definition of a Threat
- Security+ Question of the Week: Dismiss Alarms
- Security+ Question of the Week: NoSQL vs. SQL Database
- Security+ Question of the Week: BYOD Compliance
- Security+ Question of the Week: Missing Storage Devices
- Security+ Question of the Week: Data Processed by an Application
- Security+ Question of the Week: LDAP Port
- Security+ Question of the Week: Authentication System
- Security+ Question of the Week: Cryptographic Solution
from
CERTIVIEW
6 Best Windows 10 Apps for IT Pros
from Tom's IT Pro
via CERTIVIEW
Tuesday, 11 October 2016
Microsoft Azure vs. Amazon Web Services: Cloud Comparison
from Tom's IT Pro
via CERTIVIEW
Best Data Center Certifications For 2017
from Tom's IT Pro
via CERTIVIEW
Cybersecurity Awareness is Cybersecurity Job One
The onus of cybersecurity extends beyond all previous boundaries, and responsibilities no longer rest solely with uber geeks who engage in cyber warfare from sterile rooms with raised floors, whirring fans and blinking lights.
In fact, the single greatest asset to the cybersecurity workforce is the workforce in general. It is also its greatest liability. Valued and secured corporate assets are touched at different levels by all authorized users on a corporate network.
A cybercriminal could have many motivations for breaching the network, from theft, fraud and blackmail to just plain fun. Additionally, network attacks can range from unauthorized access to the denial and disruption of service. A cybercriminal’s varied methods for disrupting business and compromising assets and information create an evolving environment that leaves your network assets vulnerable. The people responsible for those far-reaching assets are not cybersecurity specialists, but standard authorized users.
Recently, a certain manufacturer unveiled something called USB Kill. Simply put, it is a portable USB device that, when plugged in, rapidly draws power into its capacitors and then unleashes the stored energy in one fatal blow to the device into which it is plugged. The result is instant death to that device.
Obviously, this is not a hacking access issue. However, cybercriminals also seek service and asset denial on a large-scale basis, from Distributed Denial of Service (DDoS) attacks to power grid manipulations. This particular threat illustrates an end-point vulnerability. And in this case, the end point is the general asset holder: the workforce populace, which lacks cyber awareness training at an alarming level in the corporate world.
Recently, a valued business partner told me of an experiment in which they randomly dropped 200 USB drives in high-traffic, public locations in Chicago, Cleveland, San Francisco and Washington, D.C. These thumb drives contained a text file that instructed whoever found them to send an email to a specific address and report where the drive was found. This “message in a bottle” experiment produced alarming results: 20 percent of the dropped devices generated replies to the message on the USB drive. This means that those “found” USB drives were plugged into machines and the text file was opened and read. I cannot give you exact numbers, but it is not difficult to imagine that many of those USB drives were plugged into corporate-owned assets. It is a looming question what would have happened if these had been “USB Kill” sticks or, even worse, contained malware, ransomware or remote code of some sort.
The workforce populace represents our true end-point security layer and, sadly, it is the least well informed. You can attribute that to a traditional education model in which the available cybersecurity training consists of defending network assets through firewalls, network tools and threat management policies, none of which apply to the general workforce.
Cyber awareness-level training is appropriate for all members of the corporate workforce, and reflects an organization’s true devotion to creating an educated and vigilant team. In addition to instilling the security virtues necessary to prevent an otherwise preventable issue, it serves as an excellent standard when mandated and deployed for all corporate network users as a best practice.
Global Knowledge is proud to offer a state-of-the-art cybersecurity product from CompTIA, a top-level security industry leader, called CyberSecure. Appropriate for all corporate citizens, it builds true cyber awareness in a highly interactive, self-paced format that can be completed in about an hour.
Related Training
CompTIA CyberSecure
from
CERTIVIEW
Monday, 10 October 2016
Wandering Through Microsoft Ignite 2016 and Exploring the Future of AI
For the last four years I have attended Microsoft’s big conference for IT professionals. It used to be called TechEd and for the last two years it has been combined with a few other conferences and goes by Ignite. This is the biggest conference that Microsoft sponsors for IT professionals (and according to Microsoft it is the biggest tech conference in the world) so it always offers insights into where Microsoft is going.
For me, the conference kicked off with one clear message throughout the first day, including both keynote addresses and the major breakout sessions: Microsoft is a company oriented towards the cloud and Azure. The official release of Windows Server 2016 was announced in the keynote, but it felt less important than the Azure-related discussion. A partner deal with Adobe to use Azure for some of their cloud capabilities got more attention.
At the morning keynote, Microsoft Executive Vice President for Cloud and Enterprise Scott Guthrie presented a series of topics that pretty much came down to “Azure is awesome!” Don’t get me wrong, Azure is in fact awesome and currently my favorite playground. It is just such a different vibe from just a few years ago when Azure was part of the strategy but there was still a lot of emphasis on the on-premises software and servers bearing the Microsoft name. On-premises products still get some love, but almost always as part of a hybrid solution. Windows 10, which dominated last year’s event, was barely mentioned in keynote addresses, although there were still many technical sessions focused on Windows 10.
At the afternoon keynote address Microsoft CEO Satya Nadella spent an hour talking about Microsoft’s artificial intelligence (AI) capabilities, which are largely hosted in and provided through Azure. I found it interesting that he took pains to make clear that Microsoft AI is not intended to take away jobs (or become the evil overlord that everyone who has seen the Terminator™ movies expects).
Instead, Microsoft AI is currently busy translating all of Wikipedia in under one-tenth of a second, recognizing the mood people are in based on their photo and, most importantly, creating bots to help lazy people like me make good fantasy football roster decisions. Microsoft is really working with the NFL on a fantasy football bot. The bot recommended that keynote guest Deion Sanders, an NFL sportscaster and former player, start Drew Brees at quarterback over Matt Ryan in that evening’s Monday Night Football game. I looked it up later in the week, and it turned out the bot was right. Congratulations, Microsoft AI, you are already better at fantasy football than I am.
Several times during the keynote, Nadella emphasized that Microsoft wants to democratize AI. If they can do this, it will be an epic win for Microsoft. There is amazing AI work going on throughout the IT industry right now, but it’s generally being done by people who are really smart – certainly much smarter than me. If Microsoft can bring functional AI to the point where normal (by that I mean non-genius) professionals can use it directly, that will be game changing. It will be like Visual Basic was for Windows application development and Excel has been for data analysis.
I will say that in many ways this conference feels a bit like Ignite 2015 part two. Much of what I have seen is either an expansion of technologies discussed last year or an evolution of new functionality based on existing technologies. For example, I sat in on a great session on Azure Function apps. These are new and very cool, but they are an extension of Web Apps and WebJobs. This is actually a very good thing. We are seeing a maturing and stable platform in Azure, but powerful new capabilities are still being added.
I have been pretty fortunate that the focus of the conference aligns with things that interest me. I am also fortunate in that I am an instructor and my life is pretty much learning about things that interest me. I sat in a session on the new cognitive services that Microsoft is making available. This was really powerful for me because it was a practical, simple framework for implementing artificial intelligence. I can integrate seriously powerful AI into applications without engaging a data scientist. Essentially Microsoft is productizing the data scientist for me.
Another theme that has been developing over the last few years is Microsoft’s commitment to open source. If you have followed Microsoft for some time, you know that they, as a company, were at one point fairly antagonistic towards open source. They have completely turned around on this. The majority of the sessions that I attended featured something in GitHub. In fact, the word git was used tens of thousands of times at the conference. Well, probably not, but it was used a lot. Even products central to Microsoft such as Visual Studio and .Net have open source components readily available on GitHub.
One open source-related product that is new to Microsoft this year, at least as a demonstrable product, is container technology. This isn’t getting the same amount of coverage as some of the Azure topics but this is something that is clearly huge going forward. Docker, a leader in the application container space, is now integrated in both Azure and Windows Server 2016.
If you are not familiar with Docker or containers, the technology provides a portable, isolated configuration environment for applications. I can define a container for an application and then deploy that container to any server that is running Docker and it will run. There are some weaknesses — such as the inability to co-mingle Windows and Linux workloads on the same server and lack of support for applications with graphical user interfaces — but for a lot of cases this is a really powerful capability for application deployment and management. As an application developer I cannot wait to get back home and start playing with containers (I mean preparing to teach container technology).
If the focus of Ignite is any indication, Microsoft under the leadership of Nadella is truly a new company. I will admit that even a couple of years ago I did not fully get Microsoft’s strategy related to Azure or open source. Now it is clear and undeniable. While Azure feels more stable as a platform than it was a few years ago there are still plenty of new and evolving capabilities that are really interesting. Microsoft is now heavily involved and invested in open source. I would suggest keeping your eyes on Azure, Docker and Azure cognitive APIs. Oh, also if you play fantasy football next year be aware — I’ll have an expert in my ear with every decision that I make courtesy of the fine engineers and scientists at Microsoft.
View our complete Microsoft portfolio.
ABOUT THE AUTHOR
Tracy Wallace has worked in IT for almost 30 years. He began his career repairing mainframe terminals and programming in Fortran. He regrets how old this makes him look. Tracy has been a Microsoft Certified Trainer since 1995 and currently focuses on Azure, Microsoft.Net programming, SQL Server and SharePoint. Somewhere between Fortran and programming instruction he earned a degree in mechanical engineering. When not working Tracy basks in the glory of his wonderful wife and two amazing kids. He is also attempting to build a fleet of autonomous robots to take over the world but is struggling with the electronics design so everyone is safe for now.
from
CERTIVIEW
Upwork vs. Freelancer: Which is Better for Your Business?
from Tom's IT Pro
via CERTIVIEW
ITIL® Foundation Question of the Week: Types of Services
Services are made up of a combination of people, process and technology, and can be further classified as what three (3) types?
A. Customer, supplier, user
B. Core, enhancing, enabling
C. Basic, underpinning, added value
D. Strategic, tactical, operational
Answer: B.
The three (3) types of services are core, enhancing and enabling.
- Core services are the basic functionality of the service that the customer is paying for.
- Enhancing services are the additional functionality that provides differentiation. They are sometimes known as excitement factors.
- Enabling services are those service assets that are required for the delivery of services. Examples of enabling services are the servers, applications, training, databases and network devices.
Related Course
ITIL Foundation
Related Certification
ITIL Foundation
ITIL Foundation Question of the Week Series
- ITIL® Foundation Question of the Week: Service Operation Stage Functions
- ITIL® Foundation Question of the Week: Service No Longer Used
- ITIL® Foundation Question of the Week: Types of Services
from
CERTIVIEW
How to Encrypt Facebook Messenger Chats
from Tom's IT Pro
via CERTIVIEW