Search term: Language:

     Number of selected items: 17 (of max. 50)

03.05.2018 07:43:00 Source: https://www.reddit.com/

(Help) Simple inventory management software

Hi - I am wondering if there is some offline inventory management software: something simple where I can enter the serial numbers and model types of monitors/PCs, record which user/location currently has each item, and maybe export it all as a PDF.

Ideally with a section that I can also manage software license keys and expiry dates.

Would prefer something local on my machine, as I'm not sure how to set up a server-based one like Snipe-IT.

I am currently using Excel spreadsheets.

Thanks!

submitted by /u/ppprob

16.03.2018 10:29:00 Source: http://go.theregister.com/

Patch LOSE-day: Microsoft secures servers of the world. By disconnecting them

Users complain of static IP issues, world of admin pain

Microsoft’s Tuesday patch-fest may have interacted quite badly with Windows Server 2008 R2 running on VMware, leaving servers offline and administrators scrambling to recover IP addresses.…


13.03.2018 09:39:00 Source: http://go.theregister.com/

Tutanota blames Comcast block for March 1 outage

ISP cut off our secure mail service, says dev

The creator and co-founder of Tutanota is blaming Comcast after his encrypted mail service was briefly taken offline earlier this month.…


23.02.2018 11:29:00 Source: https://www.reddit.com/

DFS Manager vs Powershell DFSN commands

I typically handle DFS folder management via PS rather than the snap-in but there seems to be a discrepancy.

To disable/offline a namespace I use:

Get-DfsnFolder -Path "\XXXX*" | Set-DfsnFolder -State Offline

However, in the DFS Management GUI, there's no option to enable/disable this state for the namespace -- there's an option for the folder targets (referral status) but they are not the same.

So, after I execute the command above and look up \XXXX in the GUI, I see the namespace, but there is no option to enable it or view its status.

Am I missing something or is this property limited to PS?
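For what it's worth, the state is at least readable back from PowerShell even if the GUI hides it; a sketch reusing the same placeholder path as above:

```powershell
# Read the State property back for the namespace folders
# (the path is the post's placeholder, not a real UNC path).
Get-DfsnFolder -Path "\XXXX*" | Select-Object Path, State

# And to bring the folders back online from PowerShell:
Get-DfsnFolder -Path "\XXXX*" | Set-DfsnFolder -State Online
```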

submitted by /u/zxsaint

23.02.2018 11:29:00 Source: https://www.reddit.com/

LAPS: Computer offline for half a year, how does LAPS cope with that?

We are looking into implementing LAPS in our domain and a question popped up: suppose a computer got a local administrator password set by LAPS, was then taken offline for half a year or so, and was finally powered on again with no network connection. Would LAPS still be able to tell me the password that was set half a year ago, i.e. whenever LAPS was last able to set the password on that client?
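Since LAPS writes the password into AD before rotating it locally, AD still holds whatever was last written, however long the machine stays offline. A sketch for reading it back, assuming the ActiveDirectory PowerShell module and the standard LAPS schema attributes; 'PC-NAME' is a placeholder:

```powershell
# Read the last password LAPS stored in AD for a (possibly offline) computer.
# ms-Mcs-AdmPwd / ms-Mcs-AdmPwdExpirationTime are the standard LAPS attributes.
Import-Module ActiveDirectory

Get-ADComputer -Identity 'PC-NAME' -Properties ms-Mcs-AdmPwd, ms-Mcs-AdmPwdExpirationTime |
    Select-Object Name,
        @{n='Password'; e={ $_.'ms-Mcs-AdmPwd' }},
        @{n='Expires';  e={ [datetime]::FromFileTime([long]$_.'ms-Mcs-AdmPwdExpirationTime') }}
```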

submitted by /u/niemalsnever

14.12.2017 16:18:00 Source: http://go.theregister.com/

EE Business Broadband digital transformation: Portal offline until July

You guys OK with paper print-outs for the next eight months?

EE Business Broadband customers will have to make do with old-fashioned paper print-outs for the next eight months because the firm's online portal is down.…


14.12.2017 10:16:00 Source: https://www.reddit.com/

Windows 10 | Printer GPO

Hi!

In our company we are currently using Windows 8.1 and Windows 10 client computers. On the Windows 10 machines there seems to be a problem where our printer shows as "Offline", usually only on one or two computers at a time. That is often fine, since not everyone here is printing at once.

The problem is that in recent days I've had to manually do two things in order for the printer to appear online on the client computer.

  1. gpupdate /force

  2. Restart the "Print Spooler" service

After doing those steps it works again, but I have a feeling this is due to our GPO being broken on Windows 10. Does anyone have any suggestions as to what the problem may be?
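As a stopgap while the GPO issue is investigated, the two manual steps above could be scripted; a sketch for an elevated PowerShell session on an affected client:

```powershell
# Stopgap for the manual fix above (run elevated on the affected client).
gpupdate /force                   # 1. re-apply group policy, incl. the printer GPO
Restart-Service -Name Spooler     # 2. restart the Print Spooler service

# Check whether the printer now reports a normal status:
Get-Printer | Select-Object Name, PrinterStatus
```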

Thanks.

submitted by /u/OnlyDrey

14.12.2017 04:14:00 Source: https://www.reddit.com/

Wrangling Linux/Samba permissions

I have a share setup I moved from a Windows system with inheritance to Linux using Samba. A single share contains multiple folders that you can think of as home folders (profiles), each with its own permissions. Multiple AD users can access and work (read/write) in each folder.

The folder looks like this:

drwxrws--t+ 2 user1   domain+server-share-user1      6 Nov 17  2016 user1
drwxrws--t+ 3 user2   domain+server-share-user2     62 Dec 12 13:53 user2
drwxrws--t+ 9 company domain+server-share-company 4096 Dec 13 15:50 company

Each folder has the following ACLs:

getfacl /share/user1
getfacl: Removing leading '/' from absolute path names
# file: var/www/html/share/user1
# owner: user1
# group: domain+server-share-user1
# flags: -st
user::rwx
group::rwx
group:domain+server-share-user1:rwx
mask::rwx
other::--x
default:user::rwx
default:group::rwx
default:group:domain+server-share-user1:rwx
default:mask::rwx
default:other::--x

Our Samba config:

[global]
    workgroup = DOMAIN
    password server = DC1.DOMAIN.LOCAL DC2.DOMAIN.LOCAL
    realm = DOMAIN.LOCAL
    security = ads
    idmap config * : range = 16777216-33554431
    winbind separator = +
    template homedir = /share/%U
    template shell = /bin/bash
    kerberos method = secrets only
    winbind use default domain = true
    winbind offline logon = true
    netbios name = SERVER
    server string = server
    log file = /var/log/samba/log.%m
    max log size = 50
    passdb backend = tdbsam
    load printers = no
    printing = bsd
    printcap name = /dev/null
    disable spoolss = yes
    # ACL support
    vfs objects = acl_xattr
    map acl inherit = yes
    store dos attributes = yes
    hide unreadable = Yes

[share]
    comment = share
    path = /share
    browseable = yes
    read only = no
    inherit acls = yes
    inherit permissions = yes
    #force group = SERVER-share-rx
    admin users = "domain admins"
    valid users = @SERVER-share-rx

However, when a user saves a file via Samba into, say, the user1 folder, it stamps "domain users" on it instead of holding on to the domain+server-share-user1 group:

ls -l /share/user1
drwxrws--x+ 2 user1 domain users 6 Dec 13 17:08 New folder

getfacl: Removing leading '/' from absolute path names
# file: /share/user1/New folder/
# owner: user1
# group: domain\040users
# flags: -s-
user::rwx
user:user1:rwx
group::rwx
group:server-share-user1:rwx
group:domain\040users:rwx
mask::rwx
other::--x
default:user::rwx
default:user:user1:rwx
default:group::rwx
default:group:server-share-user1:rwx
default:group:domain\040users:rwx
default:mask::rwx
default:other::--x

How can I get the new file/folder creation to keep the inheritance that I specified at the parent folder?

That group is important for the xfs_quota we have running on each profile/folder: once the group changes, new files stop counting toward the original group's quota.
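Not from the post, and only a sketch: since a single [share] section can only force one group, one blunt workaround is to expose each profile as its own share with force group. Share and group names below reuse the post's placeholders:

```
# Hypothetical smb.conf fragment: per-profile share pinning the group so new
# files keep counting against the right xfs_quota group. The '+' prefix makes
# the group primary only for users who are already members of it.
[user1]
    path = /share/user1
    read only = no
    inherit acls = yes
    inherit permissions = yes
    force group = +server-share-user1
    valid users = @server-share-user1
```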

submitted by /u/Enxer

11.12.2017 22:16:00 Source: https://www.reddit.com/

PDQ Deploy packs v53.0.0 (2017-12-11)

Background

This is v53.0.0 (previous versions: v52.0.0, v51.0.0, v50.0.0, v49.0.0, v48.0.0, etc.) of our PDQ installers and includes all installers from the previous package, with old versions removed.

All packages:

  1. ...install silently and don't place desktop or quicklaunch shortcuts

  2. ...disable every auto-update, nag popup and stat-collection feature I can find

  3. ...work with the free or paid version of PDQ Deploy but do not require PDQ - each package can run standalone (e.g. from a thumb drive) or be pushed with SCCM/GPO/etc. if desired. PM me if you need assistance setting something like that up


Download

Primary: Download the self-extracting archive from one of the repos:

Mirror HTTPS HTTP Location Host
Official link link US-NY /u/SGC-Hosting
#1 link link FR /u/mxmod

Secondary:

Download the torrent.

Tertiary:

Plug one of these keys into Resilio Sync (formerly called "BT Sync") to pull down that repository:

- BTRSRPF7Y3VWFRBG64VUDGP7WIIVNTR4Q (Installer Packages, ~2.91 GB)
- BMHHALGV7WLNSAPIPYDP5DU3NDNSM5XNC (WSUS Offline updates, ~12.00 GB)

Make sure the settings for your Sync folder look like this (or this if you're on v1.3.x). Specifically you need to enable DHT.

Quaternary: (source code)

The GitHub page contains all scripts and wrapper files used in the pack. Check it out if you want to see the code without downloading the full binary pack, or just steal them for your own use. Note that downloading from GitHub directly won't work - you need either this provided pack, or to go fetch all the binaries manually yourself, in order to just plug them in and start working.


Instructions

  1. Import all .XML files from the \job files directory into PDQ deploy (it should look roughly like this after you've imported them).

  2. Copy all files from the \repository directory to wherever your repository is.

  3. All jobs reference PDQ's $(Repository) variable, so as long as you've set that in preferences you're golden.


Package list

Installers:

(Updates in bold. All installers are 64-bit unless otherwise marked)

  • 7-Zip v16.04

  • 7-Zip v16.04 (x86)

  • Adobe Acrobat Reader DC v15.023.20053

  • Adobe AIR v27.0.0.124

  • Adobe Flash Player v27.0.0.187 (Chrome)

  • Adobe Flash Player v27.0.0.187 (Firefox)

  • Adobe Flash Player v27.0.0.187 (IE / ActiveX)

  • Adobe Reader XI v11.0.23

  • Adobe Shockwave v12.3.1.201

  • Apple iTunes v12.5.1.21

  • CDBurnerXP v4.5.8.6795

  • CutePDF v3.0 (PDF printer) (x86)

  • FileZilla Client v3.29.0

  • Gimp v2.8.22 (x86)

  • Google Chrome Enterprise v63.0.3239.84

  • Google Chrome Enterprise v63.0.3239.84 (x86)

  • Google Earth v7.1.5.1557

  • Java Development Kit 6 Update 45

  • Java Development Kit 6 Update 45 (x86)

  • Java Development Kit 7 Update 80

  • Java Development Kit 7 Update 80 (x86)

  • Java Development Kit 8 Update 144

  • Java Development Kit 8 Update 144 (x86)

  • Java Development Kit 9.0.1

  • Java Runtime 6 update 115

  • Java Runtime 6 update 115 (x86)

  • Java Runtime 7 update 80

  • Java Runtime 7 update 80 (x86)

  • Java Runtime 8 update 144

  • Java Runtime 8 update 144 (x86)

  • Java Runtime 9.0.1

  • KTS KypM Telnet/SSH Server v1.19c (x86)

  • Microsoft .NET Framework v3.5.1 SP1 (x86)

  • Microsoft Silverlight v5.1.50901.0

  • Microsoft Silverlight v5.1.50901.0 (x86)

  • Mozilla Firefox v57.0.2

  • Mozilla Firefox v57.0.2 (x86)

  • Mozilla Firefox ESR v52.5.2

  • Mozilla Firefox ESR v52.5.2 (x86)

  • Mozilla Thunderbird v52.5.0 (x86) (customized; read notes)

  • Notepad++ v7.5.3 (x86)

  • Pale Moon v27.6.2 (x86)

  • Spark v2.8.3 (x86)

  • TightVNC v2.8.8

  • TightVNC v2.8.8 (x86)

  • UltraVNC v1.2.1.2 (x86)

  • VLC media player v2.2.8 (x86)

  • WinSCP v5.11.2 (x86)

Utilities:

  • Clean Up ALL Printers (purge all printers from target)

  • Clean Up Orphaned Printers (remove non-existent printers from the spooler)

  • Empty All Recycle Bins (force all recycle bins to empty on target)

  • Enable Remote Desktop

  • Install PKI Certificates

  • Reboot (force target reboot in 15 seconds)

  • Remove Adobe Flash Player (removes all versions)

  • Remove Java Runtime (removes JRE versions 3-9)

  • Temp File Cleanup

  • USB Device Cleanup. Uninstalls non-present USB hubs, USB storage devices and their storage volumes, disks, CD-ROMs, floppies, and WPD devices, and deletes their registry entries. Devices will re-initialize at next connection


Package Notes

  1. Read the notes in PDQ for each package, they explain what it does. Basically, most packages use a .bat file to accomplish multi-step installs with the free version of PDQ. You can edit the batch files to see what they do; most just delete "All Users" desktop shortcuts and things like that. changelog-v##-updated-<date>.txt has version and release history in addition to random notes where I complain about things like Reader DC and how much of a pain it is to build packages for. But actually though and for real it is a hideous pain to build for. Please someone for the love of G-d...accost Adobe and tell them to fix their a+ garbage customization routine.

  2. Thunderbird:

    • Thunderbird is configured to use a global config file stored on a network share. This allows for settings changes en masse. By default it's set to check for config updates every 120 minutes.
    • You can change the config location, update frequency, OR disable this behavior entirely by editing thunderbird-custom-settings.js.
    • A copy of the config file is in the Thunderbird directory and is called thunderbird-global-settings.js
    • If you don't want any customizations, just edit Thunderbird's .bat file and comment out or delete all the lines mentioning the custom config files.
  3. Microsoft Offline Updates - built using the excellent WSUS Offline tool. Please donate to them if you can, their team does excellent work.


Integrity

In the folder \integrity verification the file checksums.txt is signed with my PGP key (0x07d1490f82a211a2, pubkey included). You can use this to verify package integrity.

If you find a bug or glitch, PM me or post it here. Advice and comments are welcome and appreciated.


Donations (bitcoin):

1Bfxpo1WqTGwRXZKrwYZV2zvJ4ggyj9GE1

Donations (Monero):

46ZUK4VDLLz3zapDw62UaS71ZfFBjH9uwhc8FeyocPhUHHsuxj5zfvpZpZcZFHWpxoXD99MVt6PnR9QfftXDV8s6CFAnPSo

"Do not withhold good from those to whom it is due, when it is in your power to act."

submitted by /u/vocatus

06.12.2017 10:14:00 Source: https://www.reddit.com/

SSSD, AD, and Autofs. Getting autofs to mount NFS drives based on AD nisMaps. Following online guides for CentOS 7, but it's simply not working.

Ok, so I've done the steps listed here

https://ovalousek.wordpress.com/2015/08/03/autofs/

and here

https://care.qumulo.com/hc/en-us/articles/115014470007-Active-Directory-AutoFS-maps-to-AD-bound-Linux-clients-with-SSSD

Everything... seems to work? But I'm at the verification/mounting step, and it's just not working. I don't know what to look at since this is relatively new to me, and I'm fairly certain I'm "doing it" right.

To be clear, this:

automount -m 

Only outputs the default maps. Nothing from AD.

Here's the full output of my sssd.conf file. Obviously assume "domainname" is my actual domainname in the file.

[sssd]
domains = domainname.com
config_file_version = 2
services = nss, pam, autofs

[domain/domainname.com]
ad_domain = domainname.com
krb5_realm = DOMAINNAME.COM
realmd_tags = manages-system joined-with-samba
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = False
fallback_homedir = /home/%u
access_provider = ad

[autofs]
autofs_provider = ldap
ldap_autofs_entry_key = cn
ldap_autofs_entry_object_class = nisObject
ldap_autofs_entry_value = nisMapEntry
ldap_autofs_map_name = nisMapName
ldap_autofs_map_object_class = nisMap
ldap_autofs_search_base = ou=automount, dc=domainname, dc=com

My SSSD version is 1.15.2. Authentication works, I can su into my AD users. So it's working somehow.

RealmD is working, too.

[root@mercer ~]# realm list
domainname.com
  type: kerberos
  realm-name: DOMAINNAME.COM
  domain-name: domainname.com
  configured: kerberos-member
  server-software: active-directory
  client-software: sssd
  required-package: oddjob
  required-package: oddjob-mkhomedir
  required-package: sssd
  required-package: adcli
  required-package: samba-common-tools
  login-formats: %U
  login-policy: allow-realm-logins

[Edit]: Using

tcpdump host domaincontroller 

I can at least confirm there's no sssd/nisMap error, because the workstation I'm on never attempts to communicate with the domain controller where the "automount" OU and nisMap are stored.
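One hedged next step, not from the post: raise the autofs provider's debug level in sssd.conf so the logs under /var/log/sssd/ show whether SSSD ever attempts the LDAP lookup at all.

```
# Hypothetical debugging addition to /etc/sssd/sssd.conf:
# log the autofs provider verbosely, then restart sssd and
# expire its caches (sss_cache -E) before re-running automount -m.
[autofs]
autofs_provider = ldap
debug_level = 9
```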

submitted by /u/Sysa_Dmin

05.12.2017 22:13:00 Source: https://www.reddit.com/

Charter Outages (SE TN, GA, Others?)

As of 1020 est, we lost sites under Charter in Chattanooga, Nashville, Atlanta, Greenville, etc...

 

Luckily we failed over to other ISPs, but our telecommuters are offline. No resolution estimate yet.

EDIT1: 1205est Charter now recognizes outage in NC/SC/TN/N-GA. No MTTR.

EDIT2: 1310est After 3 hours, @Ask_Spectrum posted the following tweet advising that service outage was due to a fiber cut. No MTTR is available and Charter is now refusing phone calls with an automated message stating, "Due to a higher than normal call volume, we are unable to process this call."

EDIT3: 1409est Charter advises through 3rd party sources that outages will increase as traffic is routed around the outage.

EDIT4: 1545est 2nd Shift is taking over for the evening. In light of that, I won't be updating any further. Last check, all our users in that area are still suffering from speeds that are so slow that packets die en route.

submitted by /u/IAintShootinMister

01.12.2017 04:09:00 Source: https://www.reddit.com/

Migrating to a new hyper-v host

Here's the scenario. I have an older DL385 Gen8 that hosts 8 or so VMs. This server is coming up on year 5; it's running out of space and is in need of replacement. I initially started shopping around for AMD servers and they appear to be in short supply. Dell should have some out next year at some point, and HP, well, they have the DL385 Gen8 and that's about it. Plus I do not know how live migration is going to play between the Opteron and the Epyc CPUs.

Downtime is a factor here. We are a 24/7 company and this particular HV server hosts our mission-critical applications. I would like to go Intel, but that means I cannot live migrate, and the powers that be won't allow the amount of downtime needed for offline migrations. A few of these VMs are 350-400 GB each; the rest are in the 100-200 GB range.

I would just like to sanity-check my idea here. My thought was to pick up a new Intel server, set up Hyper-V Replica, and once things are replicating, kick everyone out of everything, run a final replication, and then fail over. I see this as needing maybe 30 minutes of downtime versus the hours it would take to actually migrate. What do you all think?
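The replica-then-failover plan could be sketched with the built-in Hyper-V cmdlets; host and VM names below are placeholders, and certificate/firewall setup is omitted:

```powershell
# Sketch of the proposed cutover, assuming Hyper-V Replica is already
# allowed between the hosts ('App-VM' and 'NEWHOST' are placeholders).
Enable-VMReplication -VMName 'App-VM' -ReplicaServerName 'NEWHOST' `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'App-VM'

# At the maintenance window: shut down, send final changes, fail over.
Stop-VM -Name 'App-VM'
Start-VMFailover -VMName 'App-VM' -Prepare                    # on the old host
Start-VMFailover -VMName 'App-VM' -ComputerName 'NEWHOST'     # on the new host
Start-VM -Name 'App-VM' -ComputerName 'NEWHOST'
```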

submitted by /u/hakzorz

30.11.2017 16:12:00 Source: https://www.reddit.com/

Skype 4 biz 2015

Peers,

I'm on an SfB course right now, but my instructor has been contradictory when asked this question! Could someone please advise?

If I deploy a Standard Edition back-end server, with an edge box in a DMZ, is this all that is required for federation and remote client connectivity? He said yes, but then in another module began saying I need a web application proxy too, on a separate box. However, I suspect this is for offline/online apps (on premise)?

Please may someone steer me in the right direction?

submitted by /u/bobaboo42

29.11.2017 04:08:00 Source: https://www.reddit.com/

Really? I.T. Rant

Over Thanksgiving, sometime Sunday night, our most mission-critical file server went offline. We're too small a company for 24-hour IT: 350-ish people, 2 IT staff.

5:50 am, a ticket comes in: "cannot access mapped drive." 6 am, my alarm goes off; I notice the ticket, check it out, and realize the server is offline. iDRAC isn't responding either, but the network is up, the site is up, and the other servers are up. Turns out it was a redundant power supply circuitry failure. I work remotely; the other IT guy works at the site where this server failed.

Coincidentally, this server is scheduled to be replaced mid-December anyway, and its replacement has been set up and kept in sync with it, so timing is in my favor. I begin group policy updates of mapped drive locations, verify share permissions, gpupdate /force, and replication to all remote sites. At 6:50 I've verified all group policies at all sites are good and my kid is ready to be dropped off at school. I missed my shower, but whatever. I send an email to all users: if your mapped drive doesn't work, reboot; if that doesn't work, here are all three ways to contact me.

Had about a dozen shortcuts to UNC paths to update for a handful of people; the transition was seamless.

I got us running before the majority of our users got to the office and communicated that a reboot would fix things for people if it wasn't automatically fixed already, which it was.

Our most mission critical resource was there for employees at the start of business after an unplanned hardware failure. Way to go team?

Got lectured by a member of the executive team that I handled the situation poorly, because many people had left unsaved files open over the holiday and rebooting was a major inconvenience. WTF?

submitted by /u/LakeSuperiorIsMyPond

26.11.2017 10:05:00 Source: https://www.reddit.com/

AD DNS and Dynamic Registration

I tend to get into debates in regard to AD DNS, dynamic registration, and best practices...

So we know DNS is critical to AD - you need to resolve stuff to auth/bind.

As a netadmin I’d prefer not to point clients directly at individual AD DNS servers - I prefer using anycast secondaries based on BIND that pull zone copies. (Secondaries for _msdcs and the other AD-integrated zones... it takes minutes to build a BIND server - and there are no license requirements...)

Then, corporation-wide, I can say: just use, for example, 10.10.10.10/11 as your DNS servers and you’ll route to the closest server, with fault tolerance and load balancing built into the base IP routing. (UDP/ECMP)

Then I am not relying on a single AD DC as the primary resolver for clients - or hoping the local field personnel can keep track of the local AD DNS servers for the site (and don’t send their queries to some random site a hemisphere away...)

It places the configuration burden on me... and in my experience everything works - even dynamic registration - as that is keyed off the SOA, which is still an AD DNS server.

Unfortunately I often get the server tech with the MS AD manual in hand who says DNS has to be pointed at an MS DNS server...

But if that primary server is ever offline, it impacts end users, as they have to wait for a DNS timeout before they hit the secondary...

I’ve attempted to engineer around that - should I just give up and let the AD folks do their “per rubric” implementation...? I hate the “Internet is slow” complaints when the primary AD DNS is down... especially when I can engineer a simple (in terms of my time) fault-tolerant workaround...

I guess what I am saying is: what other solutions have worked well? I tend to get a lot of push-back against this design...

(I will say that a failure has never been linked to my implementation - I just get push-back when it is understood that I am using BIND.)
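A minimal named.conf sketch of the secondary setup described above; the zone name and DC addresses are placeholders, and the DCs must allow zone transfers to the BIND boxes:

```
// Hypothetical named.conf fragment: anycast BIND secondary pulling the
// AD-integrated zones from the domain controllers (IPs are placeholders).
zone "example.com" {
    type slave;
    masters { 192.0.2.10; 192.0.2.11; };   // AD DCs holding the primary zones
    file "slaves/example.com.db";
};

zone "_msdcs.example.com" {
    type slave;
    masters { 192.0.2.10; 192.0.2.11; };
    file "slaves/_msdcs.example.com.db";
};
```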

submitted by /u/davis-sean

23.11.2017 16:05:00 Source: https://www.reddit.com/

Redirected folders + Client-Side Caching (Offline files) - best way to move to new file server?

Moving some users on to a new file server.

I copied their redirected folders (Desktop + Documents) to the new server in advance of the move, then updated Folder Redirection for them in the applicable Group Policy to point at the new server, leaving "Move the contents to the new location" as "Disabled".

When the users log on, client-side caching starts to cache the folders from the new server, but also retains the cached folders from the old server. This fills the local cache (fairly small SSDs).

Is there any way to let CSC know the old server is gone, and to stop caching files from it, other than re-initializing the Offline Files cache altogether as outlined in MS KB942974?

I was wondering whether, if I set "Move the contents to the new location" to "Enabled" - although it will cause a slow initial logon - it might make CSC realise it doesn't need to hold on to the old server's files any more.
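For reference, the re-initialization in KB942974 comes down to a single registry value plus a reboot; a PowerShell sketch (destructive: the whole CSC cache is purged, not just the old server's entries):

```powershell
# Re-initialize the Offline Files (CSC) cache, per MS KB942974:
# FormatDatabase=1 makes CSC flush and rebuild its cache at the next boot.
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\CSC\Parameters' `
    -Name 'FormatDatabase' -PropertyType DWord -Value 1 -Force

Restart-Computer   # the cache is purged during the reboot
```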

submitted by /u/techqueue

22.11.2017 16:06:00 Source: https://www.reddit.com/

Using Work folders instead of CSC (offline files)

If you haven't used Work Folders by now, you really need to look at them. Folder Redirection and CSC haven't been touched in years; the last update to them was with Server 2008. Windows 10 works better with Work Folders than it does with CSC, and your files are stored locally, encrypted, and protected with WIP. That means faster access times and fewer conflicts. You also don't need to worry about things like a cached share refusing to show the most recent version of a file even though it says online, and then having to reset the offline files cache.

I've been working with Work Folders for about 4 months now, and everyone is much happier than they were using CSC, especially our remote users.

So if you are looking for something to set up during this holiday season while you are bored, check this out and set it up. It takes 30 minutes at most. It is very painless, and you just need to change a few GPOs.

submitted by /u/HSChronic