Feb 9 09:54:07.086826 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 09:54:07.086845 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:54:07.086852 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 9 09:54:07.086859 kernel: printk: bootconsole [pl11] enabled
Feb 9 09:54:07.086864 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:54:07.086870 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 9 09:54:07.086876 kernel: random: crng init done
Feb 9 09:54:07.086881 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:54:07.086887 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 9 09:54:07.086892 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:07.086898 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:07.086904 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 09:54:07.086909 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:07.086915 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:07.086921 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:07.086927 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:07.086933 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:07.086940 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:07.086946 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 9 09:54:07.086951 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:07.086957 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 9 09:54:07.086963 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:54:07.086969 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 09:54:07.086974 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff]
Feb 9 09:54:07.086980 kernel: Zone ranges:
Feb 9 09:54:07.086986 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 9 09:54:07.086992 kernel: DMA32 empty
Feb 9 09:54:07.086998 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 09:54:07.087005 kernel: Movable zone start for each node
Feb 9 09:54:07.087010 kernel: Early memory node ranges
Feb 9 09:54:07.087016 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 9 09:54:07.087022 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 9 09:54:07.087028 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 9 09:54:07.087033 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 9 09:54:07.087039 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 9 09:54:07.088406 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 9 09:54:07.088426 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 9 09:54:07.088433 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 9 09:54:07.088439 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 09:54:07.088450 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 09:54:07.088459 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 9 09:54:07.088465 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:54:07.088471 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 09:54:07.088477 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:54:07.088484 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 9 09:54:07.088490 kernel: psci: SMC Calling Convention v1.4
Feb 9 09:54:07.088496 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 9 09:54:07.088502 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 9 09:54:07.088509 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:54:07.088515 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:54:07.088522 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 09:54:07.088528 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:54:07.088534 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:54:07.088540 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 09:54:07.088546 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:54:07.088552 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:54:07.088560 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:54:07.088566 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 09:54:07.088572 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 9 09:54:07.088578 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 9 09:54:07.088584 kernel: Policy zone: Normal
Feb 9 09:54:07.088592 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:54:07.088599 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:54:07.088605 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:54:07.088611 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:54:07.088617 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:54:07.088625 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 9 09:54:07.088632 kernel: Memory: 3991932K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202228K reserved, 0K cma-reserved)
Feb 9 09:54:07.088638 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 09:54:07.088644 kernel: trace event string verifier disabled
Feb 9 09:54:07.088650 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:54:07.088657 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:54:07.088663 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 09:54:07.088669 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:54:07.088675 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:54:07.088682 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:54:07.088688 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 09:54:07.088695 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:54:07.088701 kernel: GICv3: 960 SPIs implemented
Feb 9 09:54:07.088707 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:54:07.088713 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:54:07.088719 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:54:07.088725 kernel: GICv3: 16 PPIs implemented
Feb 9 09:54:07.088731 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 9 09:54:07.088737 kernel: ITS: No ITS available, not enabling LPIs
Feb 9 09:54:07.088743 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:54:07.088750 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 09:54:07.088756 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 09:54:07.088762 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 09:54:07.088770 kernel: Console: colour dummy device 80x25
Feb 9 09:54:07.088777 kernel: printk: console [tty1] enabled
Feb 9 09:54:07.088783 kernel: ACPI: Core revision 20210730
Feb 9 09:54:07.088789 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 09:54:07.088796 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:54:07.088802 kernel: LSM: Security Framework initializing
Feb 9 09:54:07.088808 kernel: SELinux: Initializing.
Feb 9 09:54:07.088815 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:54:07.088821 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:54:07.088829 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 9 09:54:07.088835 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 9 09:54:07.088842 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:54:07.088848 kernel: Remapping and enabling EFI services.
Feb 9 09:54:07.088854 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:54:07.088860 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:54:07.088867 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 9 09:54:07.088874 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:54:07.088880 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 09:54:07.088888 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 09:54:07.088895 kernel: SMP: Total of 2 processors activated.
Feb 9 09:54:07.088901 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:54:07.088908 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 9 09:54:07.088914 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 09:54:07.088920 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:54:07.088927 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 09:54:07.088933 kernel: CPU features: detected: LSE atomic instructions
Feb 9 09:54:07.088939 kernel: CPU features: detected: Privileged Access Never
Feb 9 09:54:07.088947 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:54:07.088954 kernel: alternatives: patching kernel code
Feb 9 09:54:07.088964 kernel: devtmpfs: initialized
Feb 9 09:54:07.088972 kernel: KASLR enabled
Feb 9 09:54:07.088979 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:54:07.088986 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 09:54:07.088993 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:54:07.088999 kernel: SMBIOS 3.1.0 present.
Feb 9 09:54:07.089006 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 09:54:07.089013 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:54:07.089021 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:54:07.089027 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:54:07.089034 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:54:07.089041 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:54:07.089059 kernel: audit: type=2000 audit(0.097:1): state=initialized audit_enabled=0 res=1
Feb 9 09:54:07.089066 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:54:07.089072 kernel: cpuidle: using governor menu
Feb 9 09:54:07.089081 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:54:07.089087 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:54:07.089094 kernel: ACPI: bus type PCI registered
Feb 9 09:54:07.089101 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:54:07.089110 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:54:07.089117 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:54:07.089123 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:54:07.089130 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:54:07.089137 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:54:07.089144 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:54:07.089151 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:54:07.089157 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:54:07.089164 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:54:07.089171 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:54:07.089178 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:54:07.089184 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:54:07.089191 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:54:07.089199 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:54:07.089207 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:54:07.089214 kernel: ACPI: Interpreter enabled
Feb 9 09:54:07.089220 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:54:07.089227 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 09:54:07.089233 kernel: printk: console [ttyAMA0] enabled
Feb 9 09:54:07.089240 kernel: printk: bootconsole [pl11] disabled
Feb 9 09:54:07.089246 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 9 09:54:07.089253 kernel: iommu: Default domain type: Translated
Feb 9 09:54:07.089260 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:54:07.089268 kernel: vgaarb: loaded
Feb 9 09:54:07.089274 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:54:07.089281 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:54:07.089288 kernel: PTP clock support registered
Feb 9 09:54:07.089295 kernel: Registered efivars operations
Feb 9 09:54:07.089302 kernel: No ACPI PMU IRQ for CPU0
Feb 9 09:54:07.089308 kernel: No ACPI PMU IRQ for CPU1
Feb 9 09:54:07.089315 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:54:07.089321 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:54:07.089329 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:54:07.089336 kernel: pnp: PnP ACPI init
Feb 9 09:54:07.089342 kernel: pnp: PnP ACPI: found 0 devices
Feb 9 09:54:07.089349 kernel: NET: Registered PF_INET protocol family
Feb 9 09:54:07.089355 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:54:07.089362 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:54:07.089369 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:54:07.089376 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:54:07.089382 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:54:07.089391 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:54:07.089398 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:54:07.089405 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:54:07.089411 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:54:07.089418 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:54:07.089425 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 9 09:54:07.089431 kernel: kvm [1]: HYP mode not available
Feb 9 09:54:07.089438 kernel: Initialise system trusted keyrings
Feb 9 09:54:07.089444 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:54:07.089453 kernel: Key type asymmetric registered
Feb 9 09:54:07.089459 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:54:07.089466 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:54:07.089473 kernel: io scheduler mq-deadline registered
Feb 9 09:54:07.089479 kernel: io scheduler kyber registered
Feb 9 09:54:07.089486 kernel: io scheduler bfq registered
Feb 9 09:54:07.089493 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:54:07.089499 kernel: thunder_xcv, ver 1.0
Feb 9 09:54:07.089506 kernel: thunder_bgx, ver 1.0
Feb 9 09:54:07.089514 kernel: nicpf, ver 1.0
Feb 9 09:54:07.089521 kernel: nicvf, ver 1.0
Feb 9 09:54:07.094139 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:54:07.094222 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:54:06 UTC (1707472446)
Feb 9 09:54:07.094233 kernel: efifb: probing for efifb
Feb 9 09:54:07.094240 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 09:54:07.094248 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 09:54:07.094255 kernel: efifb: scrolling: redraw
Feb 9 09:54:07.094267 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 09:54:07.094274 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 09:54:07.094281 kernel: fb0: EFI VGA frame buffer device
Feb 9 09:54:07.094288 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 9 09:54:07.094295 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:54:07.094301 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:54:07.094308 kernel: Segment Routing with IPv6
Feb 9 09:54:07.094315 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:54:07.094321 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:54:07.094329 kernel: Key type dns_resolver registered
Feb 9 09:54:07.094336 kernel: registered taskstats version 1
Feb 9 09:54:07.094343 kernel: Loading compiled-in X.509 certificates
Feb 9 09:54:07.094349 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:54:07.094356 kernel: Key type .fscrypt registered
Feb 9 09:54:07.094363 kernel: Key type fscrypt-provisioning registered
Feb 9 09:54:07.094369 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:54:07.094376 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:54:07.094383 kernel: ima: No architecture policies found
Feb 9 09:54:07.094391 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:54:07.094398 kernel: Run /init as init process
Feb 9 09:54:07.094404 kernel: with arguments:
Feb 9 09:54:07.094411 kernel: /init
Feb 9 09:54:07.094417 kernel: with environment:
Feb 9 09:54:07.094424 kernel: HOME=/
Feb 9 09:54:07.094430 kernel: TERM=linux
Feb 9 09:54:07.094437 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:54:07.094445 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:54:07.094456 systemd[1]: Detected virtualization microsoft.
Feb 9 09:54:07.094464 systemd[1]: Detected architecture arm64.
Feb 9 09:54:07.094471 systemd[1]: Running in initrd.
Feb 9 09:54:07.094479 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:54:07.094486 systemd[1]: Hostname set to .
Feb 9 09:54:07.094494 systemd[1]: Initializing machine ID from random generator.
Feb 9 09:54:07.094501 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:54:07.094510 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:54:07.094517 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:54:07.094525 systemd[1]: Reached target paths.target.
Feb 9 09:54:07.094532 systemd[1]: Reached target slices.target.
Feb 9 09:54:07.094540 systemd[1]: Reached target swap.target.
Feb 9 09:54:07.094547 systemd[1]: Reached target timers.target.
Feb 9 09:54:07.094555 systemd[1]: Listening on iscsid.socket.
Feb 9 09:54:07.094563 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:54:07.094571 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:54:07.094579 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:54:07.094587 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:54:07.094594 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:54:07.094602 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:54:07.094610 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:54:07.094617 systemd[1]: Reached target sockets.target.
Feb 9 09:54:07.094625 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:54:07.094633 systemd[1]: Finished network-cleanup.service.
Feb 9 09:54:07.094642 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:54:07.094649 systemd[1]: Starting systemd-journald.service...
Feb 9 09:54:07.094657 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:54:07.094665 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:54:07.094672 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:54:07.094685 systemd-journald[275]: Journal started
Feb 9 09:54:07.094732 systemd-journald[275]: Runtime Journal (/run/log/journal/ed74721fcc4d4d05b416523b3cefd1b5) is 8.0M, max 78.6M, 70.6M free.
Feb 9 09:54:07.085269 systemd-modules-load[276]: Inserted module 'overlay'
Feb 9 09:54:07.117521 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:54:07.132343 systemd-resolved[277]: Positive Trust Anchors:
Feb 9 09:54:07.137314 kernel: Bridge firewalling registered
Feb 9 09:54:07.137335 systemd[1]: Started systemd-journald.service.
Feb 9 09:54:07.132525 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:54:07.178206 kernel: audit: type=1130 audit(1707472447.150:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.132555 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:54:07.262280 kernel: SCSI subsystem initialized
Feb 9 09:54:07.262302 kernel: audit: type=1130 audit(1707472447.183:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.262312 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:54:07.262321 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:54:07.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.144263 systemd-resolved[277]: Defaulting to hostname 'linux'.
Feb 9 09:54:07.303781 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:54:07.303802 kernel: audit: type=1130 audit(1707472447.274:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.145003 systemd-modules-load[276]: Inserted module 'br_netfilter'
Feb 9 09:54:07.150394 systemd[1]: Started systemd-resolved.service.
Feb 9 09:54:07.334717 kernel: audit: type=1130 audit(1707472447.309:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.218120 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:54:07.275244 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:54:07.373817 kernel: audit: type=1130 audit(1707472447.344:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.303957 systemd-modules-load[276]: Inserted module 'dm_multipath'
Feb 9 09:54:07.309372 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:54:07.415746 kernel: audit: type=1130 audit(1707472447.378:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.344317 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:54:07.378312 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:54:07.405170 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:54:07.421694 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:54:07.428859 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:54:07.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.455983 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:54:07.488987 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:54:07.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.498477 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:54:07.547864 kernel: audit: type=1130 audit(1707472447.464:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.547887 kernel: audit: type=1130 audit(1707472447.498:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.547896 kernel: audit: type=1130 audit(1707472447.526:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.548121 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:54:07.564097 dracut-cmdline[298]: dracut-dracut-053
Feb 9 09:54:07.568406 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:54:07.627073 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:54:07.639087 kernel: iscsi: registered transport (tcp)
Feb 9 09:54:07.658955 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:54:07.659010 kernel: QLogic iSCSI HBA Driver
Feb 9 09:54:07.694320 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:54:07.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:07.701118 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:54:07.756067 kernel: raid6: neonx8 gen() 13812 MB/s
Feb 9 09:54:07.777063 kernel: raid6: neonx8 xor() 10837 MB/s
Feb 9 09:54:07.798062 kernel: raid6: neonx4 gen() 13564 MB/s
Feb 9 09:54:07.820060 kernel: raid6: neonx4 xor() 11098 MB/s
Feb 9 09:54:07.841061 kernel: raid6: neonx2 gen() 13107 MB/s
Feb 9 09:54:07.862059 kernel: raid6: neonx2 xor() 10236 MB/s
Feb 9 09:54:07.886063 kernel: raid6: neonx1 gen() 10521 MB/s
Feb 9 09:54:07.908062 kernel: raid6: neonx1 xor() 8793 MB/s
Feb 9 09:54:07.937061 kernel: raid6: int64x8 gen() 6297 MB/s
Feb 9 09:54:07.959058 kernel: raid6: int64x8 xor() 3542 MB/s
Feb 9 09:54:07.983060 kernel: raid6: int64x4 gen() 7231 MB/s
Feb 9 09:54:08.006060 kernel: raid6: int64x4 xor() 3851 MB/s
Feb 9 09:54:08.028057 kernel: raid6: int64x2 gen() 6155 MB/s
Feb 9 09:54:08.049057 kernel: raid6: int64x2 xor() 3320 MB/s
Feb 9 09:54:08.072058 kernel: raid6: int64x1 gen() 5033 MB/s
Feb 9 09:54:08.097524 kernel: raid6: int64x1 xor() 2643 MB/s
Feb 9 09:54:08.097534 kernel: raid6: using algorithm neonx8 gen() 13812 MB/s
Feb 9 09:54:08.097542 kernel: raid6: .... xor() 10837 MB/s, rmw enabled
Feb 9 09:54:08.102638 kernel: raid6: using neon recovery algorithm
Feb 9 09:54:08.130262 kernel: xor: measuring software checksum speed
Feb 9 09:54:08.130285 kernel: 8regs : 17308 MB/sec
Feb 9 09:54:08.135394 kernel: 32regs : 20749 MB/sec
Feb 9 09:54:08.140131 kernel: arm64_neon : 27911 MB/sec
Feb 9 09:54:08.140140 kernel: xor: using function: arm64_neon (27911 MB/sec)
Feb 9 09:54:08.204069 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:54:08.214463 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:54:08.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:08.224000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:54:08.224000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:54:08.225113 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:54:08.246235 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Feb 9 09:54:08.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:08.253344 systemd[1]: Started systemd-udevd.service.
Feb 9 09:54:08.266026 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:54:08.282149 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Feb 9 09:54:08.318256 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:54:08.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:08.325153 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:54:08.363170 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:54:08.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:08.425074 kernel: hv_vmbus: Vmbus version:5.3
Feb 9 09:54:08.451069 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 09:54:08.451128 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 09:54:08.451138 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 09:54:08.466381 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 09:54:08.467063 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 09:54:08.486246 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 09:54:08.502684 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 09:54:08.502740 kernel: scsi host1: storvsc_host_t
Feb 9 09:54:08.515477 kernel: scsi host0: storvsc_host_t
Feb 9 09:54:08.515688 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 09:54:08.525071 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 09:54:08.555059 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 09:54:08.555284 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 09:54:08.562825 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 09:54:08.563017 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 09:54:08.563130 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 09:54:08.573385 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 09:54:08.582214 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 09:54:08.582369 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 09:54:08.589068 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:08.597074 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 09:54:08.618098 kernel: hv_netvsc 000d3ac5-1fcf-000d-3ac5-1fcf000d3ac5 eth0: VF slot 1 added
Feb 9 09:54:08.627080 kernel: hv_vmbus: registering driver hv_pci
Feb 9 09:54:08.637074 kernel: hv_pci 4830d849-394a-4943-9be9-640b4a9b6711: PCI VMBus probing: Using version 0x10004
Feb 9 09:54:08.655041 kernel: hv_pci 4830d849-394a-4943-9be9-640b4a9b6711: PCI host bridge to bus 394a:00
Feb 9 09:54:08.655229 kernel: pci_bus 394a:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 9 09:54:08.662349 kernel: pci_bus 394a:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 09:54:08.672477 kernel: pci 394a:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 9 09:54:08.686018 kernel: pci 394a:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 09:54:08.708276 kernel: pci 394a:00:02.0: enabling Extended Tags
Feb 9 09:54:08.738257 kernel: pci 394a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 394a:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 9 09:54:08.738448 kernel: pci_bus 394a:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 09:54:08.746978 kernel: pci 394a:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 09:54:08.790072 kernel: mlx5_core 394a:00:02.0: firmware version: 16.30.1284
Feb 9 09:54:08.947075 kernel: mlx5_core 394a:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Feb 9 09:54:09.008889 kernel: hv_netvsc 000d3ac5-1fcf-000d-3ac5-1fcf000d3ac5 eth0: VF registering: eth1
Feb 9 09:54:09.009102 kernel: mlx5_core 394a:00:02.0 eth1: joined to eth0
Feb 9 09:54:09.026797 kernel: mlx5_core 394a:00:02.0 enP14666s1: renamed from eth1
Feb 9 09:54:09.046144 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:54:09.071064 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (545)
Feb 9 09:54:09.084099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:54:09.329361 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:54:09.348256 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:54:09.355159 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:54:09.372501 systemd[1]: Starting disk-uuid.service...
Feb 9 09:54:09.402081 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:09.413074 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:10.424069 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:10.424248 disk-uuid[604]: The operation has completed successfully.
Feb 9 09:54:10.482501 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:54:10.483173 systemd[1]: Finished disk-uuid.service.
Feb 9 09:54:10.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:10.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:10.499744 systemd[1]: Starting verity-setup.service...
Feb 9 09:54:10.543067 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:54:10.750234 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:54:10.757442 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 09:54:10.770678 systemd[1]: Finished verity-setup.service.
Feb 9 09:54:10.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:10.838010 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:54:10.847804 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:54:10.843465 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:54:10.844238 systemd[1]: Starting ignition-setup.service...
Feb 9 09:54:10.853926 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:54:10.902013 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:54:10.902058 kernel: BTRFS info (device sda6): using free space tree
Feb 9 09:54:10.908233 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 09:54:10.960088 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:54:10.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:10.971000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:54:10.972771 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:54:10.980543 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:54:11.006191 systemd-networkd[847]: lo: Link UP
Feb 9 09:54:11.006202 systemd-networkd[847]: lo: Gained carrier
Feb 9 09:54:11.059877 kernel: kauditd_printk_skb: 12 callbacks suppressed
Feb 9 09:54:11.059902 kernel: audit: type=1130 audit(1707472451.015:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.006610 systemd-networkd[847]: Enumeration completed
Feb 9 09:54:11.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.006705 systemd[1]: Started systemd-networkd.service.
Feb 9 09:54:11.090136 kernel: audit: type=1130 audit(1707472451.064:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.016002 systemd[1]: Reached target network.target.
Feb 9 09:54:11.027319 systemd[1]: Starting iscsiuio.service...
Feb 9 09:54:11.047610 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:54:11.107500 iscsid[854]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:54:11.107500 iscsid[854]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 09:54:11.107500 iscsid[854]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:54:11.107500 iscsid[854]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:54:11.107500 iscsid[854]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:54:11.107500 iscsid[854]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:54:11.268311 kernel: audit: type=1130 audit(1707472451.124:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.268346 kernel: audit: type=1130 audit(1707472451.200:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.268357 kernel: mlx5_core 394a:00:02.0 enP14666s1: Link up
Feb 9 09:54:11.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.055490 systemd[1]: Started iscsiuio.service.
Feb 9 09:54:11.085516 systemd[1]: Starting iscsid.service...
Feb 9 09:54:11.111662 systemd[1]: Started iscsid.service.
Feb 9 09:54:11.304068 kernel: hv_netvsc 000d3ac5-1fcf-000d-3ac5-1fcf000d3ac5 eth0: Data path switched to VF: enP14666s1
Feb 9 09:54:11.125657 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:54:11.343252 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:54:11.343288 kernel: audit: type=1130 audit(1707472451.315:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.188011 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:54:11.200789 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:54:11.378495 kernel: audit: type=1130 audit(1707472451.353:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.242827 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:54:11.258575 systemd[1]: Reached target remote-fs.target.
Feb 9 09:54:11.275258 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:54:11.299680 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:54:11.342509 systemd-networkd[847]: enP14666s1: Link UP
Feb 9 09:54:11.342582 systemd-networkd[847]: eth0: Link UP
Feb 9 09:54:11.342714 systemd-networkd[847]: eth0: Gained carrier
Feb 9 09:54:11.347632 systemd[1]: Finished ignition-setup.service.
Feb 9 09:54:11.354284 systemd-networkd[847]: enP14666s1: Gained carrier
Feb 9 09:54:11.358917 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:54:11.398143 systemd-networkd[847]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 09:54:12.445152 systemd-networkd[847]: eth0: Gained IPv6LL
Feb 9 09:54:14.679582 ignition[869]: Ignition 2.14.0
Feb 9 09:54:14.680601 ignition[869]: Stage: fetch-offline
Feb 9 09:54:14.680685 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:14.680712 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:14.789602 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:14.789751 ignition[869]: parsed url from cmdline: ""
Feb 9 09:54:14.789754 ignition[869]: no config URL provided
Feb 9 09:54:14.789760 ignition[869]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:54:14.849401 kernel: audit: type=1130 audit(1707472454.818:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.807973 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:54:14.789767 ignition[869]: no config at "/usr/lib/ignition/user.ign"
Feb 9 09:54:14.819991 systemd[1]: Starting ignition-fetch.service...
Feb 9 09:54:14.789773 ignition[869]: failed to fetch config: resource requires networking
Feb 9 09:54:14.789992 ignition[869]: Ignition finished successfully
Feb 9 09:54:14.853195 ignition[876]: Ignition 2.14.0
Feb 9 09:54:14.853202 ignition[876]: Stage: fetch
Feb 9 09:54:14.853314 ignition[876]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:14.853333 ignition[876]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:14.862928 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:14.863085 ignition[876]: parsed url from cmdline: ""
Feb 9 09:54:14.863089 ignition[876]: no config URL provided
Feb 9 09:54:14.863094 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:54:14.863103 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Feb 9 09:54:14.863141 ignition[876]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 09:54:14.955169 ignition[876]: GET result: OK
Feb 9 09:54:14.955277 ignition[876]: config has been read from IMDS userdata
Feb 9 09:54:14.955345 ignition[876]: parsing config with SHA512: 6be24754afe4649780b70352a75dcfd771f2903a99f51e7796f7c85dccbca9ed701199e455aabf5e93de1b59f2664b1c8978c880eace66ba1fc424004307d324
Feb 9 09:54:14.994086 unknown[876]: fetched base config from "system"
Feb 9 09:54:14.998105 unknown[876]: fetched base config from "system"
Feb 9 09:54:14.998843 ignition[876]: fetch: fetch complete
Feb 9 09:54:15.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.998113 unknown[876]: fetched user config from "azure"
Feb 9 09:54:15.049866 kernel: audit: type=1130 audit(1707472455.010:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.998848 ignition[876]: fetch: fetch passed
Feb 9 09:54:15.003736 systemd[1]: Finished ignition-fetch.service.
Feb 9 09:54:14.998906 ignition[876]: Ignition finished successfully
Feb 9 09:54:15.039437 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:54:15.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.051481 ignition[882]: Ignition 2.14.0
Feb 9 09:54:15.098967 kernel: audit: type=1130 audit(1707472455.069:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.063594 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:54:15.051488 ignition[882]: Stage: kargs
Feb 9 09:54:15.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.070516 systemd[1]: Starting ignition-disks.service...
Feb 9 09:54:15.051603 ignition[882]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:15.150961 kernel: audit: type=1130 audit(1707472455.112:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.103327 systemd[1]: Finished ignition-disks.service.
Feb 9 09:54:15.051625 ignition[882]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:15.134014 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:54:15.056674 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:15.144307 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:54:15.058265 ignition[882]: kargs: kargs passed
Feb 9 09:54:15.156129 systemd[1]: Reached target local-fs.target.
Feb 9 09:54:15.058316 ignition[882]: Ignition finished successfully
Feb 9 09:54:15.165435 systemd[1]: Reached target sysinit.target.
Feb 9 09:54:15.080894 ignition[888]: Ignition 2.14.0
Feb 9 09:54:15.177250 systemd[1]: Reached target basic.target.
Feb 9 09:54:15.080901 ignition[888]: Stage: disks
Feb 9 09:54:15.193007 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:54:15.081015 ignition[888]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:15.081038 ignition[888]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:15.089877 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:15.094443 ignition[888]: disks: disks passed
Feb 9 09:54:15.094507 ignition[888]: Ignition finished successfully
Feb 9 09:54:15.281900 systemd-fsck[896]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks
Feb 9 09:54:15.289990 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:54:15.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.298092 systemd[1]: Mounting sysroot.mount...
Feb 9 09:54:15.325068 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:54:15.325793 systemd[1]: Mounted sysroot.mount.
Feb 9 09:54:15.331171 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:54:15.386947 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:54:15.391837 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 09:54:15.404791 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:54:15.404833 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:54:15.421418 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:54:15.476362 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:54:15.481835 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:54:15.509078 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (907)
Feb 9 09:54:15.516423 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:54:15.530989 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:54:15.531014 kernel: BTRFS info (device sda6): using free space tree
Feb 9 09:54:15.536110 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 09:54:15.542393 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:54:15.570923 initrd-setup-root[938]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:54:15.581080 initrd-setup-root[946]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:54:15.590961 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:54:16.075439 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:54:16.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:16.091619 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:54:16.091662 kernel: audit: type=1130 audit(1707472456.080:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:16.089827 systemd[1]: Starting ignition-mount.service...
Feb 9 09:54:16.114639 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:54:16.126458 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:54:16.132885 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:54:16.149491 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:54:16.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:16.177307 kernel: audit: type=1130 audit(1707472456.154:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:16.185409 ignition[976]: INFO : Ignition 2.14.0
Feb 9 09:54:16.185409 ignition[976]: INFO : Stage: mount
Feb 9 09:54:16.200394 ignition[976]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:16.200394 ignition[976]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:16.200394 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:16.200394 ignition[976]: INFO : mount: mount passed
Feb 9 09:54:16.200394 ignition[976]: INFO : Ignition finished successfully
Feb 9 09:54:16.283262 kernel: audit: type=1130 audit(1707472456.210:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:16.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:16.194847 systemd[1]: Finished ignition-mount.service.
Feb 9 09:54:16.771082 coreos-metadata[906]: Feb 09 09:54:16.770 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 09:54:16.780958 coreos-metadata[906]: Feb 09 09:54:16.773 INFO Fetch successful
Feb 9 09:54:16.810348 coreos-metadata[906]: Feb 09 09:54:16.810 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 9 09:54:16.826573 coreos-metadata[906]: Feb 09 09:54:16.826 INFO Fetch successful
Feb 9 09:54:16.843232 coreos-metadata[906]: Feb 09 09:54:16.843 INFO wrote hostname ci-3510.3.2-a-37d4719b0b to /sysroot/etc/hostname
Feb 9 09:54:16.853163 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 09:54:16.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:16.859838 systemd[1]: Starting ignition-files.service...
Feb 9 09:54:16.892276 kernel: audit: type=1130 audit(1707472456.858:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:16.890780 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:54:16.920439 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (985)
Feb 9 09:54:16.920485 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:54:16.926159 kernel: BTRFS info (device sda6): using free space tree
Feb 9 09:54:16.931369 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 09:54:16.935984 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:54:16.951874 ignition[1004]: INFO : Ignition 2.14.0 Feb 9 09:54:16.958406 ignition[1004]: INFO : Stage: files Feb 9 09:54:16.958406 ignition[1004]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:16.958406 ignition[1004]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:54:16.986271 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:54:16.986271 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping Feb 9 09:54:16.986271 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 09:54:16.986271 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 09:54:17.072916 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 09:54:17.082314 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 09:54:17.090064 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 09:54:17.082344 unknown[1004]: wrote ssh authorized keys file for user: core Feb 9 09:54:17.103970 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:54:17.103970 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 09:54:17.553438 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 09:54:17.774447 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:54:17.788792 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:54:17.788792 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 09:54:18.193099 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 09:54:18.335199 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 09:54:18.354638 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:54:18.354638 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 09:54:18.354638 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 09:54:18.354638 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:54:18.354638 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 09:54:18.741596 ignition[1004]: 
INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 09:54:18.953776 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 09:54:18.970936 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:54:18.970936 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 09:54:18.970936 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 09:54:19.429293 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 09:54:19.477602 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 09:54:19.488929 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:54:19.488929 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:54:19.649099 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 09:54:19.970272 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 09:54:19.995333 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:54:19.995333 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:54:19.995333 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 09:54:20.045230 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 09:54:20.800244 ignition[1004]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 09:54:20.823347 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:54:20.823347 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:54:20.823347 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:54:20.823347 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:54:20.823347 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 09:54:20.888361 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 9 09:54:21.193730 ignition[1004]: DEBUG 
: files: createFilesystemsFiles: createFiles: op(b): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 09:54:21.220233 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:54:21.220233 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:54:21.220233 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:54:21.220233 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:54:21.273879 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:54:21.273879 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:54:21.273879 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:54:21.273879 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:54:21.273879 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:54:21.347884 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:54:21.347884 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:54:21.347884 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:54:21.347884 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:54:21.417613 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3052324405" Feb 9 09:54:21.417613 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3052324405": device or resource busy Feb 9 09:54:21.417613 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3052324405", trying btrfs: device or resource busy Feb 9 09:54:21.417613 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3052324405" Feb 9 09:54:21.489635 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1006) Feb 9 09:54:21.441100 systemd[1]: mnt-oem3052324405.mount: Deactivated successfully. 
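Each GET in the files stage above is followed by a digest check before the artifact lands under /sysroot, which is what the "file matches expected sum of" lines record. A sketch of the equivalent check, with the crictl URL and SHA-512 digest copied from the log; the streaming details are our own choice, not Ignition's actual (Go) implementation:

    import hashlib
    import urllib.request

    URL = ("https://github.com/kubernetes-sigs/cri-tools/releases/download/"
           "v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz")
    EXPECTED = ("4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6"
                "257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c")

    # Hash the download in 64 KiB chunks rather than buffering it whole.
    digest = hashlib.sha512()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        for chunk in iter(lambda: resp.read(1 << 16), b""):
            digest.update(chunk)

    # Ignition refuses to write the file on a mismatch; here we just assert.
    assert digest.hexdigest() == EXPECTED, "checksum mismatch"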
Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3052324405" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem3052324405" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem3052324405" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(15): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem469222773" Feb 9 09:54:21.496490 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(15): op(16): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem469222773": device or resource busy Feb 9 09:54:21.496490 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(15): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem469222773", trying btrfs: device or resource busy Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem469222773" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem469222773" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [started] unmounting "/mnt/oem469222773" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [finished] unmounting "/mnt/oem469222773" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: op(19): [started] processing unit "waagent.service" Feb 9 09:54:21.496490 ignition[1004]: INFO : files: op(19): [finished] processing unit "waagent.service" Feb 9 09:54:21.785621 kernel: audit: type=1130 audit(1707472461.502:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.785652 kernel: audit: type=1130 audit(1707472461.622:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.785664 kernel: audit: type=1131 audit(1707472461.622:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:21.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.476854 systemd[1]: mnt-oem469222773.mount: Deactivated successfully. Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1a): [started] processing unit "nvidia.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1a): [finished] processing unit "nvidia.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1b): [started] processing unit "containerd.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1b): op(1c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1b): op(1c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1b): [finished] processing unit "containerd.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1d): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1d): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1f): [started] processing unit "prepare-critools.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1f): op(20): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(1f): [finished] processing unit "prepare-critools.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(21): [started] processing unit "prepare-helm.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(21): op(22): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(21): op(22): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(21): [finished] processing unit "prepare-helm.service" Feb 9 09:54:21.791287 ignition[1004]: INFO : files: op(23): [started] setting preset to enabled for "nvidia.service" Feb 9 09:54:22.094738 kernel: audit: type=1130 audit(1707472461.860:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:22.094769 kernel: audit: type=1130 audit(1707472461.959:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.094779 kernel: audit: type=1131 audit(1707472461.992:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.492630 systemd[1]: Finished ignition-files.service. Feb 9 09:54:22.101134 ignition[1004]: INFO : files: op(23): [finished] setting preset to enabled for "nvidia.service" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: op(25): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: op(27): [started] setting preset to enabled for "waagent.service" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: op(27): [finished] setting preset to enabled for "waagent.service" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: createResultFile: createFiles: op(28): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: createResultFile: createFiles: op(28): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:54:22.101134 ignition[1004]: INFO : files: files passed Feb 9 09:54:22.101134 ignition[1004]: INFO : Ignition finished successfully Feb 9 09:54:22.333848 kernel: audit: type=1130 audit(1707472462.105:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.333876 kernel: audit: type=1131 audit(1707472462.231:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:22.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.536730 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:54:22.344997 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:54:21.571499 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:54:21.587757 systemd[1]: Starting ignition-quench.service... Feb 9 09:54:21.605395 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:54:21.605493 systemd[1]: Finished ignition-quench.service. Feb 9 09:54:21.854850 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:54:21.860858 systemd[1]: Reached target ignition-complete.target. Feb 9 09:54:22.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.907185 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:54:21.943963 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:54:21.944079 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:54:22.505366 kernel: audit: type=1131 audit(1707472462.427:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.505391 kernel: audit: type=1131 audit(1707472462.482:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:21.992458 systemd[1]: Reached target initrd-fs.target. Feb 9 09:54:22.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.004481 systemd[1]: Reached target initrd.target. Feb 9 09:54:22.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.043757 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:54:22.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.055103 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:54:22.099968 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:54:22.148533 systemd[1]: Starting initrd-cleanup.service... 
Feb 9 09:54:22.560721 ignition[1043]: INFO : Ignition 2.14.0 Feb 9 09:54:22.560721 ignition[1043]: INFO : Stage: umount Feb 9 09:54:22.560721 ignition[1043]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:22.560721 ignition[1043]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:54:22.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.175690 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:54:22.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.625608 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:54:22.625608 ignition[1043]: INFO : umount: umount passed Feb 9 09:54:22.625608 ignition[1043]: INFO : Ignition finished successfully Feb 9 09:54:22.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.184270 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:54:22.199896 systemd[1]: Stopped target timers.target. Feb 9 09:54:22.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.215467 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:54:22.215530 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:54:22.263128 systemd[1]: Stopped target initrd.target. Feb 9 09:54:22.278156 systemd[1]: Stopped target basic.target. Feb 9 09:54:22.296465 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:54:22.316740 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:54:22.328368 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:54:22.339288 systemd[1]: Stopped target remote-fs.target. Feb 9 09:54:22.349496 systemd[1]: Stopped target remote-fs-pre.target. 
Feb 9 09:54:22.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.363915 systemd[1]: Stopped target sysinit.target. Feb 9 09:54:22.385065 systemd[1]: Stopped target local-fs.target. Feb 9 09:54:22.394408 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:54:22.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.404615 systemd[1]: Stopped target swap.target. Feb 9 09:54:22.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.413321 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:54:22.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.803000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:54:22.413380 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:54:22.457732 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:54:22.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.468936 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:54:22.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.468992 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:54:22.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.482354 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:54:22.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.482400 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:54:22.511009 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:54:22.511063 systemd[1]: Stopped ignition-files.service. Feb 9 09:54:22.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.519324 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 09:54:22.519365 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 09:54:22.538202 systemd[1]: Stopping ignition-mount.service... Feb 9 09:54:22.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.549681 systemd[1]: Stopping sysroot-boot.service... 
Feb 9 09:54:22.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.565764 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:54:22.952117 kernel: hv_netvsc 000d3ac5-1fcf-000d-3ac5-1fcf000d3ac5 eth0: Data path switched from VF: enP14666s1 Feb 9 09:54:22.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.565882 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:54:22.577816 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:54:22.577886 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:54:22.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.588101 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:54:22.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.588202 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:54:22.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.600615 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:54:23.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:23.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.600710 systemd[1]: Stopped ignition-mount.service. Feb 9 09:54:22.620882 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:54:22.621290 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:54:22.621328 systemd[1]: Stopped ignition-disks.service. Feb 9 09:54:22.630994 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:54:22.631041 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:54:22.642556 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 09:54:22.642594 systemd[1]: Stopped ignition-fetch.service. Feb 9 09:54:23.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:22.651357 systemd[1]: Stopped target network.target. Feb 9 09:54:22.662259 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:54:22.662313 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:54:22.674158 systemd[1]: Stopped target paths.target. Feb 9 09:54:22.683268 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:54:22.697703 systemd[1]: Stopped systemd-ask-password-console.path. 
Feb 9 09:54:23.104000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:54:23.104000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:54:23.104000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:54:23.104000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:54:23.104000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:54:22.703292 systemd[1]: Stopped target slices.target. Feb 9 09:54:22.711785 systemd[1]: Stopped target sockets.target. Feb 9 09:54:22.721954 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:54:22.721989 systemd[1]: Closed iscsid.socket. Feb 9 09:54:22.730770 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:54:22.730799 systemd[1]: Closed iscsiuio.socket. Feb 9 09:54:23.147727 iscsid[854]: iscsid shutting down. Feb 9 09:54:22.741171 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:54:22.741215 systemd[1]: Stopped ignition-setup.service. Feb 9 09:54:22.750618 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:54:22.759245 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:54:22.767878 systemd-networkd[847]: eth0: DHCPv6 lease lost Feb 9 09:54:23.147000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:54:22.769247 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:54:22.769352 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:54:23.148071 systemd-journald[275]: Received SIGTERM from PID 1 (n/a). Feb 9 09:54:22.778430 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:54:22.778534 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:54:22.788836 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:54:22.788930 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:54:22.798094 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:54:22.798135 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:54:22.807963 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:54:22.808014 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:54:22.818529 systemd[1]: Stopping network-cleanup.service... Feb 9 09:54:22.828012 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:54:22.828096 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:54:22.835085 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:54:22.835139 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:54:22.849371 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:54:22.849411 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:54:22.854515 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:54:22.866579 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:54:22.871984 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:54:22.872158 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:54:22.884239 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:54:22.884283 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:54:22.893540 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:54:22.893582 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:54:22.905688 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:54:22.905739 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:54:22.915981 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:54:22.916018 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:54:22.936363 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 9 09:54:22.936415 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:54:22.951531 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:54:22.962488 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 09:54:22.962571 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 09:54:22.980155 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:54:22.980215 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:54:22.985418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:54:22.985465 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:54:22.997139 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 09:54:22.997614 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:54:22.997700 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:54:23.047308 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:54:23.047411 systemd[1]: Stopped network-cleanup.service. Feb 9 09:54:23.057679 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:54:23.068300 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:54:23.102451 systemd[1]: Switching root. Feb 9 09:54:23.149149 systemd-journald[275]: Journal stopped Feb 9 09:54:35.601309 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:54:35.601330 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:54:35.601340 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:54:35.601350 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:54:35.601358 kernel: SELinux: policy capability open_perms=1 Feb 9 09:54:35.601366 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:54:35.601375 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:54:35.601382 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:54:35.601390 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:54:35.601398 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:54:35.601407 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:54:35.601415 kernel: kauditd_printk_skb: 38 callbacks suppressed Feb 9 09:54:35.601423 kernel: audit: type=1403 audit(1707472466.660:86): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:54:35.601433 systemd[1]: Successfully loaded SELinux policy in 392.847ms. Feb 9 09:54:35.601443 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.372ms. Feb 9 09:54:35.601455 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:54:35.601464 systemd[1]: Detected virtualization microsoft. Feb 9 09:54:35.601475 systemd[1]: Detected architecture arm64. Feb 9 09:54:35.601484 systemd[1]: Detected first boot. Feb 9 09:54:35.601493 systemd[1]: Hostname set to <ci-3510.3.2-a-37d4719b0b>. Feb 9 09:54:35.601502 systemd[1]: Initializing machine ID from random generator. Feb 9 09:54:35.601511 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
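"Detected first boot" plus "Initializing machine ID from random generator" above is where the 128-bit machine ID is minted; the same value later names the journal directory in this log (/run/log/journal/d805190a97fe45788096d8cb71a007a4). A rough sketch of the generation step; systemd additionally stamps UUID-v4 version bits and persists the result to /etc/machine-id, both of which are glossed over here:

    import secrets

    # 16 random bytes rendered as 32 lowercase hex characters.
    machine_id = secrets.token_hex(16)
    print(machine_id)  # e.g. d805190a97fe45788096d8cb71a007a4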
Feb 9 09:54:35.601522 kernel: audit: type=1400 audit(1707472468.555:87): avc: denied { associate } for pid=1095 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:54:35.601532 kernel: audit: type=1300 audit(1707472468.555:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=4000147672 a1=40000c8af8 a2=40000cea00 a3=32 items=0 ppid=1078 pid=1095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:35.601542 kernel: audit: type=1327 audit(1707472468.555:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:35.601551 kernel: audit: type=1400 audit(1707472468.565:88): avc: denied { associate } for pid=1095 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:54:35.601560 kernel: audit: type=1300 audit(1707472468.565:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147749 a2=1ed a3=0 items=2 ppid=1078 pid=1095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:35.601570 kernel: audit: type=1307 audit(1707472468.565:88): cwd="/" Feb 9 09:54:35.601579 kernel: audit: type=1302 audit(1707472468.565:88): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.601588 kernel: audit: type=1302 audit(1707472468.565:88): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:35.601597 kernel: audit: type=1327 audit(1707472468.565:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:35.601606 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:54:35.601615 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:54:35.601624 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:54:35.601635 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:54:35.601644 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:54:35.601654 systemd[1]: Created slice system-addon\x2dconfig.slice. 
Feb 9 09:54:35.601663 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:54:35.601673 systemd[1]: Created slice system-getty.slice. Feb 9 09:54:35.601682 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:54:35.601694 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:54:35.601705 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:54:35.601714 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:54:35.601724 systemd[1]: Created slice user.slice. Feb 9 09:54:35.601733 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:54:35.601742 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:54:35.601751 systemd[1]: Set up automount boot.automount. Feb 9 09:54:35.601760 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:54:35.601769 systemd[1]: Reached target integritysetup.target. Feb 9 09:54:35.601779 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:54:35.601789 systemd[1]: Reached target remote-fs.target. Feb 9 09:54:35.601798 systemd[1]: Reached target slices.target. Feb 9 09:54:35.601808 systemd[1]: Reached target swap.target. Feb 9 09:54:35.601817 systemd[1]: Reached target torcx.target. Feb 9 09:54:35.601826 systemd[1]: Reached target veritysetup.target. Feb 9 09:54:35.601836 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:54:35.601845 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:54:35.601854 kernel: audit: type=1400 audit(1707472475.134:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:54:35.601865 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:54:35.601875 kernel: audit: type=1335 audit(1707472475.140:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 09:54:35.601884 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:54:35.601893 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:54:35.601903 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:54:35.601912 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:54:35.601921 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:54:35.601932 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:54:35.601942 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:54:35.601951 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:54:35.601960 systemd[1]: Mounting media.mount... Feb 9 09:54:35.601970 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:54:35.601979 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:54:35.601990 systemd[1]: Mounting tmp.mount... Feb 9 09:54:35.601999 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:54:35.602009 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:54:35.602018 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:54:35.602028 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:54:35.602037 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:54:35.602056 systemd[1]: Starting modprobe@drm.service... Feb 9 09:54:35.602066 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:54:35.602076 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:54:35.602087 systemd[1]: Starting modprobe@loop.service... 
Feb 9 09:54:35.602097 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:54:35.602107 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 09:54:35.602117 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 09:54:35.602126 systemd[1]: Starting systemd-journald.service... Feb 9 09:54:35.602135 kernel: fuse: init (API version 7.34) Feb 9 09:54:35.602144 kernel: loop: module loaded Feb 9 09:54:35.602153 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:54:35.602164 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:54:35.602173 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:54:35.602183 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:54:35.602192 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:54:35.602201 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:54:35.602210 systemd[1]: Mounted media.mount. Feb 9 09:54:35.602224 systemd-journald[1204]: Journal started Feb 9 09:54:35.602264 systemd-journald[1204]: Runtime Journal (/run/log/journal/d805190a97fe45788096d8cb71a007a4) is 8.0M, max 78.6M, 70.6M free. Feb 9 09:54:35.140000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 09:54:35.598000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:54:35.624926 kernel: audit: type=1305 audit(1707472475.598:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:54:35.624993 systemd[1]: Started systemd-journald.service. Feb 9 09:54:35.598000 audit[1204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc78411f0 a2=4000 a3=1 items=0 ppid=1 pid=1204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:35.660211 kernel: audit: type=1300 audit(1707472475.598:91): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc78411f0 a2=4000 a3=1 items=0 ppid=1 pid=1204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:35.666337 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:54:35.671957 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:54:35.677868 systemd[1]: Mounted tmp.mount. Feb 9 09:54:35.598000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:54:35.691957 kernel: audit: type=1327 audit(1707472475.598:91): proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:54:35.692212 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:54:35.697853 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:54:35.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.725026 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:54:35.725316 systemd[1]: Finished modprobe@configfs.service. 
Feb 9 09:54:35.725471 kernel: audit: type=1130 audit(1707472475.665:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.725524 kernel: audit: type=1130 audit(1707472475.697:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.725545 kernel: audit: type=1130 audit(1707472475.715:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.777014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:54:35.777290 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:54:35.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.822410 kernel: audit: type=1130 audit(1707472475.776:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.822454 kernel: audit: type=1131 audit(1707472475.776:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.823190 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:54:35.823433 systemd[1]: Finished modprobe@drm.service. Feb 9 09:54:35.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:35.830288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:54:35.830513 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:54:35.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.837536 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:54:35.837761 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:54:35.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.843792 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:54:35.844057 systemd[1]: Finished modprobe@loop.service. Feb 9 09:54:35.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.849829 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:54:35.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.855624 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:54:35.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.863683 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:54:35.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.869600 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:54:35.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:35.876237 systemd[1]: Reached target network-pre.target. Feb 9 09:54:35.883505 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:54:35.889781 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:54:35.894819 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
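The "was skipped because of an unmet condition check" lines above are systemd unit conditions being evaluated; remount-root.service, for example, is gated on ConditionPathIsReadWrite=!/, so it only runs while / is still read-only. A sketch of that test, assuming statvfs as the closest userspace stand-in for the mount-flag check systemd performs:

    import os

    # ST_RDONLY is set in f_flag when the filesystem is mounted read-only.
    root_read_only = bool(os.statvfs("/").f_flag & os.ST_RDONLY)

    # ConditionPathIsReadWrite=!/ passes only on a read-only root.
    action = "run" if root_read_only else "skip"
    print(f"{action} remount-root.service")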
Feb 9 09:54:35.896530 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:54:35.902484 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:54:35.907535 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:54:35.908810 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:54:35.914014 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:54:35.915347 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:54:35.921036 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:54:35.927148 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:54:35.934136 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:54:35.939562 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:54:35.947427 udevadm[1245]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 09:54:35.947783 systemd-journald[1204]: Time spent on flushing to /var/log/journal/d805190a97fe45788096d8cb71a007a4 is 13.547ms for 1073 entries. Feb 9 09:54:35.947783 systemd-journald[1204]: System Journal (/var/log/journal/d805190a97fe45788096d8cb71a007a4) is 8.0M, max 2.6G, 2.6G free. Feb 9 09:54:36.099592 systemd-journald[1204]: Received client request to flush runtime journal. Feb 9 09:54:36.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:36.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:36.024757 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:54:36.044838 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:54:36.050706 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:54:36.100882 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:54:36.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:36.458571 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:54:36.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:36.467626 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:54:36.844265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:54:36.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.074574 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:54:37.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.082690 systemd[1]: Starting systemd-udevd.service... 
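The journald lines above quantify the runtime-to-persistent flush: 1073 entries moved to /var/log/journal in 13.547 ms, roughly 12.6 µs per entry, with the persistent journal at 8.0 MiB against a 2.6 GiB cap. The per-entry figure falls straight out of the logged numbers:

    # Per-entry cost of the systemd-journald flush, from the log line above.
    flush_ms, entries = 13.547, 1073
    print(f"{flush_ms / entries * 1000:.1f} us/entry")   # -> 12.6 us/entry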
Feb 9 09:54:37.108633 systemd-udevd[1256]: Using default interface naming scheme 'v252'. Feb 9 09:54:37.286030 systemd[1]: Started systemd-udevd.service. Feb 9 09:54:37.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.297939 systemd[1]: Starting systemd-networkd.service... Feb 9 09:54:37.328693 systemd[1]: Found device dev-ttyAMA0.device. Feb 9 09:54:37.376096 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:54:37.416073 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 09:54:37.416207 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 09:54:37.420070 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 09:54:37.422378 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 09:54:37.436000 audit[1268]: AVC avc: denied { confidentiality } for pid=1268 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:54:37.444118 kernel: Console: switching to colour dummy device 80x25 Feb 9 09:54:37.447068 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:54:37.448070 kernel: hv_vmbus: registering driver hv_balloon Feb 9 09:54:37.448143 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 09:54:37.448175 kernel: hv_vmbus: registering driver hv_utils Feb 9 09:54:37.489822 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 09:54:37.493811 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 09:54:37.493863 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 09:54:37.493882 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 9 09:54:37.493898 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 09:54:37.436000 audit[1268]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab174781a0 a1=aa2c a2=ffffb6dd24b0 a3=aaab173d2010 items=12 ppid=1256 pid=1268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:37.436000 audit: CWD cwd="/" Feb 9 09:54:37.436000 audit: PATH item=0 name=(null) inode=6717 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=1 name=(null) inode=10862 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=2 name=(null) inode=10862 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=3 name=(null) inode=10863 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=4 name=(null) inode=10862 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=5 name=(null) inode=10864 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=6 name=(null) inode=10862 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=7 name=(null) inode=10865 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=8 name=(null) inode=10862 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=9 name=(null) inode=10866 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=10 name=(null) inode=10862 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PATH item=11 name=(null) inode=10867 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:37.436000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 09:54:37.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.949820 systemd[1]: Started systemd-userdbd.service. Feb 9 09:54:38.205994 systemd-networkd[1277]: lo: Link UP Feb 9 09:54:38.206006 systemd-networkd[1277]: lo: Gained carrier Feb 9 09:54:38.206407 systemd-networkd[1277]: Enumeration completed Feb 9 09:54:38.206524 systemd[1]: Started systemd-networkd.service. Feb 9 09:54:38.213099 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1261) Feb 9 09:54:38.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.216060 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:54:38.236631 systemd-networkd[1277]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:54:38.245366 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:54:38.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.255725 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 09:54:38.257146 systemd[1]: Starting lvm2-activation-early.service... 
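In the udev-worker SYSCALL record above, arch=c00000b7 is AUDIT_ARCH_AARCH64: bit 0x80000000 flags a 64-bit ABI, bit 0x40000000 little-endian, and the low bits are the ELF machine number 0xb7 (183, aarch64); syscall=105 is init_module in the arm64 (asm-generic) numbering, which fits a udev worker loading a module whose new tracefs nodes then appear as the CREATE PATH items. A sketch of the arch decode (the machine-ID table here is an assumed two-entry subset):

    ARCH = 0xC00000B7
    EM_NAMES = {183: "aarch64", 62: "x86_64"}   # assumed subset of ELF machine IDs
    bits64 = bool(ARCH & 0x80000000)            # __AUDIT_ARCH_64BIT
    little = bool(ARCH & 0x40000000)            # __AUDIT_ARCH_LE
    print(EM_NAMES.get(ARCH & 0xFFFF),
          "64-bit" if bits64 else "32-bit",
          "LE" if little else "BE")             # -> aarch64 64-bit LE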
Feb 9 09:54:38.284061 kernel: mlx5_core 394a:00:02.0 enP14666s1: Link up Feb 9 09:54:38.311053 kernel: hv_netvsc 000d3ac5-1fcf-000d-3ac5-1fcf000d3ac5 eth0: Data path switched to VF: enP14666s1 Feb 9 09:54:38.312219 systemd-networkd[1277]: enP14666s1: Link UP Feb 9 09:54:38.312568 systemd-networkd[1277]: eth0: Link UP Feb 9 09:54:38.312578 systemd-networkd[1277]: eth0: Gained carrier Feb 9 09:54:38.316577 systemd-networkd[1277]: enP14666s1: Gained carrier Feb 9 09:54:38.330142 systemd-networkd[1277]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:54:38.524392 lvm[1335]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:54:38.568963 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:54:38.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.575169 systemd[1]: Reached target cryptsetup.target. Feb 9 09:54:38.581837 systemd[1]: Starting lvm2-activation.service... Feb 9 09:54:38.586679 lvm[1337]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:54:38.604908 systemd[1]: Finished lvm2-activation.service. Feb 9 09:54:38.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.610873 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:54:38.615998 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:54:38.616232 systemd[1]: Reached target local-fs.target. Feb 9 09:54:38.621453 systemd[1]: Reached target machines.target. Feb 9 09:54:38.627975 systemd[1]: Starting ldconfig.service... Feb 9 09:54:38.632647 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:54:38.632800 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:38.634150 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:54:38.640075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:54:38.647850 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:54:38.653128 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:54:38.653185 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:54:38.654310 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:54:38.683242 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:54:38.689854 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1340 (bootctl) Feb 9 09:54:38.691220 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:54:38.702106 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:54:38.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:38.712514 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:54:38.761220 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:54:39.461738 systemd-fsck[1349]: fsck.fat 4.2 (2021-01-31) Feb 9 09:54:39.461738 systemd-fsck[1349]: /dev/sda1: 236 files, 113719/258078 clusters Feb 9 09:54:39.462785 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:54:39.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:39.471725 systemd[1]: Mounting boot.mount... Feb 9 09:54:39.490364 systemd[1]: Mounted boot.mount. Feb 9 09:54:39.501850 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:54:39.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:39.587019 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:54:39.587672 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:54:39.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:39.890206 systemd-networkd[1277]: eth0: Gained IPv6LL Feb 9 09:54:39.894944 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:54:39.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.530998 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:54:40.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.538407 systemd[1]: Starting audit-rules.service... Feb 9 09:54:40.544361 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:54:40.550685 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:54:40.559456 systemd[1]: Starting systemd-resolved.service... Feb 9 09:54:40.568882 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:54:40.575025 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:54:40.580346 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:54:40.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.589809 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 9 09:54:40.591682 kernel: kauditd_printk_skb: 48 callbacks suppressed Feb 9 09:54:40.591732 kernel: audit: type=1130 audit(1707472480.584:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.632000 audit[1368]: SYSTEM_BOOT pid=1368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.634860 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:54:40.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.686538 kernel: audit: type=1127 audit(1707472480.632:131): pid=1368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.686796 kernel: audit: type=1130 audit(1707472480.660:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.736143 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:54:40.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.763199 kernel: audit: type=1130 audit(1707472480.740:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.761768 systemd[1]: Reached target time-set.target. Feb 9 09:54:40.813059 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:54:40.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.844066 kernel: audit: type=1130 audit(1707472480.818:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.850281 systemd-resolved[1365]: Positive Trust Anchors: Feb 9 09:54:40.850583 systemd-resolved[1365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:54:40.850664 systemd-resolved[1365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:54:40.915776 systemd-resolved[1365]: Using system hostname 'ci-3510.3.2-a-37d4719b0b'. 
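The "Positive Trust Anchors" line is systemd-resolved loading its compiled-in DNSSEC root anchor: ". IN DS 20326 8 2 <digest>" is the root zone's delegation signer with key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256) and digest type 2 (SHA-256); the negative anchors that follow are private-range reverse zones and local names excluded from validation. Splitting the record into its fields, using only the logged value:

    DS = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = DS.split()
    print(f"key_tag={key_tag} algorithm={alg} (RSASHA256) "
          f"digest_type={digest_type} (SHA-256) digest={len(digest) * 4} bits")
    # -> key_tag=20326 algorithm=8 (RSASHA256) digest_type=2 (SHA-256) digest=256 bits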
Feb 9 09:54:40.917406 systemd[1]: Started systemd-resolved.service. Feb 9 09:54:40.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.923826 systemd[1]: Reached target network.target. Feb 9 09:54:40.951338 systemd[1]: Reached target network-online.target. Feb 9 09:54:40.953065 kernel: audit: type=1130 audit(1707472480.922:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.958959 systemd[1]: Reached target nss-lookup.target. Feb 9 09:54:40.986692 systemd-timesyncd[1367]: Contacted time server 73.193.62.54:123 (0.flatcar.pool.ntp.org). Feb 9 09:54:40.987136 systemd-timesyncd[1367]: Initial clock synchronization to Fri 2024-02-09 09:54:41.011816 UTC. Feb 9 09:54:41.016000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:54:41.016000 audit[1385]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc0a19180 a2=420 a3=0 items=0 ppid=1361 pid=1385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:41.062564 kernel: audit: type=1305 audit(1707472481.016:136): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:54:41.062629 kernel: audit: type=1300 audit(1707472481.016:136): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc0a19180 a2=420 a3=0 items=0 ppid=1361 pid=1385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:41.016000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:54:41.063257 augenrules[1385]: No rules Feb 9 09:54:41.064422 systemd[1]: Finished audit-rules.service. Feb 9 09:54:41.078275 kernel: audit: type=1327 audit(1707472481.016:136): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:54:47.228677 ldconfig[1339]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:54:47.253923 systemd[1]: Finished ldconfig.service. Feb 9 09:54:47.261503 systemd[1]: Starting systemd-update-done.service... Feb 9 09:54:47.300573 systemd[1]: Finished systemd-update-done.service. Feb 9 09:54:47.306219 systemd[1]: Reached target sysinit.target. Feb 9 09:54:47.310967 systemd[1]: Started motdgen.path. Feb 9 09:54:47.315247 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:54:47.321978 systemd[1]: Started logrotate.timer. Feb 9 09:54:47.326219 systemd[1]: Started mdadm.timer. Feb 9 09:54:47.330261 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:54:47.335390 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:54:47.335509 systemd[1]: Reached target paths.target. Feb 9 09:54:47.340214 systemd[1]: Reached target timers.target. Feb 9 09:54:47.345144 systemd[1]: Listening on dbus.socket. 
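The PROCTITLE field in the audit event above is the triggering process's argv, hex-encoded with NUL separators; decoding it recovers the exact command behind the CONFIG_CHANGE/SYSCALL pair, consistent with augenrules reporting "No rules" after the empty rule file was loaded:

    HEX = ("2F7362696E2F617564697463746C002D52002F657463"
           "2F61756469742F61756469742E72756C6573")
    argv = [a.decode() for a in bytes.fromhex(HEX).split(b"\x00")]
    print(argv)   # -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']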
Feb 9 09:54:47.350624 systemd[1]: Starting docker.socket... Feb 9 09:54:47.355394 systemd[1]: Listening on sshd.socket. Feb 9 09:54:47.359854 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:47.360383 systemd[1]: Listening on docker.socket. Feb 9 09:54:47.364883 systemd[1]: Reached target sockets.target. Feb 9 09:54:47.369628 systemd[1]: Reached target basic.target. Feb 9 09:54:47.374464 systemd[1]: System is tainted: cgroupsv1 Feb 9 09:54:47.374607 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:54:47.374697 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:54:47.375957 systemd[1]: Starting containerd.service... Feb 9 09:54:47.381375 systemd[1]: Starting dbus.service... Feb 9 09:54:47.386176 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:54:47.392006 systemd[1]: Starting extend-filesystems.service... Feb 9 09:54:47.396732 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:54:47.397884 systemd[1]: Starting motdgen.service... Feb 9 09:54:47.402713 systemd[1]: Started nvidia.service. Feb 9 09:54:47.408588 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:54:47.414453 systemd[1]: Starting prepare-critools.service... Feb 9 09:54:47.420130 systemd[1]: Starting prepare-helm.service... Feb 9 09:54:47.425456 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:54:47.431253 systemd[1]: Starting sshd-keygen.service... Feb 9 09:54:47.436968 systemd[1]: Starting systemd-logind.service... Feb 9 09:54:47.441319 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:47.441386 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:54:47.442591 systemd[1]: Starting update-engine.service... Feb 9 09:54:47.448450 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:54:47.457671 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:54:47.457921 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:54:47.464744 jq[1399]: false Feb 9 09:54:47.470536 extend-filesystems[1400]: Found sda Feb 9 09:54:47.479521 jq[1421]: true Feb 9 09:54:47.484874 extend-filesystems[1400]: Found sda1 Feb 9 09:54:47.484874 extend-filesystems[1400]: Found sda2 Feb 9 09:54:47.484874 extend-filesystems[1400]: Found sda3 Feb 9 09:54:47.484874 extend-filesystems[1400]: Found usr Feb 9 09:54:47.484874 extend-filesystems[1400]: Found sda4 Feb 9 09:54:47.484874 extend-filesystems[1400]: Found sda6 Feb 9 09:54:47.484874 extend-filesystems[1400]: Found sda7 Feb 9 09:54:47.484874 extend-filesystems[1400]: Found sda9 Feb 9 09:54:47.484874 extend-filesystems[1400]: Checking size of /dev/sda9 Feb 9 09:54:47.479824 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:54:47.480093 systemd[1]: Finished motdgen.service. Feb 9 09:54:47.535645 jq[1438]: true Feb 9 09:54:47.488558 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:54:47.488792 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
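docker.socket and sshd.socket above are socket units: systemd opens and listens on the sockets itself and starts the matching service only on first connection, handing the sockets over as inherited file descriptors beginning at fd 3, announced via the LISTEN_FDS and LISTEN_PID environment variables. A minimal receiving-side sketch of that handoff:

    import os, socket

    SD_LISTEN_FDS_START = 3   # first inherited socket fd, per the sd_listen_fds(3) convention

    def inherited_sockets():
        # Only trust the fds if systemd addressed them to this exact process.
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]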
Feb 9 09:54:47.555164 env[1433]: time="2024-02-09T09:54:47.554546333Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:54:47.582282 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 09:54:47.582762 systemd-logind[1417]: New seat seat0. Feb 9 09:54:47.592617 env[1433]: time="2024-02-09T09:54:47.592576387Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:54:47.593004 env[1433]: time="2024-02-09T09:54:47.592976415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:47.594947 env[1433]: time="2024-02-09T09:54:47.594177461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:47.594947 env[1433]: time="2024-02-09T09:54:47.594216983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:47.594947 env[1433]: time="2024-02-09T09:54:47.594477942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:47.594947 env[1433]: time="2024-02-09T09:54:47.594496522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:47.594947 env[1433]: time="2024-02-09T09:54:47.594511818Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:54:47.594947 env[1433]: time="2024-02-09T09:54:47.594521349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:47.595964 env[1433]: time="2024-02-09T09:54:47.595931297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:47.596350 env[1433]: time="2024-02-09T09:54:47.596309702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:47.596535 env[1433]: time="2024-02-09T09:54:47.596503830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:47.596535 env[1433]: time="2024-02-09T09:54:47.596526254Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:54:47.596598 env[1433]: time="2024-02-09T09:54:47.596578710Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:54:47.596598 env[1433]: time="2024-02-09T09:54:47.596591043Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:54:47.631417 env[1433]: time="2024-02-09T09:54:47.631368337Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 9 09:54:47.631554 env[1433]: time="2024-02-09T09:54:47.631446701Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:54:47.631554 env[1433]: time="2024-02-09T09:54:47.631461237Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:54:47.631554 env[1433]: time="2024-02-09T09:54:47.631507486Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:54:47.631554 env[1433]: time="2024-02-09T09:54:47.631525385Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:54:47.631554 env[1433]: time="2024-02-09T09:54:47.631541442Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:54:47.631671 env[1433]: time="2024-02-09T09:54:47.631606032Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:54:47.632008 env[1433]: time="2024-02-09T09:54:47.631987039Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:54:47.632055 env[1433]: time="2024-02-09T09:54:47.632009984Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:54:47.632055 env[1433]: time="2024-02-09T09:54:47.632023839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:54:47.632105 env[1433]: time="2024-02-09T09:54:47.632061119Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:54:47.632105 env[1433]: time="2024-02-09T09:54:47.632075134Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:54:47.632253 env[1433]: time="2024-02-09T09:54:47.632233183Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:54:47.632345 env[1433]: time="2024-02-09T09:54:47.632327323Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:54:47.632714 env[1433]: time="2024-02-09T09:54:47.632683464Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:54:47.632748 env[1433]: time="2024-02-09T09:54:47.632727512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.632748 env[1433]: time="2024-02-09T09:54:47.632742247Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:54:47.632816 env[1433]: time="2024-02-09T09:54:47.632800590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.632850 env[1433]: time="2024-02-09T09:54:47.632817848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.632850 env[1433]: time="2024-02-09T09:54:47.632831263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.632905 env[1433]: time="2024-02-09T09:54:47.632851684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 9 09:54:47.632905 env[1433]: time="2024-02-09T09:54:47.632866580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.632905 env[1433]: time="2024-02-09T09:54:47.632878233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.632905 env[1433]: time="2024-02-09T09:54:47.632890286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.632905 env[1433]: time="2024-02-09T09:54:47.632901137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.632995 env[1433]: time="2024-02-09T09:54:47.632914992Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:54:47.633099 env[1433]: time="2024-02-09T09:54:47.633076885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.633141 env[1433]: time="2024-02-09T09:54:47.633103434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.633141 env[1433]: time="2024-02-09T09:54:47.633116167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:54:47.633312 env[1433]: time="2024-02-09T09:54:47.633139192Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:54:47.634404 env[1433]: time="2024-02-09T09:54:47.634371230Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:54:47.634461 env[1433]: time="2024-02-09T09:54:47.634407229Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:54:47.634461 env[1433]: time="2024-02-09T09:54:47.634440785Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:54:47.634507 env[1433]: time="2024-02-09T09:54:47.634487114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 09:54:47.634778 env[1433]: time="2024-02-09T09:54:47.634714478Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.634784473Z" level=info msg="Connect containerd service" Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.634832204Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.635574278Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.635679791Z" level=info msg="Start subscribing containerd event" Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.635723638Z" level=info msg="Start recovering state" Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.635794994Z" level=info msg="Start event monitor" Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.635829151Z" level=info msg="Start snapshots syncer" Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.635831113Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.635839001Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.635851975Z" level=info msg="Start streaming server" Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.635868032Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:54:47.656115 env[1433]: time="2024-02-09T09:54:47.655820262Z" level=info msg="containerd successfully booted in 0.102188s" Feb 9 09:54:47.656381 extend-filesystems[1400]: Old size kept for /dev/sda9 Feb 9 09:54:47.656381 extend-filesystems[1400]: Found sr0 Feb 9 09:54:47.636057 systemd[1]: Started containerd.service. Feb 9 09:54:47.704427 tar[1426]: linux-arm64/helm Feb 9 09:54:47.704680 tar[1425]: crictl Feb 9 09:54:47.704821 tar[1424]: ./ Feb 9 09:54:47.704821 tar[1424]: ./macvlan Feb 9 09:54:47.649818 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:54:47.650075 systemd[1]: Finished extend-filesystems.service. Feb 9 09:54:47.732651 bash[1463]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:54:47.733592 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:54:47.791558 tar[1424]: ./static Feb 9 09:54:47.825311 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 09:54:47.832387 dbus-daemon[1398]: [system] SELinux support is enabled Feb 9 09:54:47.832579 systemd[1]: Started dbus.service. Feb 9 09:54:47.838435 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:54:47.838459 systemd[1]: Reached target system-config.target. Feb 9 09:54:47.848157 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:54:47.848182 systemd[1]: Reached target user-config.target. Feb 9 09:54:47.855107 systemd[1]: Started systemd-logind.service. Feb 9 09:54:47.881798 tar[1424]: ./vlan Feb 9 09:54:47.937192 tar[1424]: ./portmap Feb 9 09:54:47.993712 tar[1424]: ./host-local Feb 9 09:54:48.061227 tar[1424]: ./vrf Feb 9 09:54:48.112602 tar[1424]: ./bridge Feb 9 09:54:48.184659 tar[1424]: ./tuning Feb 9 09:54:48.239961 update_engine[1420]: I0209 09:54:48.183986 1420 main.cc:92] Flatcar Update Engine starting Feb 9 09:54:48.243894 tar[1424]: ./firewall Feb 9 09:54:48.290334 systemd[1]: Started update-engine.service. Feb 9 09:54:48.290789 update_engine[1420]: I0209 09:54:48.290390 1420 update_check_scheduler.cc:74] Next update check in 2m23s Feb 9 09:54:48.296778 systemd[1]: Started locksmithd.service. Feb 9 09:54:48.322197 tar[1424]: ./host-device Feb 9 09:54:48.377921 tar[1424]: ./sbr Feb 9 09:54:48.429862 tar[1424]: ./loopback Feb 9 09:54:48.480033 tar[1424]: ./dhcp Feb 9 09:54:48.524668 systemd[1]: Finished prepare-critools.service. Feb 9 09:54:48.549431 tar[1426]: linux-arm64/LICENSE Feb 9 09:54:48.549431 tar[1426]: linux-arm64/README.md Feb 9 09:54:48.555200 systemd[1]: Finished prepare-helm.service. Feb 9 09:54:48.605786 tar[1424]: ./ptp Feb 9 09:54:48.638000 tar[1424]: ./ipvlan Feb 9 09:54:48.669413 tar[1424]: ./bandwidth Feb 9 09:54:48.764102 systemd[1]: Finished prepare-cni-plugins.service. 
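containerd came up in about 0.1 s after probing its snapshotter backends and keeping only what the host supports: aufs failed to modprobe, /var/lib/containerd sits on ext4 so the btrfs and zfs snapshotters were skipped, devmapper was unconfigured, and overlayfs became the effective default named in the CRI config dump. A sketch that asks the running daemon the same question through the stock ctr client (assumes ctr is on PATH and the default socket /run/containerd/containerd.sock):

    import subprocess

    # 'ctr plugins ls' reports each plugin's TYPE, ID, PLATFORMS and STATUS (ok/error/skip).
    out = subprocess.run(["ctr", "plugins", "ls"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)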
Feb 9 09:54:49.773768 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:54:50.050404 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:54:50.067650 systemd[1]: Finished sshd-keygen.service. Feb 9 09:54:50.075576 systemd[1]: Starting issuegen.service... Feb 9 09:54:50.081121 systemd[1]: Started waagent.service. Feb 9 09:54:50.086293 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:54:50.086515 systemd[1]: Finished issuegen.service. Feb 9 09:54:50.092713 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:54:50.120973 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:54:50.127964 systemd[1]: Started getty@tty1.service. Feb 9 09:54:50.133906 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:54:50.139285 systemd[1]: Reached target getty.target. Feb 9 09:54:50.148153 systemd[1]: Reached target multi-user.target. Feb 9 09:54:50.154704 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:54:50.164329 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:54:50.164577 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:54:50.170247 systemd[1]: Startup finished in 20.203s (kernel) + 23.643s (userspace) = 43.847s. Feb 9 09:54:50.994635 login[1551]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 09:54:50.996207 login[1552]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:54:51.059571 systemd[1]: Created slice user-500.slice. Feb 9 09:54:51.060574 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:54:51.062902 systemd-logind[1417]: New session 1 of user core. Feb 9 09:54:51.100178 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:54:51.101485 systemd[1]: Starting user@500.service... Feb 9 09:54:51.136711 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:54:51.432828 systemd[1557]: Queued start job for default target default.target. Feb 9 09:54:51.433492 systemd[1557]: Reached target paths.target. Feb 9 09:54:51.433521 systemd[1557]: Reached target sockets.target. Feb 9 09:54:51.433533 systemd[1557]: Reached target timers.target. Feb 9 09:54:51.433543 systemd[1557]: Reached target basic.target. Feb 9 09:54:51.433663 systemd[1]: Started user@500.service. Feb 9 09:54:51.434469 systemd[1]: Started session-1.scope. Feb 9 09:54:51.435513 systemd[1557]: Reached target default.target. Feb 9 09:54:51.436363 systemd[1557]: Startup finished in 293ms. Feb 9 09:54:51.995286 login[1551]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:54:51.999502 systemd[1]: Started session-2.scope. Feb 9 09:54:51.999972 systemd-logind[1417]: New session 2 of user core. 
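One detail in the "Startup finished" summary: 20.203 s plus 23.643 s is 43.846 s, not the printed 43.847 s. The total is computed from the raw timestamps and each figure is rounded for display independently, so the rounded components need not sum to the rounded total. Hypothetical raw values showing the effect:

    kernel, userspace = 20.20349, 23.64349   # hypothetical unrounded seconds
    print(round(kernel, 3), round(userspace, 3), round(kernel + userspace, 3))
    # -> 20.203 23.643 43.847  (even though 20.203 + 23.643 == 43.846)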
Feb 9 09:54:56.955148 waagent[1547]: 2024-02-09T09:54:56.955020Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 09:54:56.962984 waagent[1547]: 2024-02-09T09:54:56.962894Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 09:54:56.967940 waagent[1547]: 2024-02-09T09:54:56.967868Z INFO Daemon Daemon Python: 3.9.16 Feb 9 09:54:56.977044 waagent[1547]: 2024-02-09T09:54:56.973464Z INFO Daemon Daemon Run daemon Feb 9 09:54:56.980041 waagent[1547]: 2024-02-09T09:54:56.978456Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 09:54:56.996742 waagent[1547]: 2024-02-09T09:54:56.996600Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 09:54:57.013457 waagent[1547]: 2024-02-09T09:54:57.013318Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:54:57.024869 waagent[1547]: 2024-02-09T09:54:57.024781Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:54:57.033144 waagent[1547]: 2024-02-09T09:54:57.033056Z INFO Daemon Daemon Using waagent for provisioning Feb 9 09:54:57.041617 waagent[1547]: 2024-02-09T09:54:57.041542Z INFO Daemon Daemon Activate resource disk Feb 9 09:54:57.047902 waagent[1547]: 2024-02-09T09:54:57.047827Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 09:54:57.063817 waagent[1547]: 2024-02-09T09:54:57.063732Z INFO Daemon Daemon Found device: None Feb 9 09:54:57.069566 waagent[1547]: 2024-02-09T09:54:57.069488Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 09:54:57.080283 waagent[1547]: 2024-02-09T09:54:57.080201Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 09:54:57.094806 waagent[1547]: 2024-02-09T09:54:57.094737Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:54:57.101590 waagent[1547]: 2024-02-09T09:54:57.101522Z INFO Daemon Daemon Running default provisioning handler Feb 9 09:54:57.115729 waagent[1547]: 2024-02-09T09:54:57.115572Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 09:54:57.133421 waagent[1547]: 2024-02-09T09:54:57.133279Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:54:57.144874 waagent[1547]: 2024-02-09T09:54:57.144789Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:54:57.150959 waagent[1547]: 2024-02-09T09:54:57.150883Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 09:54:57.245202 waagent[1547]: 2024-02-09T09:54:57.244991Z INFO Daemon Daemon Successfully mounted dvd Feb 9 09:54:57.335545 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 09:54:57.396083 waagent[1547]: 2024-02-09T09:54:57.395902Z INFO Daemon Daemon Detect protocol endpoint Feb 9 09:54:57.402001 waagent[1547]: 2024-02-09T09:54:57.401912Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:54:57.408290 waagent[1547]: 2024-02-09T09:54:57.408213Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 09:54:57.415418 waagent[1547]: 2024-02-09T09:54:57.415346Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 09:54:57.421210 waagent[1547]: 2024-02-09T09:54:57.421141Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 09:54:57.426715 waagent[1547]: 2024-02-09T09:54:57.426646Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 09:54:57.525505 waagent[1547]: 2024-02-09T09:54:57.525382Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 09:54:57.533262 waagent[1547]: 2024-02-09T09:54:57.533213Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 09:54:57.539253 waagent[1547]: 2024-02-09T09:54:57.539174Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 09:54:58.231673 waagent[1547]: 2024-02-09T09:54:58.231518Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 09:54:58.253372 waagent[1547]: 2024-02-09T09:54:58.253289Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 09:54:58.261273 waagent[1547]: 2024-02-09T09:54:58.261191Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 09:54:58.340893 waagent[1547]: 2024-02-09T09:54:58.340761Z INFO Daemon Daemon Found private key matching thumbprint 575C47585D1459C00D5A0F07B441A63D325669A3 Feb 9 09:54:58.352203 waagent[1547]: 2024-02-09T09:54:58.352113Z INFO Daemon Daemon Certificate with thumbprint B8910F831702841F93B7EF734F4F2193722C3D6E has no matching private key. Feb 9 09:54:58.364667 waagent[1547]: 2024-02-09T09:54:58.364570Z INFO Daemon Daemon Fetch goal state completed Feb 9 09:54:58.396990 waagent[1547]: 2024-02-09T09:54:58.396932Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 93d120e2-a8a5-4f61-badd-892304646a1a New eTag: 4925086741118203080] Feb 9 09:54:58.409837 waagent[1547]: 2024-02-09T09:54:58.409747Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:54:58.428167 waagent[1547]: 2024-02-09T09:54:58.428083Z INFO Daemon Daemon Starting provisioning Feb 9 09:54:58.434402 waagent[1547]: 2024-02-09T09:54:58.434320Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 09:54:58.440631 waagent[1547]: 2024-02-09T09:54:58.440550Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-37d4719b0b] Feb 9 09:54:58.503897 waagent[1547]: 2024-02-09T09:54:58.503766Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-37d4719b0b] Feb 9 09:54:58.511446 waagent[1547]: 2024-02-09T09:54:58.511357Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 09:54:58.519440 waagent[1547]: 2024-02-09T09:54:58.519357Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 09:54:58.536485 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 09:54:58.536695 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 09:54:58.536752 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 09:54:58.536951 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:54:58.541097 systemd-networkd[1277]: eth0: DHCPv6 lease lost Feb 9 09:54:58.542369 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:54:58.542611 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:54:58.544645 systemd[1]: Starting systemd-networkd.service... 
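The address the daemon tests a route to, 168.63.129.16, is Azure's fixed virtual public IP for the platform "wireserver", the same host that issued the DHCP lease earlier; the protocol version negotiation above comes from an unauthenticated HTTP endpoint on it. A sketch of the same query, assuming the /?comp=versions path waagent uses and a vantage point inside an Azure VM:

    import urllib.request

    # The wireserver answers with an XML document listing Preferred and Supported
    # protocol versions (e.g. 2015-04-05, as negotiated in the log above).
    with urllib.request.urlopen("http://168.63.129.16/?comp=versions", timeout=5) as resp:
        print(resp.read().decode())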
Feb 9 09:54:58.577887 systemd-networkd[1604]: enP14666s1: Link UP Feb 9 09:54:58.577899 systemd-networkd[1604]: enP14666s1: Gained carrier Feb 9 09:54:58.578795 systemd-networkd[1604]: eth0: Link UP Feb 9 09:54:58.578806 systemd-networkd[1604]: eth0: Gained carrier Feb 9 09:54:58.579373 systemd-networkd[1604]: lo: Link UP Feb 9 09:54:58.579384 systemd-networkd[1604]: lo: Gained carrier Feb 9 09:54:58.579619 systemd-networkd[1604]: eth0: Gained IPv6LL Feb 9 09:54:58.580695 systemd-networkd[1604]: Enumeration completed Feb 9 09:54:58.580832 systemd[1]: Started systemd-networkd.service. Feb 9 09:54:58.582366 systemd-networkd[1604]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:54:58.582640 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:54:58.586428 waagent[1547]: 2024-02-09T09:54:58.586271Z INFO Daemon Daemon Create user account if not exists Feb 9 09:54:58.592886 waagent[1547]: 2024-02-09T09:54:58.592795Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 09:54:58.599757 waagent[1547]: 2024-02-09T09:54:58.599670Z INFO Daemon Daemon Configure sudoer Feb 9 09:54:58.605337 waagent[1547]: 2024-02-09T09:54:58.605258Z INFO Daemon Daemon Configure sshd Feb 9 09:54:58.606133 systemd-networkd[1604]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:54:58.617048 waagent[1547]: 2024-02-09T09:54:58.609926Z INFO Daemon Daemon Deploy ssh public key. Feb 9 09:54:58.618785 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:54:59.839048 waagent[1547]: 2024-02-09T09:54:59.838966Z INFO Daemon Daemon Provisioning complete Feb 9 09:54:59.863249 waagent[1547]: 2024-02-09T09:54:59.863178Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 09:54:59.870094 waagent[1547]: 2024-02-09T09:54:59.869999Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 09:54:59.881314 waagent[1547]: 2024-02-09T09:54:59.881235Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 09:55:00.184066 waagent[1614]: 2024-02-09T09:55:00.183899Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 09:55:00.185170 waagent[1614]: 2024-02-09T09:55:00.185112Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:00.185398 waagent[1614]: 2024-02-09T09:55:00.185353Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:00.204616 waagent[1614]: 2024-02-09T09:55:00.204528Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 09:55:00.204935 waagent[1614]: 2024-02-09T09:55:00.204888Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 09:55:00.274244 waagent[1614]: 2024-02-09T09:55:00.274108Z INFO ExtHandler ExtHandler Found private key matching thumbprint 575C47585D1459C00D5A0F07B441A63D325669A3 Feb 9 09:55:00.274603 waagent[1614]: 2024-02-09T09:55:00.274554Z INFO ExtHandler ExtHandler Certificate with thumbprint B8910F831702841F93B7EF734F4F2193722C3D6E has no matching private key. 
Feb 9 09:55:00.274929 waagent[1614]: 2024-02-09T09:55:00.274882Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 09:55:00.288797 waagent[1614]: 2024-02-09T09:55:00.288741Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 695bafb6-1f6b-4527-80a6-4f1bb0430720 New eTag: 4925086741118203080] Feb 9 09:55:00.289563 waagent[1614]: 2024-02-09T09:55:00.289507Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:55:00.384882 waagent[1614]: 2024-02-09T09:55:00.384739Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:55:00.395455 waagent[1614]: 2024-02-09T09:55:00.395369Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1614 Feb 9 09:55:00.399486 waagent[1614]: 2024-02-09T09:55:00.399412Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:55:00.400961 waagent[1614]: 2024-02-09T09:55:00.400905Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:55:00.507930 waagent[1614]: 2024-02-09T09:55:00.507819Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:55:00.508492 waagent[1614]: 2024-02-09T09:55:00.508438Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:55:00.516483 waagent[1614]: 2024-02-09T09:55:00.516430Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 09:55:00.517183 waagent[1614]: 2024-02-09T09:55:00.517129Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:55:00.518477 waagent[1614]: 2024-02-09T09:55:00.518416Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 09:55:00.519961 waagent[1614]: 2024-02-09T09:55:00.519893Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:55:00.520299 waagent[1614]: 2024-02-09T09:55:00.520229Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:00.521006 waagent[1614]: 2024-02-09T09:55:00.520935Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:00.521625 waagent[1614]: 2024-02-09T09:55:00.521560Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 09:55:00.521943 waagent[1614]: 2024-02-09T09:55:00.521885Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:55:00.521943 waagent[1614]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:55:00.521943 waagent[1614]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:55:00.521943 waagent[1614]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:55:00.521943 waagent[1614]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:00.521943 waagent[1614]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:00.521943 waagent[1614]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:00.524278 waagent[1614]: 2024-02-09T09:55:00.524112Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 09:55:00.524648 waagent[1614]: 2024-02-09T09:55:00.524575Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:00.525443 waagent[1614]: 2024-02-09T09:55:00.525369Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:00.526029 waagent[1614]: 2024-02-09T09:55:00.525959Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:55:00.526213 waagent[1614]: 2024-02-09T09:55:00.526163Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:55:00.526334 waagent[1614]: 2024-02-09T09:55:00.526290Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:55:00.527322 waagent[1614]: 2024-02-09T09:55:00.527276Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:55:00.527671 waagent[1614]: 2024-02-09T09:55:00.527195Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:55:00.527893 waagent[1614]: 2024-02-09T09:55:00.527818Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:55:00.528074 waagent[1614]: 2024-02-09T09:55:00.527982Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 09:55:00.528475 waagent[1614]: 2024-02-09T09:55:00.528406Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:55:00.541047 waagent[1614]: 2024-02-09T09:55:00.540955Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 09:55:00.541928 waagent[1614]: 2024-02-09T09:55:00.541880Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:55:00.542956 waagent[1614]: 2024-02-09T09:55:00.542904Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 09:55:00.568968 waagent[1614]: 2024-02-09T09:55:00.568889Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1604' Feb 9 09:55:00.603956 waagent[1614]: 2024-02-09T09:55:00.603891Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
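The routing table dump above is raw /proc/net/route, where Destination and Gateway are little-endian hex: 0114C80A is the gateway 10.200.20.1, 10813FA8 is the wire server 168.63.129.16, and FEA9FEA9 is the link-local metadata address 169.254.169.254. A short decoder:

    import socket
    import struct

    def hex_to_ip(h):
        # /proc/net/route stores IPv4 addresses as little-endian hex words
        return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the header row: Iface Destination Gateway ...
        for line in f:
            fields = line.split()
            if len(fields) < 3:
                continue
            iface, dest, gw = fields[0], fields[1], fields[2]
            print(iface, hex_to_ip(dest), "via", hex_to_ip(gw))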
Feb 9 09:55:00.698958 waagent[1614]: 2024-02-09T09:55:00.698818Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:55:00.698958 waagent[1614]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:55:00.698958 waagent[1614]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:55:00.698958 waagent[1614]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:1f:cf brd ff:ff:ff:ff:ff:ff Feb 9 09:55:00.698958 waagent[1614]: 3: enP14666s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:1f:cf brd ff:ff:ff:ff:ff:ff\ altname enP14666p0s2 Feb 9 09:55:00.698958 waagent[1614]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 09:55:00.698958 waagent[1614]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:55:00.698958 waagent[1614]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:55:00.698958 waagent[1614]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:55:00.698958 waagent[1614]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:55:00.698958 waagent[1614]: 2: eth0 inet6 fe80::20d:3aff:fec5:1fcf/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:55:00.742262 waagent[1614]: 2024-02-09T09:55:00.742201Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 09:55:00.884975 waagent[1547]: 2024-02-09T09:55:00.884766Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 09:55:00.888655 waagent[1547]: 2024-02-09T09:55:00.888600Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 09:55:02.024523 waagent[1643]: 2024-02-09T09:55:02.024423Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 09:55:02.025604 waagent[1643]: 2024-02-09T09:55:02.025548Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 09:55:02.025829 waagent[1643]: 2024-02-09T09:55:02.025784Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 09:55:02.034290 waagent[1643]: 2024-02-09T09:55:02.034171Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:55:02.034837 waagent[1643]: 2024-02-09T09:55:02.034785Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:02.035112 waagent[1643]: 2024-02-09T09:55:02.035033Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:02.047719 waagent[1643]: 2024-02-09T09:55:02.047630Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 09:55:02.059269 waagent[1643]: 2024-02-09T09:55:02.059205Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 09:55:02.060518 waagent[1643]: 2024-02-09T09:55:02.060461Z INFO ExtHandler Feb 9 09:55:02.060772 waagent[1643]: 2024-02-09T09:55:02.060724Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 41e415ad-362f-4a66-a9c0-fd13305599a7 eTag: 4925086741118203080 source: Fabric] Feb 9 09:55:02.061638 waagent[1643]: 2024-02-09T09:55:02.061583Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
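The "discovered update WALinuxAgent-2.9.1.1 -- exiting" entry above is waagent's self-update handoff: the running 2.6.0.2 agent exits and the daemon re-execs whichever downloaded agent under /var/lib/waagent parses as the highest version, which is why a new PID (1643) appears below. A sketch of that version pick, with the directory layout assumed from the WALinuxAgent-<version> naming in the log:

    from pathlib import Path

    def latest_agent(root="/var/lib/waagent"):
        # Collect numeric versions from WALinuxAgent-X.Y.Z directories
        versions = []
        for p in Path(root).glob("WALinuxAgent-*"):
            tail = p.name.split("-", 1)[1]
            if all(part.isdigit() for part in tail.split(".")):
                versions.append(tuple(int(part) for part in tail.split(".")))
        return ".".join(map(str, max(versions))) if versions else None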
Feb 9 09:55:02.062950 waagent[1643]: 2024-02-09T09:55:02.062893Z INFO ExtHandler Feb 9 09:55:02.063197 waagent[1643]: 2024-02-09T09:55:02.063149Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 09:55:02.069935 waagent[1643]: 2024-02-09T09:55:02.069885Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 09:55:02.070571 waagent[1643]: 2024-02-09T09:55:02.070526Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:55:02.093790 waagent[1643]: 2024-02-09T09:55:02.093726Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 09:55:02.170100 waagent[1643]: 2024-02-09T09:55:02.169938Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B8910F831702841F93B7EF734F4F2193722C3D6E', 'hasPrivateKey': False} Feb 9 09:55:02.171332 waagent[1643]: 2024-02-09T09:55:02.171275Z INFO ExtHandler Downloaded certificate {'thumbprint': '575C47585D1459C00D5A0F07B441A63D325669A3', 'hasPrivateKey': True} Feb 9 09:55:02.172529 waagent[1643]: 2024-02-09T09:55:02.172473Z INFO ExtHandler Fetch goal state completed Feb 9 09:55:02.199920 waagent[1643]: 2024-02-09T09:55:02.199833Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1643 Feb 9 09:55:02.203682 waagent[1643]: 2024-02-09T09:55:02.203601Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:55:02.205311 waagent[1643]: 2024-02-09T09:55:02.205251Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:55:02.210642 waagent[1643]: 2024-02-09T09:55:02.210592Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:55:02.211200 waagent[1643]: 2024-02-09T09:55:02.211143Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:55:02.219519 waagent[1643]: 2024-02-09T09:55:02.219460Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 09:55:02.220282 waagent[1643]: 2024-02-09T09:55:02.220223Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:55:02.226987 waagent[1643]: 2024-02-09T09:55:02.226871Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 09:55:02.230969 waagent[1643]: 2024-02-09T09:55:02.230906Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 09:55:02.232744 waagent[1643]: 2024-02-09T09:55:02.232673Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:55:02.232975 waagent[1643]: 2024-02-09T09:55:02.232907Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:02.233406 waagent[1643]: 2024-02-09T09:55:02.233337Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:02.234385 waagent[1643]: 2024-02-09T09:55:02.234311Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
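The "Downloaded certificate {'thumbprint': ...}" entries above follow the usual Azure convention: a thumbprint is the uppercase hex SHA-1 of the DER-encoded certificate, which is how the agent pairs goal-state certificates with the private keys it unpacked (hence one entry with hasPrivateKey True and one False). A sketch of the computation:

    import hashlib
    import ssl

    def thumbprint(pem_path):
        # SHA-1 over the DER form, matching strings like
        # 575C47585D1459C00D5A0F07B441A63D325669A3 in the log
        with open(pem_path) as f:
            der = ssl.PEM_cert_to_DER_cert(f.read())
        return hashlib.sha1(der).hexdigest().upper()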
Feb 9 09:55:02.234717 waagent[1643]: 2024-02-09T09:55:02.234653Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:55:02.234717 waagent[1643]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:55:02.234717 waagent[1643]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:55:02.234717 waagent[1643]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:55:02.234717 waagent[1643]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:02.234717 waagent[1643]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:02.234717 waagent[1643]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:02.237271 waagent[1643]: 2024-02-09T09:55:02.237140Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 09:55:02.237870 waagent[1643]: 2024-02-09T09:55:02.237794Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:02.238073 waagent[1643]: 2024-02-09T09:55:02.238000Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:02.238703 waagent[1643]: 2024-02-09T09:55:02.238518Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:55:02.239272 waagent[1643]: 2024-02-09T09:55:02.239213Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:55:02.239399 waagent[1643]: 2024-02-09T09:55:02.239352Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:55:02.241848 waagent[1643]: 2024-02-09T09:55:02.241698Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:55:02.242082 waagent[1643]: 2024-02-09T09:55:02.241979Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:55:02.245386 waagent[1643]: 2024-02-09T09:55:02.245240Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:55:02.245631 waagent[1643]: 2024-02-09T09:55:02.245547Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 9 09:55:02.245957 waagent[1643]: 2024-02-09T09:55:02.245888Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:55:02.268434 waagent[1643]: 2024-02-09T09:55:02.268345Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:55:02.268434 waagent[1643]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:55:02.268434 waagent[1643]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:55:02.268434 waagent[1643]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:1f:cf brd ff:ff:ff:ff:ff:ff Feb 9 09:55:02.268434 waagent[1643]: 3: enP14666s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c5:1f:cf brd ff:ff:ff:ff:ff:ff\ altname enP14666p0s2 Feb 9 09:55:02.268434 waagent[1643]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 09:55:02.268434 waagent[1643]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:55:02.268434 waagent[1643]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:55:02.268434 waagent[1643]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:55:02.268434 waagent[1643]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:55:02.268434 waagent[1643]: 2: eth0 inet6 fe80::20d:3aff:fec5:1fcf/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:55:02.273879 waagent[1643]: 2024-02-09T09:55:02.273788Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 09:55:02.277984 waagent[1643]: 2024-02-09T09:55:02.277850Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 09:55:02.297710 waagent[1643]: 2024-02-09T09:55:02.297642Z INFO ExtHandler ExtHandler Feb 9 09:55:02.297882 waagent[1643]: 2024-02-09T09:55:02.297827Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 59eac849-d0c5-4ca9-b26a-b9801da1a96e correlation 0e5e2555-d526-4742-b090-bfb4a1aef2c5 created: 2024-02-09T09:53:11.557808Z] Feb 9 09:55:02.298861 waagent[1643]: 2024-02-09T09:55:02.298795Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 09:55:02.300668 waagent[1643]: 2024-02-09T09:55:02.300610Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Feb 9 09:55:02.331973 waagent[1643]: 2024-02-09T09:55:02.331888Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 09:55:02.352670 waagent[1643]: 2024-02-09T09:55:02.352594Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6E734535-0B69-4064-BF68-B6C09987575C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 09:55:02.519098 waagent[1643]: 2024-02-09T09:55:02.518941Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 9 09:55:02.519098 waagent[1643]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:02.519098 waagent[1643]: pkts bytes target prot opt in out source destination Feb 9 09:55:02.519098 waagent[1643]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:02.519098 waagent[1643]: pkts bytes target prot opt in out source destination Feb 9 09:55:02.519098 waagent[1643]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:02.519098 waagent[1643]: pkts bytes target prot opt in out source destination Feb 9 09:55:02.519098 waagent[1643]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 09:55:02.519098 waagent[1643]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:55:02.519098 waagent[1643]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:55:02.526537 waagent[1643]: 2024-02-09T09:55:02.526407Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 09:55:02.526537 waagent[1643]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:02.526537 waagent[1643]: pkts bytes target prot opt in out source destination Feb 9 09:55:02.526537 waagent[1643]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:02.526537 waagent[1643]: pkts bytes target prot opt in out source destination Feb 9 09:55:02.526537 waagent[1643]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:02.526537 waagent[1643]: pkts bytes target prot opt in out source destination Feb 9 09:55:02.526537 waagent[1643]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 09:55:02.526537 waagent[1643]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:55:02.526537 waagent[1643]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:55:02.527099 waagent[1643]: 2024-02-09T09:55:02.527015Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 09:55:26.083697 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 9 09:55:33.330241 update_engine[1420]: I0209 09:55:33.330103 1420 update_attempter.cc:509] Updating boot flags... Feb 9 09:55:48.195794 systemd[1]: Created slice system-sshd.slice. Feb 9 09:55:48.196952 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.12.6:47130.service. Feb 9 09:55:48.823332 sshd[1738]: Accepted publickey for core from 10.200.12.6 port 47130 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:48.840994 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:48.845245 systemd[1]: Started session-3.scope. Feb 9 09:55:48.846178 systemd-logind[1417]: New session 3 of user core. Feb 9 09:55:49.198122 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.12.6:47134.service. Feb 9 09:55:49.652287 sshd[1743]: Accepted publickey for core from 10.200.12.6 port 47134 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:49.653878 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:49.657486 systemd-logind[1417]: New session 4 of user core. Feb 9 09:55:49.657863 systemd[1]: Started session-4.scope. Feb 9 09:55:49.984109 sshd[1743]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:49.986927 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:55:49.988305 systemd[1]: sshd@1-10.200.20.37:22-10.200.12.6:47134.service: Deactivated successfully. Feb 9 09:55:49.988973 systemd[1]: session-4.scope: Deactivated successfully. 
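The three OUTPUT rules listed above implement waagent's wire-server protection: DNS (tcp/53) to 168.63.129.16 is allowed, root-owned (UID 0) traffic is allowed, and any other new connection to the wire server is dropped; the rule order is what makes the DROP a catch-all. Roughly equivalent programming, sketched with plain iptables calls rather than waagent's own code, with the "Set block dev timeout: sda" entry shown as the separate sysfs write it corresponds to:

    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        # ACCEPT rules must precede the DROP for the order seen in the listing
        ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in RULES:
        subprocess.run(["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER] + rule,
                       check=True)

    # "Set block dev timeout: sda with timeout: 300" is a sysfs write:
    with open("/sys/block/sda/device/timeout", "w") as f:
        f.write("300")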
Feb 9 09:55:49.990089 systemd-logind[1417]: Removed session 4. Feb 9 09:55:50.059446 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.12.6:47140.service. Feb 9 09:55:50.512829 sshd[1750]: Accepted publickey for core from 10.200.12.6 port 47140 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:50.514461 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:50.518609 systemd[1]: Started session-5.scope. Feb 9 09:55:50.519780 systemd-logind[1417]: New session 5 of user core. Feb 9 09:55:50.840633 sshd[1750]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:50.843093 systemd[1]: sshd@2-10.200.20.37:22-10.200.12.6:47140.service: Deactivated successfully. Feb 9 09:55:50.843898 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:55:50.843952 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:55:50.844994 systemd-logind[1417]: Removed session 5. Feb 9 09:55:50.907888 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.12.6:47156.service. Feb 9 09:55:51.315243 sshd[1757]: Accepted publickey for core from 10.200.12.6 port 47156 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:51.316767 sshd[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:51.320760 systemd[1]: Started session-6.scope. Feb 9 09:55:51.321228 systemd-logind[1417]: New session 6 of user core. Feb 9 09:55:51.617324 sshd[1757]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:51.619879 systemd[1]: sshd@3-10.200.20.37:22-10.200.12.6:47156.service: Deactivated successfully. Feb 9 09:55:51.620585 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:55:51.621576 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:55:51.622376 systemd-logind[1417]: Removed session 6. Feb 9 09:55:51.686805 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.12.6:47160.service. Feb 9 09:55:52.105851 sshd[1764]: Accepted publickey for core from 10.200.12.6 port 47160 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:52.107071 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:52.110701 systemd-logind[1417]: New session 7 of user core. Feb 9 09:55:52.111103 systemd[1]: Started session-7.scope. Feb 9 09:55:52.596453 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:55:52.596654 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:55:53.283070 systemd[1]: Starting docker.service... 
Feb 9 09:55:53.320349 env[1783]: time="2024-02-09T09:55:53.320293101Z" level=info msg="Starting up" Feb 9 09:55:53.321396 env[1783]: time="2024-02-09T09:55:53.321373643Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:55:53.321486 env[1783]: time="2024-02-09T09:55:53.321473453Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:55:53.321547 env[1783]: time="2024-02-09T09:55:53.321533458Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:55:53.321604 env[1783]: time="2024-02-09T09:55:53.321592584Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:55:53.323168 env[1783]: time="2024-02-09T09:55:53.323146811Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:55:53.323261 env[1783]: time="2024-02-09T09:55:53.323248061Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:55:53.323323 env[1783]: time="2024-02-09T09:55:53.323309587Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:55:53.323377 env[1783]: time="2024-02-09T09:55:53.323365592Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:55:53.329429 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4142807974-merged.mount: Deactivated successfully. Feb 9 09:55:53.455272 env[1783]: time="2024-02-09T09:55:53.455234329Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 09:55:53.455272 env[1783]: time="2024-02-09T09:55:53.455262692Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 09:55:53.455475 env[1783]: time="2024-02-09T09:55:53.455426108Z" level=info msg="Loading containers: start." Feb 9 09:55:53.615059 kernel: Initializing XFRM netlink socket Feb 9 09:55:53.637728 env[1783]: time="2024-02-09T09:55:53.637690941Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 09:55:53.787132 systemd-networkd[1604]: docker0: Link UP Feb 9 09:55:53.824292 env[1783]: time="2024-02-09T09:55:53.824254102Z" level=info msg="Loading containers: done." Feb 9 09:55:53.835330 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck51825925-merged.mount: Deactivated successfully. Feb 9 09:55:53.860944 env[1783]: time="2024-02-09T09:55:53.860903696Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:55:53.861335 env[1783]: time="2024-02-09T09:55:53.861318215Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:55:53.861520 env[1783]: time="2024-02-09T09:55:53.861506353Z" level=info msg="Daemon has completed initialization" Feb 9 09:55:53.909352 systemd[1]: Started docker.service. Feb 9 09:55:53.918364 env[1783]: time="2024-02-09T09:55:53.918302775Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:55:53.935110 systemd[1]: Reloading. 
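The "Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used" notice above has a persistent form: the "bip" key in /etc/docker/daemon.json, which sets the bridge's own address and implies the subnet. Illustrative only; the subnet below is an example, not taken from this host:

    import json
    from pathlib import Path

    # Example value: move docker0 off the default 172.17.0.0/16
    Path("/etc/docker/daemon.json").write_text(
        json.dumps({"bip": "172.18.0.1/16"}, indent=2) + "\n"
    )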
Feb 9 09:55:53.980663 /usr/lib/systemd/system-generators/torcx-generator[1916]: time="2024-02-09T09:55:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:55:53.981060 /usr/lib/systemd/system-generators/torcx-generator[1916]: time="2024-02-09T09:55:53Z" level=info msg="torcx already run" Feb 9 09:55:54.072478 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:55:54.072495 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:55:54.089377 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:55:54.161108 systemd[1]: Started kubelet.service. Feb 9 09:55:54.218373 kubelet[1978]: E0209 09:55:54.218312 1978 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:55:54.220369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:55:54.220541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:55:58.272925 env[1433]: time="2024-02-09T09:55:58.272885485Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 09:55:59.306377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3529416071.mount: Deactivated successfully. 
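The kubelet crash above ("the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set") recurs at 09:56:04 and 09:56:14 below and only clears after the 09:56:18 reload, once the kubelet comes up against containerd. The missing piece is a --container-runtime-endpoint pointing at the CRI socket; a hypothetical drop-in sketch (the file name and the KUBELET_EXTRA_ARGS plumbing are assumptions, not this image's actual unit):

    from pathlib import Path

    # Hypothetical drop-in; socket path is containerd's conventional CRI endpoint
    dropin = Path("/etc/systemd/system/kubelet.service.d/10-cri.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text(
        "[Service]\n"
        "Environment=KUBELET_EXTRA_ARGS="
        "--container-runtime-endpoint=unix:///run/containerd/containerd.sock\n"
    )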
Feb 9 09:56:01.499132 env[1433]: time="2024-02-09T09:56:01.499083968Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:01.512655 env[1433]: time="2024-02-09T09:56:01.512616362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:01.520238 env[1433]: time="2024-02-09T09:56:01.520186100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:01.525133 env[1433]: time="2024-02-09T09:56:01.525089395Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:01.525966 env[1433]: time="2024-02-09T09:56:01.525928579Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 09:56:01.535335 env[1433]: time="2024-02-09T09:56:01.535275453Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 09:56:03.379071 env[1433]: time="2024-02-09T09:56:03.379004665Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:03.400914 env[1433]: time="2024-02-09T09:56:03.400856288Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:03.416948 env[1433]: time="2024-02-09T09:56:03.416911092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:03.428711 env[1433]: time="2024-02-09T09:56:03.428672264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:03.429741 env[1433]: time="2024-02-09T09:56:03.429714300Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 09:56:03.439627 env[1433]: time="2024-02-09T09:56:03.439587255Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 09:56:04.450960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:56:04.451157 systemd[1]: Stopped kubelet.service. Feb 9 09:56:04.452605 systemd[1]: Started kubelet.service. 
Feb 9 09:56:04.493905 kubelet[2008]: E0209 09:56:04.493840 2008 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:56:04.496600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:56:04.496744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:56:04.917420 env[1433]: time="2024-02-09T09:56:04.916972220Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.932599 env[1433]: time="2024-02-09T09:56:04.932554520Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.944977 env[1433]: time="2024-02-09T09:56:04.944940034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.955856 env[1433]: time="2024-02-09T09:56:04.955819203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.956529 env[1433]: time="2024-02-09T09:56:04.956502131Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 09:56:04.965260 env[1433]: time="2024-02-09T09:56:04.965217186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:56:06.191183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199425447.mount: Deactivated successfully. 
Feb 9 09:56:06.689484 env[1433]: time="2024-02-09T09:56:06.689438039Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:06.698865 env[1433]: time="2024-02-09T09:56:06.698819988Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:06.705127 env[1433]: time="2024-02-09T09:56:06.705090169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:06.710087 env[1433]: time="2024-02-09T09:56:06.710056942Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:06.710483 env[1433]: time="2024-02-09T09:56:06.710452208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:56:06.719597 env[1433]: time="2024-02-09T09:56:06.719556099Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:56:07.435984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956110596.mount: Deactivated successfully. Feb 9 09:56:07.485854 env[1433]: time="2024-02-09T09:56:07.485776324Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:07.500768 env[1433]: time="2024-02-09T09:56:07.500717181Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:07.508410 env[1433]: time="2024-02-09T09:56:07.508372761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:07.519751 env[1433]: time="2024-02-09T09:56:07.519712942Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:07.520179 env[1433]: time="2024-02-09T09:56:07.520147851Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:56:07.528938 env[1433]: time="2024-02-09T09:56:07.528896182Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 09:56:08.727147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341780433.mount: Deactivated successfully. 
Feb 9 09:56:11.727051 env[1433]: time="2024-02-09T09:56:11.726984445Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:11.742860 env[1433]: time="2024-02-09T09:56:11.742806180Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:11.752579 env[1433]: time="2024-02-09T09:56:11.752544396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:11.763923 env[1433]: time="2024-02-09T09:56:11.763873985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:11.765118 env[1433]: time="2024-02-09T09:56:11.764697874Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 09:56:11.777260 env[1433]: time="2024-02-09T09:56:11.777215814Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:56:12.563488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749781685.mount: Deactivated successfully. Feb 9 09:56:13.156858 env[1433]: time="2024-02-09T09:56:13.156812808Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:13.183983 env[1433]: time="2024-02-09T09:56:13.183945094Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:13.190265 env[1433]: time="2024-02-09T09:56:13.190233247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:13.200808 env[1433]: time="2024-02-09T09:56:13.200775000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:13.201296 env[1433]: time="2024-02-09T09:56:13.201261948Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 09:56:14.700925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 09:56:14.701123 systemd[1]: Stopped kubelet.service. Feb 9 09:56:14.702554 systemd[1]: Started kubelet.service. Feb 9 09:56:14.764433 kubelet[2092]: E0209 09:56:14.764365 2092 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:56:14.766447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:56:14.766587 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 09:56:18.530617 systemd[1]: Stopped kubelet.service. Feb 9 09:56:18.547242 systemd[1]: Reloading. Feb 9 09:56:18.590727 /usr/lib/systemd/system-generators/torcx-generator[2123]: time="2024-02-09T09:56:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:56:18.591118 /usr/lib/systemd/system-generators/torcx-generator[2123]: time="2024-02-09T09:56:18Z" level=info msg="torcx already run" Feb 9 09:56:18.686055 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:18.686224 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:18.703509 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:18.789483 systemd[1]: Started kubelet.service. Feb 9 09:56:18.833811 kubelet[2189]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:18.833811 kubelet[2189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:18.834179 kubelet[2189]: I0209 09:56:18.833890 2189 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:56:18.836190 kubelet[2189]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:18.836275 kubelet[2189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:19.590863 kubelet[2189]: I0209 09:56:19.590836 2189 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:56:19.591023 kubelet[2189]: I0209 09:56:19.591011 2189 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:56:19.591333 kubelet[2189]: I0209 09:56:19.591318 2189 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:56:19.596372 kubelet[2189]: E0209 09:56:19.596339 2189 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.596463 kubelet[2189]: I0209 09:56:19.596390 2189 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:19.596660 kubelet[2189]: W0209 09:56:19.596646 2189 machine.go:65] Cannot read vendor id correctly, set empty. 
Feb 9 09:56:19.598845 kubelet[2189]: I0209 09:56:19.598819 2189 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:56:19.599311 kubelet[2189]: I0209 09:56:19.599297 2189 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:56:19.599466 kubelet[2189]: I0209 09:56:19.599454 2189 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:56:19.599586 kubelet[2189]: I0209 09:56:19.599575 2189 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:56:19.599650 kubelet[2189]: I0209 09:56:19.599642 2189 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:56:19.599795 kubelet[2189]: I0209 09:56:19.599784 2189 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:19.602742 kubelet[2189]: I0209 09:56:19.602718 2189 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:56:19.602742 kubelet[2189]: I0209 09:56:19.602745 2189 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:56:19.602834 kubelet[2189]: I0209 09:56:19.602767 2189 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:56:19.602834 kubelet[2189]: I0209 09:56:19.602777 2189 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:56:19.603578 kubelet[2189]: W0209 09:56:19.603514 2189 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-37d4719b0b&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.603578 kubelet[2189]: E0209 09:56:19.603576 2189 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-37d4719b0b&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.603669 kubelet[2189]: W0209 09:56:19.603624 2189 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get 
"https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.603669 kubelet[2189]: E0209 09:56:19.603645 2189 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.603757 kubelet[2189]: I0209 09:56:19.603735 2189 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:56:19.604010 kubelet[2189]: W0209 09:56:19.603982 2189 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:56:19.604381 kubelet[2189]: I0209 09:56:19.604349 2189 server.go:1186] "Started kubelet" Feb 9 09:56:19.607882 kubelet[2189]: E0209 09:56:19.607772 2189 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37d4719b0b.17b22943bb092751", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37d4719b0b", UID:"ci-3510.3.2-a-37d4719b0b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37d4719b0b"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 56, 19, 604326225, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 56, 19, 604326225, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.37:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.37:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:56:19.608261 kubelet[2189]: I0209 09:56:19.608245 2189 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:56:19.608947 kubelet[2189]: I0209 09:56:19.608930 2189 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:56:19.615168 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 09:56:19.615685 kubelet[2189]: I0209 09:56:19.615658 2189 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:56:19.615795 kubelet[2189]: E0209 09:56:19.615780 2189 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:56:19.615859 kubelet[2189]: E0209 09:56:19.615850 2189 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:56:19.618397 kubelet[2189]: E0209 09:56:19.618378 2189 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-37d4719b0b\" not found" Feb 9 09:56:19.618530 kubelet[2189]: I0209 09:56:19.618517 2189 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:56:19.618666 kubelet[2189]: I0209 09:56:19.618653 2189 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:56:19.619133 kubelet[2189]: W0209 09:56:19.619098 2189 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.619243 kubelet[2189]: E0209 09:56:19.619231 2189 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.620639 kubelet[2189]: E0209 09:56:19.620619 2189 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-37d4719b0b?timeout=10s": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.746527 kubelet[2189]: I0209 09:56:19.746495 2189 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:56:19.790305 kubelet[2189]: I0209 09:56:19.790272 2189 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:56:19.790305 kubelet[2189]: I0209 09:56:19.790296 2189 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:56:19.790305 kubelet[2189]: I0209 09:56:19.790312 2189 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:56:19.790475 kubelet[2189]: E0209 09:56:19.790359 2189 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:56:19.791387 kubelet[2189]: W0209 09:56:19.791340 2189 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.791714 kubelet[2189]: E0209 09:56:19.791697 2189 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.821934 kubelet[2189]: E0209 09:56:19.821890 2189 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-37d4719b0b?timeout=10s": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:19.892118 kubelet[2189]: E0209 09:56:19.891109 2189 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 09:56:19.917582 kubelet[2189]: I0209 09:56:19.917548 2189 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:19.918399 kubelet[2189]: I0209 09:56:19.918385 2189 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:56:19.918505 kubelet[2189]: I0209 09:56:19.918495 2189 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:56:19.918573 kubelet[2189]: I0209 09:56:19.918564 2189 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:19.918721 kubelet[2189]: E0209 09:56:19.918586 2189 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:19.928028 kubelet[2189]: I0209 09:56:19.928009 2189 policy_none.go:49] "None policy: Start" Feb 9 09:56:19.928837 kubelet[2189]: I0209 09:56:19.928816 2189 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:56:19.928888 kubelet[2189]: I0209 09:56:19.928843 2189 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:56:19.945441 kubelet[2189]: I0209 09:56:19.945415 2189 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:56:19.946824 kubelet[2189]: I0209 09:56:19.946808 2189 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:56:19.947446 kubelet[2189]: E0209 09:56:19.947432 2189 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-37d4719b0b\" not found" Feb 9 09:56:20.091886 kubelet[2189]: I0209 09:56:20.091853 2189 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:20.093445 kubelet[2189]: I0209 09:56:20.093428 2189 topology_manager.go:210] "Topology Admit 
Handler" Feb 9 09:56:20.095786 kubelet[2189]: I0209 09:56:20.095767 2189 status_manager.go:698] "Failed to get status for pod" podUID=6749993fbe8175f0dcfda90deb18ed3f pod="kube-system/kube-apiserver-ci-3510.3.2-a-37d4719b0b" err="Get \"https://10.200.20.37:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-37d4719b0b\": dial tcp 10.200.20.37:6443: connect: connection refused" Feb 9 09:56:20.100045 kubelet[2189]: I0209 09:56:20.100010 2189 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:20.101880 kubelet[2189]: I0209 09:56:20.101857 2189 status_manager.go:698] "Failed to get status for pod" podUID=d073da21568c7b4efb9bb9c0d14bc29c pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" err="Get \"https://10.200.20.37:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-37d4719b0b\": dial tcp 10.200.20.37:6443: connect: connection refused" Feb 9 09:56:20.105943 kubelet[2189]: I0209 09:56:20.105923 2189 status_manager.go:698] "Failed to get status for pod" podUID=d657c48a523e3ccc8be223a00ee9a2eb pod="kube-system/kube-scheduler-ci-3510.3.2-a-37d4719b0b" err="Get \"https://10.200.20.37:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-37d4719b0b\": dial tcp 10.200.20.37:6443: connect: connection refused" Feb 9 09:56:20.120139 kubelet[2189]: I0209 09:56:20.120123 2189 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.120519 kubelet[2189]: E0209 09:56:20.120505 2189 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.122663 kubelet[2189]: I0209 09:56:20.122649 2189 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d073da21568c7b4efb9bb9c0d14bc29c-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" (UID: \"d073da21568c7b4efb9bb9c0d14bc29c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.122761 kubelet[2189]: I0209 09:56:20.122751 2189 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d073da21568c7b4efb9bb9c0d14bc29c-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" (UID: \"d073da21568c7b4efb9bb9c0d14bc29c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.122837 kubelet[2189]: I0209 09:56:20.122827 2189 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d657c48a523e3ccc8be223a00ee9a2eb-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-37d4719b0b\" (UID: \"d657c48a523e3ccc8be223a00ee9a2eb\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.122914 kubelet[2189]: I0209 09:56:20.122904 2189 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6749993fbe8175f0dcfda90deb18ed3f-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-37d4719b0b\" (UID: \"6749993fbe8175f0dcfda90deb18ed3f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.122989 kubelet[2189]: I0209 09:56:20.122980 2189 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6749993fbe8175f0dcfda90deb18ed3f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-37d4719b0b\" (UID: \"6749993fbe8175f0dcfda90deb18ed3f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.123080 kubelet[2189]: I0209 09:56:20.123070 2189 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d073da21568c7b4efb9bb9c0d14bc29c-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" (UID: \"d073da21568c7b4efb9bb9c0d14bc29c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.123160 kubelet[2189]: I0209 09:56:20.123150 2189 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d073da21568c7b4efb9bb9c0d14bc29c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" (UID: \"d073da21568c7b4efb9bb9c0d14bc29c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.123228 kubelet[2189]: I0209 09:56:20.123219 2189 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6749993fbe8175f0dcfda90deb18ed3f-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-37d4719b0b\" (UID: \"6749993fbe8175f0dcfda90deb18ed3f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.123300 kubelet[2189]: I0209 09:56:20.123291 2189 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d073da21568c7b4efb9bb9c0d14bc29c-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" (UID: \"d073da21568c7b4efb9bb9c0d14bc29c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.223155 kubelet[2189]: E0209 09:56:20.223119 2189 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-37d4719b0b?timeout=10s": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:20.399976 env[1433]: time="2024-02-09T09:56:20.399842815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-37d4719b0b,Uid:6749993fbe8175f0dcfda90deb18ed3f,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:20.401557 env[1433]: time="2024-02-09T09:56:20.401448732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-37d4719b0b,Uid:d073da21568c7b4efb9bb9c0d14bc29c,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:20.406232 env[1433]: time="2024-02-09T09:56:20.406189917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-37d4719b0b,Uid:d657c48a523e3ccc8be223a00ee9a2eb,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:20.523132 kubelet[2189]: I0209 09:56:20.522347 2189 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.523132 kubelet[2189]: E0209 09:56:20.522683 2189 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" 
node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:20.979971 kubelet[2189]: W0209 09:56:20.979922 2189 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:20.980300 kubelet[2189]: E0209 09:56:20.979978 2189 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:21.024501 kubelet[2189]: E0209 09:56:21.024461 2189 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-37d4719b0b?timeout=10s": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:21.097340 kubelet[2189]: W0209 09:56:21.097257 2189 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-37d4719b0b&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:21.097340 kubelet[2189]: E0209 09:56:21.097313 2189 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-37d4719b0b&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:21.130238 kubelet[2189]: W0209 09:56:21.130168 2189 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:21.130238 kubelet[2189]: E0209 09:56:21.130220 2189 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:21.183810 kubelet[2189]: W0209 09:56:21.183772 2189 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:21.183810 kubelet[2189]: E0209 09:56:21.183813 2189 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:21.297227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2152157656.mount: Deactivated successfully. 
Feb 9 09:56:21.325123 kubelet[2189]: I0209 09:56:21.324781 2189 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:21.325123 kubelet[2189]: E0209 09:56:21.325099 2189 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:21.399480 env[1433]: time="2024-02-09T09:56:21.399429588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.408281 env[1433]: time="2024-02-09T09:56:21.408246078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.436214 env[1433]: time="2024-02-09T09:56:21.436177977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.443164 env[1433]: time="2024-02-09T09:56:21.443119940Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.467921 env[1433]: time="2024-02-09T09:56:21.467857890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.478481 env[1433]: time="2024-02-09T09:56:21.478437462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.491146 env[1433]: time="2024-02-09T09:56:21.491106811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.497687 env[1433]: time="2024-02-09T09:56:21.497642395Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.615352 env[1433]: time="2024-02-09T09:56:21.614963850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.640135 env[1433]: time="2024-02-09T09:56:21.640100739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.651255 env[1433]: time="2024-02-09T09:56:21.651219216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:21.680783 env[1433]: time="2024-02-09T09:56:21.680730908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
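All of the refused connections above share one root cause: nothing is listening on 10.200.20.37:6443 yet, because the kube-apiserver static pod is only now being created. A quick way to confirm that diagnosis from the node is a bare TCP probe; the address comes from the log, the timeout is an arbitrary choice:

    // dialprobe.go: check whether anything accepts TCP connections on the
    // apiserver endpoint the kubelet keeps failing to reach.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.200.20.37:6443", 2*time.Second)
        if err != nil {
            // Expected while the static apiserver pod is not yet running:
            // "dial tcp 10.200.20.37:6443: connect: connection refused"
            fmt.Println("probe failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }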
Feb 9 09:56:21.715277 kubelet[2189]: E0209 09:56:21.715244 2189 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 09:56:21.753328 env[1433]: time="2024-02-09T09:56:21.753259320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:21.753463 env[1433]: time="2024-02-09T09:56:21.753337484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:21.753463 env[1433]: time="2024-02-09T09:56:21.753364605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:21.753613 env[1433]: time="2024-02-09T09:56:21.753521053Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d8b91de53096edd99bf7ebafde7fe170580a1a44929cad5f41d007ef5e0ccda pid=2264 runtime=io.containerd.runc.v2 Feb 9 09:56:21.780890 env[1433]: time="2024-02-09T09:56:21.780565150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:21.780890 env[1433]: time="2024-02-09T09:56:21.780611672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:21.780890 env[1433]: time="2024-02-09T09:56:21.780624673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:21.780890 env[1433]: time="2024-02-09T09:56:21.780746039Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/102b0e25d39e1394ffa62121e81eb4072f7ca3093437c8c6d67045cc0b1f0429 pid=2299 runtime=io.containerd.runc.v2 Feb 9 09:56:21.806831 env[1433]: time="2024-02-09T09:56:21.806536518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-37d4719b0b,Uid:6749993fbe8175f0dcfda90deb18ed3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d8b91de53096edd99bf7ebafde7fe170580a1a44929cad5f41d007ef5e0ccda\"" Feb 9 09:56:21.816138 env[1433]: time="2024-02-09T09:56:21.816095722Z" level=info msg="CreateContainer within sandbox \"0d8b91de53096edd99bf7ebafde7fe170580a1a44929cad5f41d007ef5e0ccda\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:56:21.826029 env[1433]: time="2024-02-09T09:56:21.825854056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:21.826029 env[1433]: time="2024-02-09T09:56:21.825890218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:21.826029 env[1433]: time="2024-02-09T09:56:21.825900418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:21.826342 env[1433]: time="2024-02-09T09:56:21.826289236Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a61084e94c7f6f9940b9134aa3a46d6f86c77eed75984cd12c455d0eae97b93f pid=2340 runtime=io.containerd.runc.v2 Feb 9 09:56:21.832470 env[1433]: time="2024-02-09T09:56:21.832430962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-37d4719b0b,Uid:d657c48a523e3ccc8be223a00ee9a2eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"102b0e25d39e1394ffa62121e81eb4072f7ca3093437c8c6d67045cc0b1f0429\"" Feb 9 09:56:21.834914 env[1433]: time="2024-02-09T09:56:21.834885236Z" level=info msg="CreateContainer within sandbox \"102b0e25d39e1394ffa62121e81eb4072f7ca3093437c8c6d67045cc0b1f0429\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:56:21.868826 env[1433]: time="2024-02-09T09:56:21.868282429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-37d4719b0b,Uid:d073da21568c7b4efb9bb9c0d14bc29c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a61084e94c7f6f9940b9134aa3a46d6f86c77eed75984cd12c455d0eae97b93f\"" Feb 9 09:56:21.871455 env[1433]: time="2024-02-09T09:56:21.871419375Z" level=info msg="CreateContainer within sandbox \"a61084e94c7f6f9940b9134aa3a46d6f86c77eed75984cd12c455d0eae97b93f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:56:21.961993 env[1433]: time="2024-02-09T09:56:21.961940904Z" level=info msg="CreateContainer within sandbox \"0d8b91de53096edd99bf7ebafde7fe170580a1a44929cad5f41d007ef5e0ccda\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5a4e970ac12bf4184fb1de627d8ef66ffa7d54bde40a2bea753befaeb3a24b90\"" Feb 9 09:56:21.962681 env[1433]: time="2024-02-09T09:56:21.962646136Z" level=info msg="StartContainer for \"5a4e970ac12bf4184fb1de627d8ef66ffa7d54bde40a2bea753befaeb3a24b90\"" Feb 9 09:56:21.989796 env[1433]: time="2024-02-09T09:56:21.989739916Z" level=info msg="CreateContainer within sandbox \"102b0e25d39e1394ffa62121e81eb4072f7ca3093437c8c6d67045cc0b1f0429\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e96ff85ab9240c5ede086501ee0a3d28337e40d1d0fbc36ae21817482b3b8e48\"" Feb 9 09:56:21.990301 env[1433]: time="2024-02-09T09:56:21.990268501Z" level=info msg="StartContainer for \"e96ff85ab9240c5ede086501ee0a3d28337e40d1d0fbc36ae21817482b3b8e48\"" Feb 9 09:56:22.005115 env[1433]: time="2024-02-09T09:56:22.005066625Z" level=info msg="CreateContainer within sandbox \"a61084e94c7f6f9940b9134aa3a46d6f86c77eed75984cd12c455d0eae97b93f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d33ac03cfc38322ffd24ca62eeebec42829a861f05efc17466d489a6c2c4abb\"" Feb 9 09:56:22.009116 env[1433]: time="2024-02-09T09:56:22.009070327Z" level=info msg="StartContainer for \"4d33ac03cfc38322ffd24ca62eeebec42829a861f05efc17466d489a6c2c4abb\"" Feb 9 09:56:22.054455 env[1433]: time="2024-02-09T09:56:22.054419747Z" level=info msg="StartContainer for \"5a4e970ac12bf4184fb1de627d8ef66ffa7d54bde40a2bea753befaeb3a24b90\" returns successfully" Feb 9 09:56:22.076601 env[1433]: time="2024-02-09T09:56:22.076556793Z" level=info msg="StartContainer for \"e96ff85ab9240c5ede086501ee0a3d28337e40d1d0fbc36ae21817482b3b8e48\" returns successfully" Feb 9 09:56:22.127819 env[1433]: time="2024-02-09T09:56:22.127730279Z" level=info 
msg="StartContainer for \"4d33ac03cfc38322ffd24ca62eeebec42829a861f05efc17466d489a6c2c4abb\" returns successfully" Feb 9 09:56:22.297067 systemd[1]: run-containerd-runc-k8s.io-0d8b91de53096edd99bf7ebafde7fe170580a1a44929cad5f41d007ef5e0ccda-runc.tDZyQV.mount: Deactivated successfully. Feb 9 09:56:22.926732 kubelet[2189]: I0209 09:56:22.926709 2189 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:25.647385 kubelet[2189]: E0209 09:56:25.647340 2189 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-37d4719b0b\" not found" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:25.738168 kubelet[2189]: I0209 09:56:25.738134 2189 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:26.613079 kubelet[2189]: I0209 09:56:26.613034 2189 apiserver.go:52] "Watching apiserver" Feb 9 09:56:26.619583 kubelet[2189]: I0209 09:56:26.619546 2189 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:56:26.657645 kubelet[2189]: I0209 09:56:26.657611 2189 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:56:28.385873 systemd[1]: Reloading. Feb 9 09:56:28.485897 /usr/lib/systemd/system-generators/torcx-generator[2513]: time="2024-02-09T09:56:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:56:28.485928 /usr/lib/systemd/system-generators/torcx-generator[2513]: time="2024-02-09T09:56:28Z" level=info msg="torcx already run" Feb 9 09:56:28.544396 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:28.544415 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:28.562463 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:28.650819 kubelet[2189]: I0209 09:56:28.650727 2189 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:28.651009 systemd[1]: Stopping kubelet.service... Feb 9 09:56:28.670504 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:56:28.670849 systemd[1]: Stopped kubelet.service. Feb 9 09:56:28.673185 systemd[1]: Started kubelet.service. Feb 9 09:56:28.755723 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:28.755723 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 09:56:28.755723 kubelet[2579]: I0209 09:56:28.754385 2579 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:56:28.758960 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:28.758960 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:28.763683 kubelet[2579]: I0209 09:56:28.763657 2579 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:56:28.763830 kubelet[2579]: I0209 09:56:28.763820 2579 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:56:28.764176 kubelet[2579]: I0209 09:56:28.764161 2579 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:56:28.765486 kubelet[2579]: I0209 09:56:28.765469 2579 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:56:28.766367 kubelet[2579]: I0209 09:56:28.766344 2579 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:28.768790 kubelet[2579]: W0209 09:56:28.768776 2579 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:56:28.769642 kubelet[2579]: I0209 09:56:28.769627 2579 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:56:28.770163 kubelet[2579]: I0209 09:56:28.770152 2579 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:56:28.770311 kubelet[2579]: I0209 09:56:28.770300 2579 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:56:28.770429 kubelet[2579]: I0209 09:56:28.770418 2579 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:56:28.770488 kubelet[2579]: I0209 09:56:28.770480 2579 container_manager_linux.go:308] "Creating device plugin 
manager" Feb 9 09:56:28.770619 kubelet[2579]: I0209 09:56:28.770608 2579 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:28.773446 kubelet[2579]: I0209 09:56:28.773423 2579 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:56:28.773512 kubelet[2579]: I0209 09:56:28.773454 2579 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:56:28.778173 kubelet[2579]: I0209 09:56:28.778117 2579 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:56:28.778312 kubelet[2579]: I0209 09:56:28.778302 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:56:28.783666 kubelet[2579]: I0209 09:56:28.783636 2579 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:56:28.784258 kubelet[2579]: I0209 09:56:28.784244 2579 server.go:1186] "Started kubelet" Feb 9 09:56:28.786298 kubelet[2579]: I0209 09:56:28.786284 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:56:28.789293 kubelet[2579]: I0209 09:56:28.789276 2579 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:56:28.790067 kubelet[2579]: I0209 09:56:28.790054 2579 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:56:28.792635 kubelet[2579]: I0209 09:56:28.792620 2579 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:56:28.794076 kubelet[2579]: I0209 09:56:28.794061 2579 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:56:28.812558 kubelet[2579]: I0209 09:56:28.812534 2579 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:56:28.837137 kubelet[2579]: I0209 09:56:28.837098 2579 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:56:28.837512 kubelet[2579]: I0209 09:56:28.837496 2579 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:56:28.837618 kubelet[2579]: I0209 09:56:28.837607 2579 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:56:28.837893 kubelet[2579]: E0209 09:56:28.837880 2579 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:56:28.844218 kubelet[2579]: E0209 09:56:28.844189 2579 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:56:28.844313 kubelet[2579]: E0209 09:56:28.844226 2579 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:56:28.900657 sudo[2630]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 09:56:28.900858 sudo[2630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 09:56:28.903671 kubelet[2579]: I0209 09:56:28.902163 2579 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:28.916746 kubelet[2579]: I0209 09:56:28.916665 2579 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:28.916983 kubelet[2579]: I0209 09:56:28.916958 2579 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:28.938070 kubelet[2579]: E0209 09:56:28.938014 2579 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 09:56:28.941335 kubelet[2579]: I0209 09:56:28.941307 2579 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:56:28.941335 kubelet[2579]: I0209 09:56:28.941330 2579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:56:28.941479 kubelet[2579]: I0209 09:56:28.941347 2579 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:28.941479 kubelet[2579]: I0209 09:56:28.941477 2579 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:56:28.941527 kubelet[2579]: I0209 09:56:28.941490 2579 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:56:28.941527 kubelet[2579]: I0209 09:56:28.941496 2579 policy_none.go:49] "None policy: Start" Feb 9 09:56:28.942601 kubelet[2579]: I0209 09:56:28.942577 2579 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:56:28.942601 kubelet[2579]: I0209 09:56:28.942604 2579 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:56:28.942795 kubelet[2579]: I0209 09:56:28.942719 2579 state_mem.go:75] "Updated machine memory state" Feb 9 09:56:28.943862 kubelet[2579]: I0209 09:56:28.943837 2579 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:56:28.952263 kubelet[2579]: I0209 09:56:28.950329 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:56:29.138453 kubelet[2579]: I0209 09:56:29.138407 2579 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:29.138604 kubelet[2579]: I0209 09:56:29.138508 2579 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:29.138604 kubelet[2579]: I0209 09:56:29.138540 2579 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:29.201253 kubelet[2579]: I0209 09:56:29.201230 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d073da21568c7b4efb9bb9c0d14bc29c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" (UID: \"d073da21568c7b4efb9bb9c0d14bc29c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:29.201444 kubelet[2579]: I0209 09:56:29.201432 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6749993fbe8175f0dcfda90deb18ed3f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-37d4719b0b\" (UID: \"6749993fbe8175f0dcfda90deb18ed3f\") " 
pod="kube-system/kube-apiserver-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:29.201541 kubelet[2579]: I0209 09:56:29.201531 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d073da21568c7b4efb9bb9c0d14bc29c-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" (UID: \"d073da21568c7b4efb9bb9c0d14bc29c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:29.201624 kubelet[2579]: I0209 09:56:29.201615 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d073da21568c7b4efb9bb9c0d14bc29c-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" (UID: \"d073da21568c7b4efb9bb9c0d14bc29c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:29.201707 kubelet[2579]: I0209 09:56:29.201698 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d073da21568c7b4efb9bb9c0d14bc29c-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" (UID: \"d073da21568c7b4efb9bb9c0d14bc29c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:29.201801 kubelet[2579]: I0209 09:56:29.201792 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d073da21568c7b4efb9bb9c0d14bc29c-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" (UID: \"d073da21568c7b4efb9bb9c0d14bc29c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:29.201890 kubelet[2579]: I0209 09:56:29.201881 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6749993fbe8175f0dcfda90deb18ed3f-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-37d4719b0b\" (UID: \"6749993fbe8175f0dcfda90deb18ed3f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:29.201976 kubelet[2579]: I0209 09:56:29.201967 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6749993fbe8175f0dcfda90deb18ed3f-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-37d4719b0b\" (UID: \"6749993fbe8175f0dcfda90deb18ed3f\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:29.202080 kubelet[2579]: I0209 09:56:29.202070 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d657c48a523e3ccc8be223a00ee9a2eb-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-37d4719b0b\" (UID: \"d657c48a523e3ccc8be223a00ee9a2eb\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:29.441159 sudo[2630]: pam_unix(sudo:session): session closed for user root Feb 9 09:56:29.783107 kubelet[2579]: I0209 09:56:29.783075 2579 apiserver.go:52] "Watching apiserver" Feb 9 09:56:29.794697 kubelet[2579]: I0209 09:56:29.794640 2579 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:56:29.807065 kubelet[2579]: I0209 09:56:29.806581 2579 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:56:29.980984 kubelet[2579]: E0209 09:56:29.980954 2579 kubelet.go:1802] 
"Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-37d4719b0b\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:30.380833 kubelet[2579]: E0209 09:56:30.380793 2579 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-37d4719b0b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:30.562684 sudo[1768]: pam_unix(sudo:session): session closed for user root Feb 9 09:56:30.586202 kubelet[2579]: E0209 09:56:30.586166 2579 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-37d4719b0b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-37d4719b0b" Feb 9 09:56:30.646222 sshd[1764]: pam_unix(sshd:session): session closed for user core Feb 9 09:56:30.649003 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:56:30.649155 systemd[1]: sshd@4-10.200.20.37:22-10.200.12.6:47160.service: Deactivated successfully. Feb 9 09:56:30.649974 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:56:30.650490 systemd-logind[1417]: Removed session 7. Feb 9 09:56:31.178950 kubelet[2579]: I0209 09:56:31.178879 2579 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-37d4719b0b" podStartSLOduration=2.178843112 pod.CreationTimestamp="2024-02-09 09:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:31.178769789 +0000 UTC m=+2.495571582" watchObservedRunningTime="2024-02-09 09:56:31.178843112 +0000 UTC m=+2.495644945" Feb 9 09:56:31.179363 kubelet[2579]: I0209 09:56:31.178973 2579 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37d4719b0b" podStartSLOduration=2.178956356 pod.CreationTimestamp="2024-02-09 09:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:30.783239571 +0000 UTC m=+2.100041404" watchObservedRunningTime="2024-02-09 09:56:31.178956356 +0000 UTC m=+2.495758149" Feb 9 09:56:34.064695 kubelet[2579]: I0209 09:56:34.064667 2579 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-37d4719b0b" podStartSLOduration=5.064629063 pod.CreationTimestamp="2024-02-09 09:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:31.579635273 +0000 UTC m=+2.896437106" watchObservedRunningTime="2024-02-09 09:56:34.064629063 +0000 UTC m=+5.381430896" Feb 9 09:56:40.729635 kubelet[2579]: I0209 09:56:40.729614 2579 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:56:40.730418 env[1433]: time="2024-02-09T09:56:40.730334233Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 09:56:40.730781 kubelet[2579]: I0209 09:56:40.730766 2579 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:56:41.421778 kubelet[2579]: I0209 09:56:41.421740 2579 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:41.440577 kubelet[2579]: I0209 09:56:41.440520 2579 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:41.468553 kubelet[2579]: I0209 09:56:41.468523 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-hostproc\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.468760 kubelet[2579]: I0209 09:56:41.468750 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cni-path\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.468836 kubelet[2579]: I0209 09:56:41.468827 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-xtables-lock\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.468903 kubelet[2579]: I0209 09:56:41.468895 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fede369-4a3d-45b7-bf54-e76e12b718cb-clustermesh-secrets\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.468984 kubelet[2579]: I0209 09:56:41.468975 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-etc-cni-netd\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.469070 kubelet[2579]: I0209 09:56:41.469061 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e15438d-536a-44e7-aed7-4b49dbb7b6f6-kube-proxy\") pod \"kube-proxy-wsq4r\" (UID: \"4e15438d-536a-44e7-aed7-4b49dbb7b6f6\") " pod="kube-system/kube-proxy-wsq4r" Feb 9 09:56:41.469156 kubelet[2579]: I0209 09:56:41.469146 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e15438d-536a-44e7-aed7-4b49dbb7b6f6-xtables-lock\") pod \"kube-proxy-wsq4r\" (UID: \"4e15438d-536a-44e7-aed7-4b49dbb7b6f6\") " pod="kube-system/kube-proxy-wsq4r" Feb 9 09:56:41.469222 kubelet[2579]: I0209 09:56:41.469215 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-bpf-maps\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.469287 kubelet[2579]: I0209 09:56:41.469279 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4e15438d-536a-44e7-aed7-4b49dbb7b6f6-lib-modules\") pod \"kube-proxy-wsq4r\" (UID: \"4e15438d-536a-44e7-aed7-4b49dbb7b6f6\") " pod="kube-system/kube-proxy-wsq4r" Feb 9 09:56:41.469361 kubelet[2579]: I0209 09:56:41.469352 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-lib-modules\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.469431 kubelet[2579]: I0209 09:56:41.469422 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fede369-4a3d-45b7-bf54-e76e12b718cb-hubble-tls\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.469516 kubelet[2579]: I0209 09:56:41.469499 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgf2z\" (UniqueName: \"kubernetes.io/projected/4e15438d-536a-44e7-aed7-4b49dbb7b6f6-kube-api-access-xgf2z\") pod \"kube-proxy-wsq4r\" (UID: \"4e15438d-536a-44e7-aed7-4b49dbb7b6f6\") " pod="kube-system/kube-proxy-wsq4r" Feb 9 09:56:41.469603 kubelet[2579]: I0209 09:56:41.469593 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-cgroup\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.469679 kubelet[2579]: I0209 09:56:41.469671 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-host-proc-sys-kernel\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.469754 kubelet[2579]: I0209 09:56:41.469745 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-run\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.469829 kubelet[2579]: I0209 09:56:41.469821 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-config-path\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.469906 kubelet[2579]: I0209 09:56:41.469898 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgln7\" (UniqueName: \"kubernetes.io/projected/2fede369-4a3d-45b7-bf54-e76e12b718cb-kube-api-access-cgln7\") pod \"cilium-zh99m\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.469987 kubelet[2579]: I0209 09:56:41.469977 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-host-proc-sys-net\") pod \"cilium-zh99m\" (UID: 
\"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " pod="kube-system/cilium-zh99m" Feb 9 09:56:41.725676 env[1433]: time="2024-02-09T09:56:41.725627942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wsq4r,Uid:4e15438d-536a-44e7-aed7-4b49dbb7b6f6,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:41.743151 kubelet[2579]: I0209 09:56:41.743127 2579 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:41.743491 env[1433]: time="2024-02-09T09:56:41.743328725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zh99m,Uid:2fede369-4a3d-45b7-bf54-e76e12b718cb,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:41.772713 kubelet[2579]: I0209 09:56:41.772689 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6516a85-062a-4698-8b85-50223141b450-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-qf7gp\" (UID: \"d6516a85-062a-4698-8b85-50223141b450\") " pod="kube-system/cilium-operator-f59cbd8c6-qf7gp" Feb 9 09:56:41.772953 kubelet[2579]: I0209 09:56:41.772931 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv7vb\" (UniqueName: \"kubernetes.io/projected/d6516a85-062a-4698-8b85-50223141b450-kube-api-access-cv7vb\") pod \"cilium-operator-f59cbd8c6-qf7gp\" (UID: \"d6516a85-062a-4698-8b85-50223141b450\") " pod="kube-system/cilium-operator-f59cbd8c6-qf7gp" Feb 9 09:56:41.841142 env[1433]: time="2024-02-09T09:56:41.840825674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:41.841142 env[1433]: time="2024-02-09T09:56:41.840903636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:41.841142 env[1433]: time="2024-02-09T09:56:41.840914836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:41.841429 env[1433]: time="2024-02-09T09:56:41.841378811Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72de7fcaedb1da4d418dfa7809f876c647041d2ca715253be713e1c7c9d90725 pid=2682 runtime=io.containerd.runc.v2 Feb 9 09:56:41.859032 env[1433]: time="2024-02-09T09:56:41.858950229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:41.859267 env[1433]: time="2024-02-09T09:56:41.859241198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:41.859351 env[1433]: time="2024-02-09T09:56:41.859331721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:41.859574 env[1433]: time="2024-02-09T09:56:41.859545808Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e pid=2710 runtime=io.containerd.runc.v2 Feb 9 09:56:41.900520 env[1433]: time="2024-02-09T09:56:41.900475142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wsq4r,Uid:4e15438d-536a-44e7-aed7-4b49dbb7b6f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"72de7fcaedb1da4d418dfa7809f876c647041d2ca715253be713e1c7c9d90725\"" Feb 9 09:56:41.904674 env[1433]: time="2024-02-09T09:56:41.904242538Z" level=info msg="CreateContainer within sandbox \"72de7fcaedb1da4d418dfa7809f876c647041d2ca715253be713e1c7c9d90725\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:56:41.912682 env[1433]: time="2024-02-09T09:56:41.912624715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zh99m,Uid:2fede369-4a3d-45b7-bf54-e76e12b718cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\"" Feb 9 09:56:41.916099 env[1433]: time="2024-02-09T09:56:41.914887304Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:56:42.348244 env[1433]: time="2024-02-09T09:56:42.348189674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-qf7gp,Uid:d6516a85-062a-4698-8b85-50223141b450,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:43.071427 env[1433]: time="2024-02-09T09:56:43.071372244Z" level=info msg="CreateContainer within sandbox \"72de7fcaedb1da4d418dfa7809f876c647041d2ca715253be713e1c7c9d90725\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c822e1a62429b77a3856a71245c0fc529742c8a82a109d7024edfc723575a805\"" Feb 9 09:56:43.073832 env[1433]: time="2024-02-09T09:56:43.072488757Z" level=info msg="StartContainer for \"c822e1a62429b77a3856a71245c0fc529742c8a82a109d7024edfc723575a805\"" Feb 9 09:56:43.118727 env[1433]: time="2024-02-09T09:56:43.118627961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:43.118952 env[1433]: time="2024-02-09T09:56:43.118927250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:43.119097 env[1433]: time="2024-02-09T09:56:43.119073374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:43.119612 env[1433]: time="2024-02-09T09:56:43.119560668Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef pid=2792 runtime=io.containerd.runc.v2 Feb 9 09:56:43.148300 env[1433]: time="2024-02-09T09:56:43.148253036Z" level=info msg="StartContainer for \"c822e1a62429b77a3856a71245c0fc529742c8a82a109d7024edfc723575a805\" returns successfully" Feb 9 09:56:43.187582 env[1433]: time="2024-02-09T09:56:43.187535318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-qf7gp,Uid:d6516a85-062a-4698-8b85-50223141b450,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef\"" Feb 9 09:56:47.231297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4153814009.mount: Deactivated successfully. Feb 9 09:56:48.856532 kubelet[2579]: I0209 09:56:48.856493 2579 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wsq4r" podStartSLOduration=7.856456662 pod.CreationTimestamp="2024-02-09 09:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:43.907809008 +0000 UTC m=+15.224610841" watchObservedRunningTime="2024-02-09 09:56:48.856456662 +0000 UTC m=+20.173258495" Feb 9 09:56:50.410543 env[1433]: time="2024-02-09T09:56:50.410499967Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:50.436101 env[1433]: time="2024-02-09T09:56:50.436059877Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:50.453196 env[1433]: time="2024-02-09T09:56:50.453142645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:50.453978 env[1433]: time="2024-02-09T09:56:50.453948946Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:56:50.455915 env[1433]: time="2024-02-09T09:56:50.455499147Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:56:50.456833 env[1433]: time="2024-02-09T09:56:50.456700098Z" level=info msg="CreateContainer within sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:56:50.543028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982729370.mount: Deactivated successfully. 
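Note that the Cilium image above is pulled by tag and digest together ("v1.12.5@sha256:..."): once a digest is present, the tag is informational and the digest pins the exact content. A naive stdlib-only sketch of splitting such a reference; production code would use a proper reference parser, and this split breaks on registries with explicit ports:

    // imageref.go: naive split of a "repo:tag@sha256:digest" image reference.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
        repoAndTag, digest, _ := strings.Cut(ref, "@")
        repo, tag, _ := strings.Cut(repoAndTag, ":")
        fmt.Println("repository:", repo)   // quay.io/cilium/cilium
        fmt.Println("tag:       ", tag)    // v1.12.5 (informational once pinned)
        fmt.Println("digest:    ", digest) // sha256:06ce2b...
    }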
Feb 9 09:56:50.580104 env[1433]: time="2024-02-09T09:56:50.580059133Z" level=info msg="CreateContainer within sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\"" Feb 9 09:56:50.580812 env[1433]: time="2024-02-09T09:56:50.580741911Z" level=info msg="StartContainer for \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\"" Feb 9 09:56:50.642416 env[1433]: time="2024-02-09T09:56:50.642358446Z" level=info msg="StartContainer for \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\" returns successfully" Feb 9 09:56:51.536413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41-rootfs.mount: Deactivated successfully. Feb 9 09:56:51.863741 env[1433]: time="2024-02-09T09:56:51.863624664Z" level=info msg="shim disconnected" id=471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41 Feb 9 09:56:51.864474 env[1433]: time="2024-02-09T09:56:51.864447645Z" level=warning msg="cleaning up after shim disconnected" id=471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41 namespace=k8s.io Feb 9 09:56:51.864552 env[1433]: time="2024-02-09T09:56:51.864539088Z" level=info msg="cleaning up dead shim" Feb 9 09:56:51.874603 env[1433]: time="2024-02-09T09:56:51.874562066Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2989 runtime=io.containerd.runc.v2\n" Feb 9 09:56:51.927813 env[1433]: time="2024-02-09T09:56:51.927771199Z" level=info msg="CreateContainer within sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:56:51.982508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2892541997.mount: Deactivated successfully. Feb 9 09:56:51.990128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288279733.mount: Deactivated successfully. Feb 9 09:56:52.055769 env[1433]: time="2024-02-09T09:56:52.055718238Z" level=info msg="CreateContainer within sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\"" Feb 9 09:56:52.057968 env[1433]: time="2024-02-09T09:56:52.056393055Z" level=info msg="StartContainer for \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\"" Feb 9 09:56:52.109670 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:56:52.109912 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:56:52.110138 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:56:52.112206 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:56:52.121522 env[1433]: time="2024-02-09T09:56:52.121472508Z" level=info msg="StartContainer for \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\" returns successfully" Feb 9 09:56:52.122537 systemd[1]: Finished systemd-sysctl.service. 
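The "shim disconnected" and "cleaning up dead shim" messages here are expected rather than failures: mount-cgroup and apply-sysctl-overwrites (below) are Cilium init containers that run to completion and exit, tearing down their shims each time. Because apply-sysctl-overwrites adjusts kernel parameters, systemd re-runs systemd-sysctl right after it. A hedged sketch of reading one such parameter; the specific key is an example, since the log does not show which sysctls were touched:

    // sysctlread.go: read a kernel parameter the way one would inspect what
    // apply-sysctl-overwrites may have changed. The key is illustrative only.
    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("net.ipv4.ip_forward =", strings.TrimSpace(string(data)))
    }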
Feb 9 09:56:52.179846 env[1433]: time="2024-02-09T09:56:52.179800989Z" level=info msg="shim disconnected" id=24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f Feb 9 09:56:52.180170 env[1433]: time="2024-02-09T09:56:52.180149998Z" level=warning msg="cleaning up after shim disconnected" id=24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f namespace=k8s.io Feb 9 09:56:52.180275 env[1433]: time="2024-02-09T09:56:52.180260240Z" level=info msg="cleaning up dead shim" Feb 9 09:56:52.187616 env[1433]: time="2024-02-09T09:56:52.187577346Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3056 runtime=io.containerd.runc.v2\n" Feb 9 09:56:52.929321 env[1433]: time="2024-02-09T09:56:52.929271300Z" level=info msg="CreateContainer within sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:56:52.999922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010355191.mount: Deactivated successfully. Feb 9 09:56:53.037989 env[1433]: time="2024-02-09T09:56:53.037945605Z" level=info msg="CreateContainer within sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\"" Feb 9 09:56:53.040570 env[1433]: time="2024-02-09T09:56:53.038750065Z" level=info msg="StartContainer for \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\"" Feb 9 09:56:53.134174 env[1433]: time="2024-02-09T09:56:53.134124930Z" level=info msg="StartContainer for \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\" returns successfully" Feb 9 09:56:53.224940 env[1433]: time="2024-02-09T09:56:53.224894679Z" level=info msg="shim disconnected" id=39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2 Feb 9 09:56:53.225283 env[1433]: time="2024-02-09T09:56:53.225260448Z" level=warning msg="cleaning up after shim disconnected" id=39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2 namespace=k8s.io Feb 9 09:56:53.225354 env[1433]: time="2024-02-09T09:56:53.225341530Z" level=info msg="cleaning up dead shim" Feb 9 09:56:53.233210 env[1433]: time="2024-02-09T09:56:53.233168246Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3115 runtime=io.containerd.runc.v2\n" Feb 9 09:56:53.936089 env[1433]: time="2024-02-09T09:56:53.928629232Z" level=info msg="CreateContainer within sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:56:53.991826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280032260.mount: Deactivated successfully. Feb 9 09:56:53.997985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141972442.mount: Deactivated successfully. 
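The next init step, mount-bpf-fs, ensures a BPF filesystem is mounted at /sys/fs/bpf so Cilium's pinned BPF maps can survive agent restarts. A userspace check for that mount (a convenience sketch, not Cilium's own code):

    // bpffscheck.go: report whether a bpf filesystem is mounted, which is
    // what the mount-bpf-fs init container above is responsible for.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // /proc/mounts fields: device mountpoint fstype options ...
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[2] == "bpf" {
                fmt.Println("bpf filesystem mounted at", fields[1])
                return
            }
        }
        fmt.Println("no bpf filesystem mounted")
    }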
Feb 9 09:56:54.074660 env[1433]: time="2024-02-09T09:56:54.074615133Z" level=info msg="CreateContainer within sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\"" Feb 9 09:56:54.075427 env[1433]: time="2024-02-09T09:56:54.075385712Z" level=info msg="StartContainer for \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\"" Feb 9 09:56:54.131628 env[1433]: time="2024-02-09T09:56:54.131577016Z" level=info msg="StartContainer for \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\" returns successfully" Feb 9 09:56:54.460569 env[1433]: time="2024-02-09T09:56:54.460520593Z" level=info msg="shim disconnected" id=08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32 Feb 9 09:56:54.460569 env[1433]: time="2024-02-09T09:56:54.460565515Z" level=warning msg="cleaning up after shim disconnected" id=08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32 namespace=k8s.io Feb 9 09:56:54.460569 env[1433]: time="2024-02-09T09:56:54.460576355Z" level=info msg="cleaning up dead shim" Feb 9 09:56:54.467411 env[1433]: time="2024-02-09T09:56:54.467355322Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3172 runtime=io.containerd.runc.v2\n" Feb 9 09:56:54.495780 env[1433]: time="2024-02-09T09:56:54.495731980Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:54.521553 env[1433]: time="2024-02-09T09:56:54.521515735Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:54.536479 env[1433]: time="2024-02-09T09:56:54.536436182Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:54.536808 env[1433]: time="2024-02-09T09:56:54.536776111Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 09:56:54.541120 env[1433]: time="2024-02-09T09:56:54.541010255Z" level=info msg="CreateContainer within sandbox \"1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:56:54.608131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632990532.mount: Deactivated successfully. 
Feb 9 09:56:54.649226 env[1433]: time="2024-02-09T09:56:54.649175998Z" level=info msg="CreateContainer within sandbox \"1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\"" Feb 9 09:56:54.650153 env[1433]: time="2024-02-09T09:56:54.650125661Z" level=info msg="StartContainer for \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\"" Feb 9 09:56:54.700577 env[1433]: time="2024-02-09T09:56:54.700510142Z" level=info msg="StartContainer for \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\" returns successfully" Feb 9 09:56:54.931790 env[1433]: time="2024-02-09T09:56:54.931686913Z" level=info msg="CreateContainer within sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:56:55.036444 env[1433]: time="2024-02-09T09:56:55.036379037Z" level=info msg="CreateContainer within sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\"" Feb 9 09:56:55.037604 env[1433]: time="2024-02-09T09:56:55.037569546Z" level=info msg="StartContainer for \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\"" Feb 9 09:56:55.106133 env[1433]: time="2024-02-09T09:56:55.106072327Z" level=info msg="StartContainer for \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\" returns successfully" Feb 9 09:56:55.412135 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
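The kernel warning above is emitted when unprivileged eBPF is enabled on hardware affected by Spectre v2 BHB. The state behind it is exposed through the kernel.unprivileged_bpf_disabled sysctl (0 = unprivileged eBPF allowed, which is what the kernel is warning about; 1 = disabled; 2 = disabled until reboot). A small sketch that reads the knob via procfs, assuming a Linux host:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Reads kernel.unprivileged_bpf_disabled from procfs.
// 0 = unprivileged eBPF allowed, 1 = disabled, 2 = disabled until reboot.
func main() {
	raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		fmt.Fprintln(os.Stderr, "sysctl not available:", err)
		os.Exit(1)
	}
	fmt.Println("kernel.unprivileged_bpf_disabled =", strings.TrimSpace(string(raw)))
}
```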
Feb 9 09:56:55.452672 kubelet[2579]: I0209 09:56:55.452649 2579 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:56:55.615357 kubelet[2579]: I0209 09:56:55.615325 2579 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-qf7gp" podStartSLOduration=-9.223372022239492e+09 pod.CreationTimestamp="2024-02-09 09:56:41 +0000 UTC" firstStartedPulling="2024-02-09 09:56:43.18930501 +0000 UTC m=+14.506106843" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:54.969180476 +0000 UTC m=+26.285982309" watchObservedRunningTime="2024-02-09 09:56:55.615283875 +0000 UTC m=+26.932085708" Feb 9 09:56:55.615762 kubelet[2579]: I0209 09:56:55.615744 2579 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:55.620841 kubelet[2579]: I0209 09:56:55.620803 2579 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:55.670716 kubelet[2579]: I0209 09:56:55.670618 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/517e3b1a-a8db-454a-b71e-6f294c79dbef-config-volume\") pod \"coredns-787d4945fb-hkxn8\" (UID: \"517e3b1a-a8db-454a-b71e-6f294c79dbef\") " pod="kube-system/coredns-787d4945fb-hkxn8" Feb 9 09:56:55.670892 kubelet[2579]: I0209 09:56:55.670869 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da76f2ac-d2bb-4594-8b29-d56e59379e5b-config-volume\") pod \"coredns-787d4945fb-8m6bq\" (UID: \"da76f2ac-d2bb-4594-8b29-d56e59379e5b\") " pod="kube-system/coredns-787d4945fb-8m6bq" Feb 9 09:56:55.670978 kubelet[2579]: I0209 09:56:55.670968 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8xcs\" (UniqueName: \"kubernetes.io/projected/da76f2ac-d2bb-4594-8b29-d56e59379e5b-kube-api-access-p8xcs\") pod \"coredns-787d4945fb-8m6bq\" (UID: \"da76f2ac-d2bb-4594-8b29-d56e59379e5b\") " pod="kube-system/coredns-787d4945fb-8m6bq" Feb 9 09:56:55.671077 kubelet[2579]: I0209 09:56:55.671064 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gktts\" (UniqueName: \"kubernetes.io/projected/517e3b1a-a8db-454a-b71e-6f294c79dbef-kube-api-access-gktts\") pod \"coredns-787d4945fb-hkxn8\" (UID: \"517e3b1a-a8db-454a-b71e-6f294c79dbef\") " pod="kube-system/coredns-787d4945fb-hkxn8" Feb 9 09:56:55.911068 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
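The podStartSLOduration=-9.223372022239492e+09 above is the telltale sign of a zero lastFinishedPulling timestamp (0001-01-01 00:00:00 +0000 UTC). kubelet is Go code, and subtracting a real time from Go's zero time.Time saturates the resulting time.Duration at math.MinInt64 nanoseconds rather than wrapping, roughly -9.2233720e+09 seconds; the latency tracker then appears to fold in the pull offsets, giving the slightly shifted value in the log. A sketch reproducing the saturation:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 +0000 UTC
	started := time.Now()

	// time.Time.Sub saturates at math.MinInt64 nanoseconds instead of
	// wrapping, which is about -9.2233720e+09 seconds, the magnitude
	// seen in the podStartSLOduration entries above.
	d := lastFinishedPulling.Sub(started)
	fmt.Printf("podStartSLOduration-style value: %.9e seconds\n", d.Seconds())
}
```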
Feb 9 09:56:55.919075 env[1433]: time="2024-02-09T09:56:55.919020680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hkxn8,Uid:517e3b1a-a8db-454a-b71e-6f294c79dbef,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:55.924299 env[1433]: time="2024-02-09T09:56:55.923998481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8m6bq,Uid:da76f2ac-d2bb-4594-8b29-d56e59379e5b,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:55.957707 kubelet[2579]: I0209 09:56:55.957678 2579 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zh99m" podStartSLOduration=-9.223372021897133e+09 pod.CreationTimestamp="2024-02-09 09:56:41 +0000 UTC" firstStartedPulling="2024-02-09 09:56:41.914122281 +0000 UTC m=+13.230924074" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:55.956506229 +0000 UTC m=+27.273308062" watchObservedRunningTime="2024-02-09 09:56:55.957642657 +0000 UTC m=+27.274444490" Feb 9 09:56:58.355175 systemd-networkd[1604]: cilium_host: Link UP Feb 9 09:56:58.356028 systemd-networkd[1604]: cilium_net: Link UP Feb 9 09:56:58.367879 systemd-networkd[1604]: cilium_net: Gained carrier Feb 9 09:56:58.370269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 09:56:58.370327 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:56:58.373338 systemd-networkd[1604]: cilium_host: Gained carrier Feb 9 09:56:58.450175 systemd-networkd[1604]: cilium_net: Gained IPv6LL Feb 9 09:56:58.529390 systemd-networkd[1604]: cilium_vxlan: Link UP Feb 9 09:56:58.529401 systemd-networkd[1604]: cilium_vxlan: Gained carrier Feb 9 09:56:58.767083 kernel: NET: Registered PF_ALG protocol family Feb 9 09:56:58.818183 systemd-networkd[1604]: cilium_host: Gained IPv6LL Feb 9 09:56:59.476162 systemd-networkd[1604]: lxc_health: Link UP Feb 9 09:56:59.495960 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:56:59.495144 systemd-networkd[1604]: lxc_health: Gained carrier Feb 9 09:56:59.986179 systemd-networkd[1604]: cilium_vxlan: Gained IPv6LL Feb 9 09:57:00.079686 systemd-networkd[1604]: lxc7b80f69c2b2f: Link UP Feb 9 09:57:00.095074 kernel: eth0: renamed from tmp2d3c0 Feb 9 09:57:00.108079 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7b80f69c2b2f: link becomes ready Feb 9 09:57:00.122937 systemd-networkd[1604]: lxc7b80f69c2b2f: Gained carrier Feb 9 09:57:00.123666 systemd-networkd[1604]: lxc5f9522be121d: Link UP Feb 9 09:57:00.134118 kernel: eth0: renamed from tmp7ced4 Feb 9 09:57:00.145075 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5f9522be121d: link becomes ready Feb 9 09:57:00.144883 systemd-networkd[1604]: lxc5f9522be121d: Gained carrier Feb 9 09:57:00.818234 systemd-networkd[1604]: lxc_health: Gained IPv6LL Feb 9 09:57:01.650197 systemd-networkd[1604]: lxc5f9522be121d: Gained IPv6LL Feb 9 09:57:02.162192 systemd-networkd[1604]: lxc7b80f69c2b2f: Gained IPv6LL Feb 9 09:57:03.730176 env[1433]: time="2024-02-09T09:57:03.729434025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:57:03.730176 env[1433]: time="2024-02-09T09:57:03.729479586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:57:03.730176 env[1433]: time="2024-02-09T09:57:03.729493306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:57:03.730176 env[1433]: time="2024-02-09T09:57:03.729634749Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ced4d2865499b223f0d42743e4b81b8d2c34fc71c41e1d3d0d66c3cf3ca431a pid=3754 runtime=io.containerd.runc.v2 Feb 9 09:57:03.764632 env[1433]: time="2024-02-09T09:57:03.764566627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:57:03.764821 env[1433]: time="2024-02-09T09:57:03.764799312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:57:03.764926 env[1433]: time="2024-02-09T09:57:03.764904674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:57:03.765467 env[1433]: time="2024-02-09T09:57:03.765432526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d3c0596a9558183212160a7f4eb7a9415839c44e52aa1d59b181036d2d3b6d1 pid=3785 runtime=io.containerd.runc.v2 Feb 9 09:57:03.832304 env[1433]: time="2024-02-09T09:57:03.832263935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8m6bq,Uid:da76f2ac-d2bb-4594-8b29-d56e59379e5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ced4d2865499b223f0d42743e4b81b8d2c34fc71c41e1d3d0d66c3cf3ca431a\"" Feb 9 09:57:03.838394 env[1433]: time="2024-02-09T09:57:03.838358627Z" level=info msg="CreateContainer within sandbox \"7ced4d2865499b223f0d42743e4b81b8d2c34fc71c41e1d3d0d66c3cf3ca431a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:57:03.849431 env[1433]: time="2024-02-09T09:57:03.849379466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hkxn8,Uid:517e3b1a-a8db-454a-b71e-6f294c79dbef,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d3c0596a9558183212160a7f4eb7a9415839c44e52aa1d59b181036d2d3b6d1\"" Feb 9 09:57:03.856341 env[1433]: time="2024-02-09T09:57:03.856301256Z" level=info msg="CreateContainer within sandbox \"2d3c0596a9558183212160a7f4eb7a9415839c44e52aa1d59b181036d2d3b6d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:57:03.955646 env[1433]: time="2024-02-09T09:57:03.955596848Z" level=info msg="CreateContainer within sandbox \"7ced4d2865499b223f0d42743e4b81b8d2c34fc71c41e1d3d0d66c3cf3ca431a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdb5d4141e7a27026a36f67a6d4c7d70f25087f8b482b9606339ef757d3ec775\"" Feb 9 09:57:03.958103 env[1433]: time="2024-02-09T09:57:03.956423386Z" level=info msg="StartContainer for \"fdb5d4141e7a27026a36f67a6d4c7d70f25087f8b482b9606339ef757d3ec775\"" Feb 9 09:57:03.988354 env[1433]: time="2024-02-09T09:57:03.987717145Z" level=info msg="CreateContainer within sandbox \"2d3c0596a9558183212160a7f4eb7a9415839c44e52aa1d59b181036d2d3b6d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae928d473dc290972cfe509fe1a5c20ba1927f7b899a76c275b6a8aff4d93196\"" Feb 9 09:57:03.991733 env[1433]: time="2024-02-09T09:57:03.991692471Z" level=info msg="StartContainer for \"ae928d473dc290972cfe509fe1a5c20ba1927f7b899a76c275b6a8aff4d93196\"" Feb 9 09:57:04.049153 env[1433]: time="2024-02-09T09:57:04.049106982Z" level=info msg="StartContainer for \"fdb5d4141e7a27026a36f67a6d4c7d70f25087f8b482b9606339ef757d3ec775\" returns 
successfully" Feb 9 09:57:04.081601 env[1433]: time="2024-02-09T09:57:04.081539837Z" level=info msg="StartContainer for \"ae928d473dc290972cfe509fe1a5c20ba1927f7b899a76c275b6a8aff4d93196\" returns successfully" Feb 9 09:57:04.734267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount666700485.mount: Deactivated successfully. Feb 9 09:57:04.970152 kubelet[2579]: I0209 09:57:04.970112 2579 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-8m6bq" podStartSLOduration=23.970079374 pod.CreationTimestamp="2024-02-09 09:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:04.967713123 +0000 UTC m=+36.284514996" watchObservedRunningTime="2024-02-09 09:57:04.970079374 +0000 UTC m=+36.286881207" Feb 9 09:57:04.999710 kubelet[2579]: I0209 09:57:04.999610 2579 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-hkxn8" podStartSLOduration=23.999575445 pod.CreationTimestamp="2024-02-09 09:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:57:04.99887943 +0000 UTC m=+36.315681223" watchObservedRunningTime="2024-02-09 09:57:04.999575445 +0000 UTC m=+36.316377278" Feb 9 09:57:07.080881 kubelet[2579]: I0209 09:57:07.080850 2579 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 09:57:11.344812 update_engine[1420]: I0209 09:57:11.344120 1420 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 09:57:11.344812 update_engine[1420]: I0209 09:57:11.344154 1420 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 09:57:11.344812 update_engine[1420]: I0209 09:57:11.344269 1420 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 09:57:11.344812 update_engine[1420]: I0209 09:57:11.344586 1420 omaha_request_params.cc:62] Current group set to lts Feb 9 09:57:11.344812 update_engine[1420]: I0209 09:57:11.344676 1420 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 09:57:11.344812 update_engine[1420]: I0209 09:57:11.344681 1420 update_attempter.cc:643] Scheduling an action processor start. 
Feb 9 09:57:11.344812 update_engine[1420]: I0209 09:57:11.344694 1420 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 09:57:11.344812 update_engine[1420]: I0209 09:57:11.344714 1420 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 09:57:11.345358 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 09:57:11.371068 update_engine[1420]: I0209 09:57:11.370803 1420 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 09:57:11.371068 update_engine[1420]: I0209 09:57:11.370836 1420 omaha_request_action.cc:271] Request: Feb 9 09:57:11.371068 update_engine[1420]: Feb 9 09:57:11.371068 update_engine[1420]: Feb 9 09:57:11.371068 update_engine[1420]: Feb 9 09:57:11.371068 update_engine[1420]: Feb 9 09:57:11.371068 update_engine[1420]: Feb 9 09:57:11.371068 update_engine[1420]: Feb 9 09:57:11.371068 update_engine[1420]: Feb 9 09:57:11.371068 update_engine[1420]: Feb 9 09:57:11.371068 update_engine[1420]: I0209 09:57:11.370847 1420 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:57:11.371716 update_engine[1420]: I0209 09:57:11.371690 1420 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:57:11.371908 update_engine[1420]: I0209 09:57:11.371891 1420 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 09:57:11.425824 update_engine[1420]: E0209 09:57:11.425787 1420 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:57:11.425943 update_engine[1420]: I0209 09:57:11.425889 1420 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 09:57:21.333925 update_engine[1420]: I0209 09:57:21.333853 1420 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:57:21.334341 update_engine[1420]: I0209 09:57:21.334134 1420 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:57:21.334341 update_engine[1420]: I0209 09:57:21.334317 1420 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 09:57:21.635667 update_engine[1420]: E0209 09:57:21.635555 1420 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:57:21.635972 update_engine[1420]: I0209 09:57:21.635947 1420 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 09:57:32.333796 update_engine[1420]: I0209 09:57:32.333751 1420 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:57:32.334216 update_engine[1420]: I0209 09:57:32.333940 1420 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:57:32.334216 update_engine[1420]: I0209 09:57:32.334156 1420 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 09:57:32.612449 update_engine[1420]: E0209 09:57:32.612329 1420 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:57:32.612578 update_engine[1420]: I0209 09:57:32.612454 1420 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 09:57:43.334163 update_engine[1420]: I0209 09:57:43.334077 1420 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:57:43.334523 update_engine[1420]: I0209 09:57:43.334262 1420 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:57:43.334523 update_engine[1420]: I0209 09:57:43.334440 1420 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 09:57:43.383832 update_engine[1420]: E0209 09:57:43.383800 1420 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:57:43.383949 update_engine[1420]: I0209 09:57:43.383898 1420 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 09:57:43.383949 update_engine[1420]: I0209 09:57:43.383904 1420 omaha_request_action.cc:621] Omaha request response: Feb 9 09:57:43.383996 update_engine[1420]: E0209 09:57:43.383976 1420 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 09:57:43.383996 update_engine[1420]: I0209 09:57:43.383988 1420 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 09:57:43.383996 update_engine[1420]: I0209 09:57:43.383991 1420 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:57:43.383996 update_engine[1420]: I0209 09:57:43.383994 1420 update_attempter.cc:306] Processing Done. Feb 9 09:57:43.384117 update_engine[1420]: E0209 09:57:43.384007 1420 update_attempter.cc:619] Update failed. Feb 9 09:57:43.384117 update_engine[1420]: I0209 09:57:43.384011 1420 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 09:57:43.384117 update_engine[1420]: I0209 09:57:43.384013 1420 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 09:57:43.384117 update_engine[1420]: I0209 09:57:43.384016 1420 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 9 09:57:43.384117 update_engine[1420]: I0209 09:57:43.384110 1420 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 09:57:43.384222 update_engine[1420]: I0209 09:57:43.384128 1420 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 09:57:43.384222 update_engine[1420]: I0209 09:57:43.384131 1420 omaha_request_action.cc:271] Request: Feb 9 09:57:43.384222 update_engine[1420]: Feb 9 09:57:43.384222 update_engine[1420]: Feb 9 09:57:43.384222 update_engine[1420]: Feb 9 09:57:43.384222 update_engine[1420]: Feb 9 09:57:43.384222 update_engine[1420]: Feb 9 09:57:43.384222 update_engine[1420]: Feb 9 09:57:43.384222 update_engine[1420]: I0209 09:57:43.384135 1420 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:57:43.384394 update_engine[1420]: I0209 09:57:43.384246 1420 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:57:43.384416 update_engine[1420]: I0209 09:57:43.384390 1420 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
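The Omaha requests above are posted to a host named literally "disabled", which evidently cannot resolve, so every transfer dies at DNS; update_engine retries three times roughly ten seconds apart, reports the failure, and schedules the next check for 47m3s later. A hedged sketch of the same bounded-retry shape (illustrative only, not the actual update_engine logic):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Bounded-retry fetch in the spirit of the update_engine entries above:
// each attempt fails at name resolution, and after maxRetries the caller
// reports the error and waits for the next scheduled check.
func fetchWithRetry(host string, maxRetries int) error {
	var lastErr error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		_, err := net.LookupHost(host)
		if err == nil {
			return nil // DNS resolved; the HTTP transfer would start here
		}
		lastErr = err
		fmt.Printf("No HTTP response, retry %d: %v\n", attempt, err)
		time.Sleep(time.Second) // stand-in for the ~10s spacing between retries in the log
	}
	return fmt.Errorf("transfer failed after %d retries: %w", maxRetries, lastErr)
}

func main() {
	// "disabled" is not a resolvable host, mirroring
	// "Could not resolve host: disabled" above.
	if err := fetchWithRetry("disabled", 3); err != nil {
		fmt.Println("Omaha request network transfer failed:", err)
	}
}
```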
Feb 9 09:57:43.384699 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 09:57:43.426139 update_engine[1420]: E0209 09:57:43.426103 1420 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:57:43.426273 update_engine[1420]: I0209 09:57:43.426203 1420 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 09:57:43.426273 update_engine[1420]: I0209 09:57:43.426209 1420 omaha_request_action.cc:621] Omaha request response: Feb 9 09:57:43.426273 update_engine[1420]: I0209 09:57:43.426214 1420 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:57:43.426273 update_engine[1420]: I0209 09:57:43.426218 1420 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:57:43.426273 update_engine[1420]: I0209 09:57:43.426219 1420 update_attempter.cc:306] Processing Done. Feb 9 09:57:43.426273 update_engine[1420]: I0209 09:57:43.426224 1420 update_attempter.cc:310] Error event sent. Feb 9 09:57:43.426273 update_engine[1420]: I0209 09:57:43.426232 1420 update_check_scheduler.cc:74] Next update check in 47m3s Feb 9 09:57:43.426599 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 09:58:31.647233 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.12.6:45554.service. Feb 9 09:58:32.063807 sshd[3964]: Accepted publickey for core from 10.200.12.6 port 45554 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:32.065140 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:32.069583 systemd[1]: Started session-8.scope. Feb 9 09:58:32.069845 systemd-logind[1417]: New session 8 of user core. Feb 9 09:58:32.478241 sshd[3964]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:32.480785 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:58:32.481358 systemd[1]: sshd@5-10.200.20.37:22-10.200.12.6:45554.service: Deactivated successfully. Feb 9 09:58:32.482251 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:58:32.483300 systemd-logind[1417]: Removed session 8. Feb 9 09:58:37.547773 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.12.6:44158.service. Feb 9 09:58:37.965573 sshd[3979]: Accepted publickey for core from 10.200.12.6 port 44158 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:37.967190 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:37.971463 systemd[1]: Started session-9.scope. Feb 9 09:58:37.972517 systemd-logind[1417]: New session 9 of user core. Feb 9 09:58:38.327981 sshd[3979]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:38.330879 systemd[1]: sshd@6-10.200.20.37:22-10.200.12.6:44158.service: Deactivated successfully. Feb 9 09:58:38.331062 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:58:38.331719 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:58:38.332422 systemd-logind[1417]: Removed session 9. Feb 9 09:58:43.397535 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.12.6:44162.service. 
Feb 9 09:58:43.812481 sshd[3995]: Accepted publickey for core from 10.200.12.6 port 44162 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:43.814183 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:43.818426 systemd[1]: Started session-10.scope. Feb 9 09:58:43.818741 systemd-logind[1417]: New session 10 of user core. Feb 9 09:58:44.177678 sshd[3995]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:44.180793 systemd[1]: sshd@7-10.200.20.37:22-10.200.12.6:44162.service: Deactivated successfully. Feb 9 09:58:44.182249 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 09:58:44.182910 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:58:44.183850 systemd-logind[1417]: Removed session 10. Feb 9 09:58:49.251958 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.12.6:60670.service. Feb 9 09:58:49.702078 sshd[4008]: Accepted publickey for core from 10.200.12.6 port 60670 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:49.703716 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:49.707692 systemd-logind[1417]: New session 11 of user core. Feb 9 09:58:49.708167 systemd[1]: Started session-11.scope. Feb 9 09:58:50.089223 sshd[4008]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:50.092324 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit. Feb 9 09:58:50.093096 systemd[1]: sshd@8-10.200.20.37:22-10.200.12.6:60670.service: Deactivated successfully. Feb 9 09:58:50.093920 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:58:50.094631 systemd-logind[1417]: Removed session 11. Feb 9 09:58:55.158087 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.12.6:60680.service. Feb 9 09:58:55.575663 sshd[4021]: Accepted publickey for core from 10.200.12.6 port 60680 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:55.576984 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:55.581435 systemd[1]: Started session-12.scope. Feb 9 09:58:55.581455 systemd-logind[1417]: New session 12 of user core. Feb 9 09:58:55.937164 sshd[4021]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:55.939726 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:58:55.939881 systemd[1]: sshd@9-10.200.20.37:22-10.200.12.6:60680.service: Deactivated successfully. Feb 9 09:58:55.940749 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 09:58:55.941243 systemd-logind[1417]: Removed session 12. Feb 9 09:59:01.005699 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.12.6:50854.service. Feb 9 09:59:01.421328 sshd[4035]: Accepted publickey for core from 10.200.12.6 port 50854 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:01.422861 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:01.428155 systemd-logind[1417]: New session 13 of user core. Feb 9 09:59:01.428910 systemd[1]: Started session-13.scope. Feb 9 09:59:01.789485 sshd[4035]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:01.791956 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:59:01.792530 systemd[1]: sshd@10-10.200.20.37:22-10.200.12.6:50854.service: Deactivated successfully. Feb 9 09:59:01.793377 systemd[1]: session-13.scope: Deactivated successfully. 
Feb 9 09:59:01.794487 systemd-logind[1417]: Removed session 13. Feb 9 09:59:06.861169 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.12.6:50862.service. Feb 9 09:59:07.277814 sshd[4049]: Accepted publickey for core from 10.200.12.6 port 50862 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:07.279476 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:07.284106 systemd[1]: Started session-14.scope. Feb 9 09:59:07.284953 systemd-logind[1417]: New session 14 of user core. Feb 9 09:59:07.651466 sshd[4049]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:07.654538 systemd[1]: sshd@11-10.200.20.37:22-10.200.12.6:50862.service: Deactivated successfully. Feb 9 09:59:07.655433 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:59:07.657024 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit. Feb 9 09:59:07.658400 systemd-logind[1417]: Removed session 14. Feb 9 09:59:07.720975 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.12.6:53106.service. Feb 9 09:59:08.141814 sshd[4063]: Accepted publickey for core from 10.200.12.6 port 53106 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:08.143230 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:08.147587 systemd-logind[1417]: New session 15 of user core. Feb 9 09:59:08.148025 systemd[1]: Started session-15.scope. Feb 9 09:59:09.233178 sshd[4063]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:09.236007 systemd[1]: sshd@12-10.200.20.37:22-10.200.12.6:53106.service: Deactivated successfully. Feb 9 09:59:09.237584 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:59:09.238322 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:59:09.239133 systemd-logind[1417]: Removed session 15. Feb 9 09:59:09.308968 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.12.6:53114.service. Feb 9 09:59:09.759271 sshd[4074]: Accepted publickey for core from 10.200.12.6 port 53114 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:09.760889 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:09.765172 systemd[1]: Started session-16.scope. Feb 9 09:59:09.765504 systemd-logind[1417]: New session 16 of user core. Feb 9 09:59:10.143607 sshd[4074]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:10.146391 systemd[1]: sshd@13-10.200.20.37:22-10.200.12.6:53114.service: Deactivated successfully. Feb 9 09:59:10.147682 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:59:10.148174 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:59:10.149019 systemd-logind[1417]: Removed session 16. Feb 9 09:59:15.218970 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.12.6:53124.service. Feb 9 09:59:15.671082 sshd[4089]: Accepted publickey for core from 10.200.12.6 port 53124 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:15.669644 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:15.674876 systemd[1]: Started session-17.scope. Feb 9 09:59:15.675073 systemd-logind[1417]: New session 17 of user core. Feb 9 09:59:16.059244 sshd[4089]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:16.062193 systemd[1]: sshd@14-10.200.20.37:22-10.200.12.6:53124.service: Deactivated successfully. 
Feb 9 09:59:16.062994 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:59:16.063627 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:59:16.064354 systemd-logind[1417]: Removed session 17. Feb 9 09:59:21.132254 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.12.6:54976.service. Feb 9 09:59:21.554929 sshd[4101]: Accepted publickey for core from 10.200.12.6 port 54976 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:21.556567 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:21.560915 systemd[1]: Started session-18.scope. Feb 9 09:59:21.561209 systemd-logind[1417]: New session 18 of user core. Feb 9 09:59:21.934008 sshd[4101]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:21.937126 systemd[1]: sshd@15-10.200.20.37:22-10.200.12.6:54976.service: Deactivated successfully. Feb 9 09:59:21.938517 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:59:21.939232 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:59:21.940165 systemd-logind[1417]: Removed session 18. Feb 9 09:59:22.002790 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.12.6:54982.service. Feb 9 09:59:22.423574 sshd[4114]: Accepted publickey for core from 10.200.12.6 port 54982 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:22.425581 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:22.430130 systemd[1]: Started session-19.scope. Feb 9 09:59:22.430625 systemd-logind[1417]: New session 19 of user core. Feb 9 09:59:22.818253 sshd[4114]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:22.820580 systemd[1]: sshd@16-10.200.20.37:22-10.200.12.6:54982.service: Deactivated successfully. Feb 9 09:59:22.821647 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:59:22.822229 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:59:22.823271 systemd-logind[1417]: Removed session 19. Feb 9 09:59:22.891671 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.12.6:54984.service. Feb 9 09:59:23.340848 sshd[4124]: Accepted publickey for core from 10.200.12.6 port 54984 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:23.342635 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:23.346984 systemd-logind[1417]: New session 20 of user core. Feb 9 09:59:23.347046 systemd[1]: Started session-20.scope. Feb 9 09:59:24.558238 sshd[4124]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:24.560698 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:59:24.560913 systemd[1]: sshd@17-10.200.20.37:22-10.200.12.6:54984.service: Deactivated successfully. Feb 9 09:59:24.561789 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:59:24.562310 systemd-logind[1417]: Removed session 20. Feb 9 09:59:24.631794 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.12.6:54994.service. Feb 9 09:59:25.055482 sshd[4191]: Accepted publickey for core from 10.200.12.6 port 54994 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:25.057167 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:25.061535 systemd[1]: Started session-21.scope. Feb 9 09:59:25.061998 systemd-logind[1417]: New session 21 of user core. 
Feb 9 09:59:25.522435 sshd[4191]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:25.524883 systemd[1]: sshd@18-10.200.20.37:22-10.200.12.6:54994.service: Deactivated successfully. Feb 9 09:59:25.526330 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:59:25.526807 systemd-logind[1417]: Session 21 logged out. Waiting for processes to exit. Feb 9 09:59:25.527760 systemd-logind[1417]: Removed session 21. Feb 9 09:59:25.599063 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.12.6:55000.service. Feb 9 09:59:26.047872 sshd[4202]: Accepted publickey for core from 10.200.12.6 port 55000 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:26.049453 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:26.053876 systemd[1]: Started session-22.scope. Feb 9 09:59:26.054934 systemd-logind[1417]: New session 22 of user core. Feb 9 09:59:26.430129 sshd[4202]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:26.433627 systemd[1]: sshd@19-10.200.20.37:22-10.200.12.6:55000.service: Deactivated successfully. Feb 9 09:59:26.434462 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 09:59:26.434983 systemd-logind[1417]: Session 22 logged out. Waiting for processes to exit. Feb 9 09:59:26.435711 systemd-logind[1417]: Removed session 22. Feb 9 09:59:31.506759 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.12.6:39690.service. Feb 9 09:59:31.927931 sshd[4217]: Accepted publickey for core from 10.200.12.6 port 39690 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:31.929586 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:31.934108 systemd[1]: Started session-23.scope. Feb 9 09:59:31.934427 systemd-logind[1417]: New session 23 of user core. Feb 9 09:59:32.287819 sshd[4217]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:32.290637 systemd[1]: sshd@20-10.200.20.37:22-10.200.12.6:39690.service: Deactivated successfully. Feb 9 09:59:32.290822 systemd-logind[1417]: Session 23 logged out. Waiting for processes to exit. Feb 9 09:59:32.291493 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 09:59:32.292146 systemd-logind[1417]: Removed session 23. Feb 9 09:59:37.358396 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.12.6:58586.service. Feb 9 09:59:37.778939 sshd[4258]: Accepted publickey for core from 10.200.12.6 port 58586 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:37.780673 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:37.784961 systemd[1]: Started session-24.scope. Feb 9 09:59:37.785172 systemd-logind[1417]: New session 24 of user core. Feb 9 09:59:38.141700 sshd[4258]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:38.144645 systemd[1]: sshd@21-10.200.20.37:22-10.200.12.6:58586.service: Deactivated successfully. Feb 9 09:59:38.145505 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 09:59:38.146198 systemd-logind[1417]: Session 24 logged out. Waiting for processes to exit. Feb 9 09:59:38.147019 systemd-logind[1417]: Removed session 24. Feb 9 09:59:43.216629 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.12.6:58588.service. 
Feb 9 09:59:43.666822 sshd[4273]: Accepted publickey for core from 10.200.12.6 port 58588 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:43.667931 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:43.672093 systemd-logind[1417]: New session 25 of user core. Feb 9 09:59:43.672859 systemd[1]: Started session-25.scope. Feb 9 09:59:44.049234 sshd[4273]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:44.052025 systemd-logind[1417]: Session 25 logged out. Waiting for processes to exit. Feb 9 09:59:44.052206 systemd[1]: sshd@22-10.200.20.37:22-10.200.12.6:58588.service: Deactivated successfully. Feb 9 09:59:44.053116 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 09:59:44.053593 systemd-logind[1417]: Removed session 25. Feb 9 09:59:49.126390 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.12.6:38928.service. Feb 9 09:59:49.575723 sshd[4289]: Accepted publickey for core from 10.200.12.6 port 38928 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:49.576993 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:49.581797 systemd[1]: Started session-26.scope. Feb 9 09:59:49.582017 systemd-logind[1417]: New session 26 of user core. Feb 9 09:59:49.961280 sshd[4289]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:49.964354 systemd[1]: sshd@23-10.200.20.37:22-10.200.12.6:38928.service: Deactivated successfully. Feb 9 09:59:49.965209 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 09:59:49.965662 systemd-logind[1417]: Session 26 logged out. Waiting for processes to exit. Feb 9 09:59:49.966320 systemd-logind[1417]: Removed session 26. Feb 9 09:59:50.030717 systemd[1]: Started sshd@24-10.200.20.37:22-10.200.12.6:38930.service. Feb 9 09:59:50.446329 sshd[4302]: Accepted publickey for core from 10.200.12.6 port 38930 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:50.447865 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:50.452511 systemd[1]: Started session-27.scope. Feb 9 09:59:50.452742 systemd-logind[1417]: New session 27 of user core. Feb 9 09:59:52.750652 systemd[1]: run-containerd-runc-k8s.io-89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb-runc.jdws0N.mount: Deactivated successfully. 
Feb 9 09:59:52.765825 env[1433]: time="2024-02-09T09:59:52.763197946Z" level=info msg="StopContainer for \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\" with timeout 30 (s)" Feb 9 09:59:52.766734 env[1433]: time="2024-02-09T09:59:52.766671662Z" level=info msg="Stop container \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\" with signal terminated" Feb 9 09:59:52.776630 env[1433]: time="2024-02-09T09:59:52.776578476Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:59:52.781853 env[1433]: time="2024-02-09T09:59:52.781818630Z" level=info msg="StopContainer for \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\" with timeout 1 (s)" Feb 9 09:59:52.782423 env[1433]: time="2024-02-09T09:59:52.782391843Z" level=info msg="Stop container \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\" with signal terminated" Feb 9 09:59:52.790105 systemd-networkd[1604]: lxc_health: Link DOWN Feb 9 09:59:52.790110 systemd-networkd[1604]: lxc_health: Lost carrier Feb 9 09:59:52.803276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970-rootfs.mount: Deactivated successfully. Feb 9 09:59:52.832381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb-rootfs.mount: Deactivated successfully. Feb 9 09:59:53.227548 env[1433]: time="2024-02-09T09:59:53.227506539Z" level=info msg="shim disconnected" id=516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970 Feb 9 09:59:53.227993 env[1433]: time="2024-02-09T09:59:53.227970318Z" level=warning msg="cleaning up after shim disconnected" id=516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970 namespace=k8s.io Feb 9 09:59:53.228126 env[1433]: time="2024-02-09T09:59:53.228109351Z" level=info msg="cleaning up dead shim" Feb 9 09:59:53.228304 env[1433]: time="2024-02-09T09:59:53.227938239Z" level=info msg="shim disconnected" id=89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb Feb 9 09:59:53.228352 env[1433]: time="2024-02-09T09:59:53.228306622Z" level=warning msg="cleaning up after shim disconnected" id=89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb namespace=k8s.io Feb 9 09:59:53.228352 env[1433]: time="2024-02-09T09:59:53.228316502Z" level=info msg="cleaning up dead shim" Feb 9 09:59:53.236218 env[1433]: time="2024-02-09T09:59:53.236176896Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4372 runtime=io.containerd.runc.v2\n" Feb 9 09:59:53.237348 env[1433]: time="2024-02-09T09:59:53.237317483Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4373 runtime=io.containerd.runc.v2\n" Feb 9 09:59:53.279146 env[1433]: time="2024-02-09T09:59:53.279100859Z" level=info msg="StopContainer for \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\" returns successfully" Feb 9 09:59:53.279857 env[1433]: time="2024-02-09T09:59:53.279821025Z" level=info msg="StopPodSandbox for \"1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef\"" Feb 9 09:59:53.279933 env[1433]: time="2024-02-09T09:59:53.279886462Z" level=info 
msg="Container to stop \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:53.281798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef-shm.mount: Deactivated successfully. Feb 9 09:59:53.282646 env[1433]: time="2024-02-09T09:59:53.282604416Z" level=info msg="StopContainer for \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\" returns successfully" Feb 9 09:59:53.283414 env[1433]: time="2024-02-09T09:59:53.283379300Z" level=info msg="StopPodSandbox for \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\"" Feb 9 09:59:53.283498 env[1433]: time="2024-02-09T09:59:53.283437257Z" level=info msg="Container to stop \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:53.283498 env[1433]: time="2024-02-09T09:59:53.283451816Z" level=info msg="Container to stop \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:53.283498 env[1433]: time="2024-02-09T09:59:53.283462776Z" level=info msg="Container to stop \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:53.283498 env[1433]: time="2024-02-09T09:59:53.283475615Z" level=info msg="Container to stop \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:53.283498 env[1433]: time="2024-02-09T09:59:53.283486615Z" level=info msg="Container to stop \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:53.744659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef-rootfs.mount: Deactivated successfully. Feb 9 09:59:53.744818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e-rootfs.mount: Deactivated successfully. Feb 9 09:59:53.744914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e-shm.mount: Deactivated successfully. 
Feb 9 09:59:53.800090 env[1433]: time="2024-02-09T09:59:53.799008349Z" level=info msg="shim disconnected" id=42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e Feb 9 09:59:53.800482 env[1433]: time="2024-02-09T09:59:53.800456482Z" level=warning msg="cleaning up after shim disconnected" id=42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e namespace=k8s.io Feb 9 09:59:53.800596 env[1433]: time="2024-02-09T09:59:53.799150223Z" level=info msg="shim disconnected" id=1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef Feb 9 09:59:53.800677 env[1433]: time="2024-02-09T09:59:53.800645033Z" level=info msg="cleaning up dead shim" Feb 9 09:59:53.800799 env[1433]: time="2024-02-09T09:59:53.800647753Z" level=warning msg="cleaning up after shim disconnected" id=1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef namespace=k8s.io Feb 9 09:59:53.800861 env[1433]: time="2024-02-09T09:59:53.800846104Z" level=info msg="cleaning up dead shim" Feb 9 09:59:53.808269 env[1433]: time="2024-02-09T09:59:53.808227160Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4440 runtime=io.containerd.runc.v2\n" Feb 9 09:59:53.808775 env[1433]: time="2024-02-09T09:59:53.808750896Z" level=info msg="TearDown network for sandbox \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" successfully" Feb 9 09:59:53.808903 env[1433]: time="2024-02-09T09:59:53.808887010Z" level=info msg="StopPodSandbox for \"42df26ee07c0c75701a86e0e89030f116b427774c91fb50828018f39353d457e\" returns successfully" Feb 9 09:59:53.809079 env[1433]: time="2024-02-09T09:59:53.808875370Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4441 runtime=io.containerd.runc.v2\n" Feb 9 09:59:53.809481 env[1433]: time="2024-02-09T09:59:53.809460183Z" level=info msg="TearDown network for sandbox \"1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef\" successfully" Feb 9 09:59:53.809651 env[1433]: time="2024-02-09T09:59:53.809632655Z" level=info msg="StopPodSandbox for \"1d820722d6dd346c06da7991ca4218e54133a4b32e9f7132cb124490a669d0ef\" returns successfully" Feb 9 09:59:53.940398 kubelet[2579]: I0209 09:59:53.940363 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-xtables-lock\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.942813 kubelet[2579]: I0209 09:59:53.940413 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fede369-4a3d-45b7-bf54-e76e12b718cb-hubble-tls\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.942813 kubelet[2579]: I0209 09:59:53.940433 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-etc-cni-netd\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.942813 kubelet[2579]: I0209 09:59:53.940485 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-config-path\") 
pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.942813 kubelet[2579]: I0209 09:59:53.940506 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-run\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.942813 kubelet[2579]: I0209 09:59:53.940534 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgln7\" (UniqueName: \"kubernetes.io/projected/2fede369-4a3d-45b7-bf54-e76e12b718cb-kube-api-access-cgln7\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.942813 kubelet[2579]: I0209 09:59:53.940554 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fede369-4a3d-45b7-bf54-e76e12b718cb-clustermesh-secrets\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.943003 kubelet[2579]: I0209 09:59:53.940571 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-lib-modules\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.943003 kubelet[2579]: I0209 09:59:53.940587 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-cgroup\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.943003 kubelet[2579]: I0209 09:59:53.940604 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-host-proc-sys-kernel\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.943003 kubelet[2579]: I0209 09:59:53.940677 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6516a85-062a-4698-8b85-50223141b450-cilium-config-path\") pod \"d6516a85-062a-4698-8b85-50223141b450\" (UID: \"d6516a85-062a-4698-8b85-50223141b450\") " Feb 9 09:59:53.943003 kubelet[2579]: I0209 09:59:53.940703 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv7vb\" (UniqueName: \"kubernetes.io/projected/d6516a85-062a-4698-8b85-50223141b450-kube-api-access-cv7vb\") pod \"d6516a85-062a-4698-8b85-50223141b450\" (UID: \"d6516a85-062a-4698-8b85-50223141b450\") " Feb 9 09:59:53.943003 kubelet[2579]: I0209 09:59:53.940725 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cni-path\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.943216 kubelet[2579]: I0209 09:59:53.940757 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-bpf-maps\") pod 
\"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.943216 kubelet[2579]: I0209 09:59:53.940776 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-hostproc\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.943216 kubelet[2579]: I0209 09:59:53.940792 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-host-proc-sys-net\") pod \"2fede369-4a3d-45b7-bf54-e76e12b718cb\" (UID: \"2fede369-4a3d-45b7-bf54-e76e12b718cb\") " Feb 9 09:59:53.943216 kubelet[2579]: I0209 09:59:53.940856 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.943216 kubelet[2579]: I0209 09:59:53.940891 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.943368 kubelet[2579]: I0209 09:59:53.941250 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.943368 kubelet[2579]: I0209 09:59:53.941294 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.943368 kubelet[2579]: W0209 09:59:53.941458 2579 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2fede369-4a3d-45b7-bf54-e76e12b718cb/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:59:53.943368 kubelet[2579]: I0209 09:59:53.943126 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:59:53.943368 kubelet[2579]: I0209 09:59:53.943170 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.943516 kubelet[2579]: I0209 09:59:53.943435 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.943602 kubelet[2579]: I0209 09:59:53.943571 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.943693 kubelet[2579]: W0209 09:59:53.943669 2579 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d6516a85-062a-4698-8b85-50223141b450/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:59:53.947878 kubelet[2579]: I0209 09:59:53.947839 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6516a85-062a-4698-8b85-50223141b450-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d6516a85-062a-4698-8b85-50223141b450" (UID: "d6516a85-062a-4698-8b85-50223141b450"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:59:53.947993 kubelet[2579]: I0209 09:59:53.947898 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.947993 kubelet[2579]: I0209 09:59:53.947916 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cni-path" (OuterVolumeSpecName: "cni-path") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.947993 kubelet[2579]: I0209 09:59:53.947932 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-hostproc" (OuterVolumeSpecName: "hostproc") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.951453 systemd[1]: var-lib-kubelet-pods-2fede369\x2d4a3d\x2d45b7\x2dbf54\x2de76e12b718cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcgln7.mount: Deactivated successfully. 
Feb 9 09:59:53.954566 kubelet[2579]: I0209 09:59:53.954170 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fede369-4a3d-45b7-bf54-e76e12b718cb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:53.951616 systemd[1]: var-lib-kubelet-pods-2fede369\x2d4a3d\x2d45b7\x2dbf54\x2de76e12b718cb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:59:53.955563 kubelet[2579]: I0209 09:59:53.955524 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fede369-4a3d-45b7-bf54-e76e12b718cb-kube-api-access-cgln7" (OuterVolumeSpecName: "kube-api-access-cgln7") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "kube-api-access-cgln7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:53.958220 systemd[1]: var-lib-kubelet-pods-2fede369\x2d4a3d\x2d45b7\x2dbf54\x2de76e12b718cb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:59:53.959309 kubelet[2579]: I0209 09:59:53.959269 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fede369-4a3d-45b7-bf54-e76e12b718cb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2fede369-4a3d-45b7-bf54-e76e12b718cb" (UID: "2fede369-4a3d-45b7-bf54-e76e12b718cb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:53.961614 systemd[1]: var-lib-kubelet-pods-d6516a85\x2d062a\x2d4698\x2d8b85\x2d50223141b450-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcv7vb.mount: Deactivated successfully. Feb 9 09:59:53.962583 kubelet[2579]: I0209 09:59:53.962543 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6516a85-062a-4698-8b85-50223141b450-kube-api-access-cv7vb" (OuterVolumeSpecName: "kube-api-access-cv7vb") pod "d6516a85-062a-4698-8b85-50223141b450" (UID: "d6516a85-062a-4698-8b85-50223141b450"). InnerVolumeSpecName "kube-api-access-cv7vb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:53.987114 kubelet[2579]: E0209 09:59:53.987082 2579 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:59:54.041670 kubelet[2579]: I0209 09:59:54.041577 2579 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cni-path\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.041670 kubelet[2579]: I0209 09:59:54.041613 2579 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6516a85-062a-4698-8b85-50223141b450-cilium-config-path\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.041670 kubelet[2579]: I0209 09:59:54.041626 2579 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-cv7vb\" (UniqueName: \"kubernetes.io/projected/d6516a85-062a-4698-8b85-50223141b450-kube-api-access-cv7vb\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.041670 kubelet[2579]: I0209 09:59:54.041637 2579 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-hostproc\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.041670 kubelet[2579]: I0209 09:59:54.041647 2579 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-host-proc-sys-net\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.042335 kubelet[2579]: I0209 09:59:54.042251 2579 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-bpf-maps\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.042564 kubelet[2579]: I0209 09:59:54.042528 2579 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-xtables-lock\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.042696 kubelet[2579]: I0209 09:59:54.042686 2579 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fede369-4a3d-45b7-bf54-e76e12b718cb-hubble-tls\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.042829 kubelet[2579]: I0209 09:59:54.042820 2579 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-config-path\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.042982 kubelet[2579]: I0209 09:59:54.042943 2579 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-etc-cni-netd\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.043092 kubelet[2579]: I0209 09:59:54.043081 2579 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-cgln7\" (UniqueName: \"kubernetes.io/projected/2fede369-4a3d-45b7-bf54-e76e12b718cb-kube-api-access-cgln7\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.043161 kubelet[2579]: I0209 09:59:54.043152 2579 reconciler_common.go:295] "Volume detached for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-run\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.043220 kubelet[2579]: I0209 09:59:54.043211 2579 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.043280 kubelet[2579]: I0209 09:59:54.043272 2579 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fede369-4a3d-45b7-bf54-e76e12b718cb-clustermesh-secrets\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.043340 kubelet[2579]: I0209 09:59:54.043332 2579 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-lib-modules\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.043397 kubelet[2579]: I0209 09:59:54.043389 2579 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fede369-4a3d-45b7-bf54-e76e12b718cb-cilium-cgroup\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:54.238852 kubelet[2579]: I0209 09:59:54.238810 2579 scope.go:115] "RemoveContainer" containerID="516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970" Feb 9 09:59:54.240938 env[1433]: time="2024-02-09T09:59:54.240628445Z" level=info msg="RemoveContainer for \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\"" Feb 9 09:59:54.265430 env[1433]: time="2024-02-09T09:59:54.265379946Z" level=info msg="RemoveContainer for \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\" returns successfully" Feb 9 09:59:54.265833 kubelet[2579]: I0209 09:59:54.265813 2579 scope.go:115] "RemoveContainer" containerID="516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970" Feb 9 09:59:54.266228 env[1433]: time="2024-02-09T09:59:54.266146471Z" level=error msg="ContainerStatus for \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\": not found" Feb 9 09:59:54.266382 kubelet[2579]: E0209 09:59:54.266368 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\": not found" containerID="516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970" Feb 9 09:59:54.266503 kubelet[2579]: I0209 09:59:54.266491 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970} err="failed to get container status \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\": rpc error: code = NotFound desc = an error occurred when try to find container \"516074dce0410cd399e318203a4368a6a0232a679d094818064073d3c3ebf970\": not found" Feb 9 09:59:54.266596 kubelet[2579]: I0209 09:59:54.266585 2579 scope.go:115] "RemoveContainer" containerID="89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb" Feb 9 09:59:54.268816 env[1433]: time="2024-02-09T09:59:54.268707753Z" level=info msg="RemoveContainer for 
\"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\"" Feb 9 09:59:54.287837 env[1433]: time="2024-02-09T09:59:54.287697719Z" level=info msg="RemoveContainer for \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\" returns successfully" Feb 9 09:59:54.288163 kubelet[2579]: I0209 09:59:54.288113 2579 scope.go:115] "RemoveContainer" containerID="08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32" Feb 9 09:59:54.289753 env[1433]: time="2024-02-09T09:59:54.289476637Z" level=info msg="RemoveContainer for \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\"" Feb 9 09:59:54.311547 env[1433]: time="2024-02-09T09:59:54.311350910Z" level=info msg="RemoveContainer for \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\" returns successfully" Feb 9 09:59:54.312172 kubelet[2579]: I0209 09:59:54.312116 2579 scope.go:115] "RemoveContainer" containerID="39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2" Feb 9 09:59:54.313507 env[1433]: time="2024-02-09T09:59:54.313478253Z" level=info msg="RemoveContainer for \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\"" Feb 9 09:59:54.336709 env[1433]: time="2024-02-09T09:59:54.336671225Z" level=info msg="RemoveContainer for \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\" returns successfully" Feb 9 09:59:54.337142 kubelet[2579]: I0209 09:59:54.337122 2579 scope.go:115] "RemoveContainer" containerID="24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f" Feb 9 09:59:54.338332 env[1433]: time="2024-02-09T09:59:54.338306350Z" level=info msg="RemoveContainer for \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\"" Feb 9 09:59:54.360718 env[1433]: time="2024-02-09T09:59:54.360673001Z" level=info msg="RemoveContainer for \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\" returns successfully" Feb 9 09:59:54.361174 kubelet[2579]: I0209 09:59:54.361146 2579 scope.go:115] "RemoveContainer" containerID="471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41" Feb 9 09:59:54.362282 env[1433]: time="2024-02-09T09:59:54.362256368Z" level=info msg="RemoveContainer for \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\"" Feb 9 09:59:54.387501 env[1433]: time="2024-02-09T09:59:54.387453968Z" level=info msg="RemoveContainer for \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\" returns successfully" Feb 9 09:59:54.387739 kubelet[2579]: I0209 09:59:54.387720 2579 scope.go:115] "RemoveContainer" containerID="89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb" Feb 9 09:59:54.388091 env[1433]: time="2024-02-09T09:59:54.388014223Z" level=error msg="ContainerStatus for \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\": not found" Feb 9 09:59:54.388266 kubelet[2579]: E0209 09:59:54.388245 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\": not found" containerID="89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb" Feb 9 09:59:54.388321 kubelet[2579]: I0209 09:59:54.388281 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb} err="failed to get container status \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\": rpc error: code = NotFound desc = an error occurred when try to find container \"89323517f86cb6eedd4591550604f40301f79539c36f8fd41844c849ff7b8fdb\": not found" Feb 9 09:59:54.388321 kubelet[2579]: I0209 09:59:54.388292 2579 scope.go:115] "RemoveContainer" containerID="08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32" Feb 9 09:59:54.388497 env[1433]: time="2024-02-09T09:59:54.388449163Z" level=error msg="ContainerStatus for \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\": not found" Feb 9 09:59:54.388621 kubelet[2579]: E0209 09:59:54.388602 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\": not found" containerID="08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32" Feb 9 09:59:54.388663 kubelet[2579]: I0209 09:59:54.388653 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32} err="failed to get container status \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\": rpc error: code = NotFound desc = an error occurred when try to find container \"08c7ec84b6f923e838b73a8a4614e2e27f752bf32c3c4243954bbff8d9cd9c32\": not found" Feb 9 09:59:54.388693 kubelet[2579]: I0209 09:59:54.388664 2579 scope.go:115] "RemoveContainer" containerID="39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2" Feb 9 09:59:54.388863 env[1433]: time="2024-02-09T09:59:54.388818986Z" level=error msg="ContainerStatus for \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\": not found" Feb 9 09:59:54.388964 kubelet[2579]: E0209 09:59:54.388945 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\": not found" containerID="39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2" Feb 9 09:59:54.389005 kubelet[2579]: I0209 09:59:54.388988 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2} err="failed to get container status \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\": rpc error: code = NotFound desc = an error occurred when try to find container \"39d0b63a4bc2ca4136e90addfd41dd264bc0a91d4e7b7836578553ad74ff9fa2\": not found" Feb 9 09:59:54.389005 kubelet[2579]: I0209 09:59:54.388998 2579 scope.go:115] "RemoveContainer" containerID="24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f" Feb 9 09:59:54.389205 env[1433]: time="2024-02-09T09:59:54.389159970Z" level=error msg="ContainerStatus for \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\": not found" Feb 9 09:59:54.389348 kubelet[2579]: E0209 09:59:54.389309 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\": not found" containerID="24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f" Feb 9 09:59:54.389391 kubelet[2579]: I0209 09:59:54.389360 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f} err="failed to get container status \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"24af84ac6ff6e53cc5afd2cf3ec18f624924823b85d9ce32f381c8ebd8794a7f\": not found" Feb 9 09:59:54.389391 kubelet[2579]: I0209 09:59:54.389371 2579 scope.go:115] "RemoveContainer" containerID="471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41" Feb 9 09:59:54.389605 env[1433]: time="2024-02-09T09:59:54.389561631Z" level=error msg="ContainerStatus for \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\": not found" Feb 9 09:59:54.389721 kubelet[2579]: E0209 09:59:54.389700 2579 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\": not found" containerID="471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41" Feb 9 09:59:54.389785 kubelet[2579]: I0209 09:59:54.389744 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41} err="failed to get container status \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\": rpc error: code = NotFound desc = an error occurred when try to find container \"471c1f28fd7c2e3d6ac236bbd721c08ba946a73f4171a719774e10b432899e41\": not found" Feb 9 09:59:54.764229 sshd[4302]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:54.767070 systemd[1]: sshd@24-10.200.20.37:22-10.200.12.6:38930.service: Deactivated successfully. Feb 9 09:59:54.767897 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 09:59:54.768669 systemd-logind[1417]: Session 27 logged out. Waiting for processes to exit. Feb 9 09:59:54.769369 systemd-logind[1417]: Removed session 27. Feb 9 09:59:54.838096 systemd[1]: Started sshd@25-10.200.20.37:22-10.200.12.6:38932.service. 
Feb 9 09:59:54.842209 kubelet[2579]: I0209 09:59:54.842178 2579 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2fede369-4a3d-45b7-bf54-e76e12b718cb path="/var/lib/kubelet/pods/2fede369-4a3d-45b7-bf54-e76e12b718cb/volumes" Feb 9 09:59:54.843125 kubelet[2579]: I0209 09:59:54.843108 2579 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d6516a85-062a-4698-8b85-50223141b450 path="/var/lib/kubelet/pods/d6516a85-062a-4698-8b85-50223141b450/volumes" Feb 9 09:59:55.287365 sshd[4473]: Accepted publickey for core from 10.200.12.6 port 38932 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:55.289209 sshd[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:55.293088 systemd-logind[1417]: New session 28 of user core. Feb 9 09:59:55.293561 systemd[1]: Started session-28.scope. Feb 9 09:59:55.838080 kubelet[2579]: E0209 09:59:55.838031 2579 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-hkxn8" podUID=517e3b1a-a8db-454a-b71e-6f294c79dbef Feb 9 09:59:56.779202 kubelet[2579]: I0209 09:59:56.779158 2579 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:59:56.779202 kubelet[2579]: E0209 09:59:56.779216 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fede369-4a3d-45b7-bf54-e76e12b718cb" containerName="apply-sysctl-overwrites" Feb 9 09:59:56.779397 kubelet[2579]: E0209 09:59:56.779229 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fede369-4a3d-45b7-bf54-e76e12b718cb" containerName="clean-cilium-state" Feb 9 09:59:56.779397 kubelet[2579]: E0209 09:59:56.779238 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fede369-4a3d-45b7-bf54-e76e12b718cb" containerName="mount-cgroup" Feb 9 09:59:56.779397 kubelet[2579]: E0209 09:59:56.779246 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fede369-4a3d-45b7-bf54-e76e12b718cb" containerName="mount-bpf-fs" Feb 9 09:59:56.779397 kubelet[2579]: E0209 09:59:56.779252 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d6516a85-062a-4698-8b85-50223141b450" containerName="cilium-operator" Feb 9 09:59:56.779397 kubelet[2579]: E0209 09:59:56.779259 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fede369-4a3d-45b7-bf54-e76e12b718cb" containerName="cilium-agent" Feb 9 09:59:56.779397 kubelet[2579]: I0209 09:59:56.779285 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="2fede369-4a3d-45b7-bf54-e76e12b718cb" containerName="cilium-agent" Feb 9 09:59:56.779397 kubelet[2579]: I0209 09:59:56.779292 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="d6516a85-062a-4698-8b85-50223141b450" containerName="cilium-operator" Feb 9 09:59:56.847481 sshd[4473]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:56.850959 systemd[1]: sshd@25-10.200.20.37:22-10.200.12.6:38932.service: Deactivated successfully. Feb 9 09:59:56.851797 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 09:59:56.852171 systemd-logind[1417]: Session 28 logged out. Waiting for processes to exit. Feb 9 09:59:56.853445 systemd-logind[1417]: Removed session 28. 
Feb 9 09:59:56.858919 kubelet[2579]: I0209 09:59:56.858890 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-run\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859250 kubelet[2579]: I0209 09:59:56.859198 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-xtables-lock\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859250 kubelet[2579]: I0209 09:59:56.859232 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-ipsec-secrets\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859326 kubelet[2579]: I0209 09:59:56.859259 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4071add9-2959-42fe-8fb4-be5a647f26b6-hubble-tls\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859326 kubelet[2579]: I0209 09:59:56.859281 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-etc-cni-netd\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859326 kubelet[2579]: I0209 09:59:56.859303 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4071add9-2959-42fe-8fb4-be5a647f26b6-clustermesh-secrets\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859326 kubelet[2579]: I0209 09:59:56.859324 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfgmx\" (UniqueName: \"kubernetes.io/projected/4071add9-2959-42fe-8fb4-be5a647f26b6-kube-api-access-dfgmx\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859426 kubelet[2579]: I0209 09:59:56.859344 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-config-path\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859426 kubelet[2579]: I0209 09:59:56.859364 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-bpf-maps\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859426 kubelet[2579]: I0209 09:59:56.859383 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-hostproc\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859426 kubelet[2579]: I0209 09:59:56.859400 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-cgroup\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859426 kubelet[2579]: I0209 09:59:56.859419 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cni-path\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859540 kubelet[2579]: I0209 09:59:56.859437 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-lib-modules\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859540 kubelet[2579]: I0209 09:59:56.859460 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-host-proc-sys-net\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.859540 kubelet[2579]: I0209 09:59:56.859480 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-host-proc-sys-kernel\") pod \"cilium-hq9dv\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " pod="kube-system/cilium-hq9dv" Feb 9 09:59:56.923507 systemd[1]: Started sshd@26-10.200.20.37:22-10.200.12.6:38936.service. Feb 9 09:59:57.084871 env[1433]: time="2024-02-09T09:59:57.083742880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hq9dv,Uid:4071add9-2959-42fe-8fb4-be5a647f26b6,Namespace:kube-system,Attempt:0,}" Feb 9 09:59:57.183162 env[1433]: time="2024-02-09T09:59:57.183083777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:59:57.183318 env[1433]: time="2024-02-09T09:59:57.183172053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:59:57.183318 env[1433]: time="2024-02-09T09:59:57.183199611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:59:57.183479 env[1433]: time="2024-02-09T09:59:57.183424241Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a pid=4497 runtime=io.containerd.runc.v2 Feb 9 09:59:57.223774 env[1433]: time="2024-02-09T09:59:57.223712967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hq9dv,Uid:4071add9-2959-42fe-8fb4-be5a647f26b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\"" Feb 9 09:59:57.228458 env[1433]: time="2024-02-09T09:59:57.228421118Z" level=info msg="CreateContainer within sandbox \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:59:57.335888 env[1433]: time="2024-02-09T09:59:57.335554827Z" level=info msg="CreateContainer within sandbox \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7ff637be5cff86d03817936b2946968483843e687dfc812d156329638b8269d8\"" Feb 9 09:59:57.337334 env[1433]: time="2024-02-09T09:59:57.337305469Z" level=info msg="StartContainer for \"7ff637be5cff86d03817936b2946968483843e687dfc812d156329638b8269d8\"" Feb 9 09:59:57.375075 sshd[4484]: Accepted publickey for core from 10.200.12.6 port 38936 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:57.377621 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:57.387327 systemd-logind[1417]: New session 29 of user core. Feb 9 09:59:57.387570 systemd[1]: Started session-29.scope. Feb 9 09:59:57.399810 env[1433]: time="2024-02-09T09:59:57.399759408Z" level=info msg="StartContainer for \"7ff637be5cff86d03817936b2946968483843e687dfc812d156329638b8269d8\" returns successfully" Feb 9 09:59:57.501314 env[1433]: time="2024-02-09T09:59:57.501241489Z" level=info msg="shim disconnected" id=7ff637be5cff86d03817936b2946968483843e687dfc812d156329638b8269d8 Feb 9 09:59:57.501314 env[1433]: time="2024-02-09T09:59:57.501313446Z" level=warning msg="cleaning up after shim disconnected" id=7ff637be5cff86d03817936b2946968483843e687dfc812d156329638b8269d8 namespace=k8s.io Feb 9 09:59:57.501566 env[1433]: time="2024-02-09T09:59:57.501326165Z" level=info msg="cleaning up dead shim" Feb 9 09:59:57.508144 env[1433]: time="2024-02-09T09:59:57.508084824Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4582 runtime=io.containerd.runc.v2\n" Feb 9 09:59:57.796271 sshd[4484]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:57.799134 systemd[1]: sshd@26-10.200.20.37:22-10.200.12.6:38936.service: Deactivated successfully. Feb 9 09:59:57.800796 systemd-logind[1417]: Session 29 logged out. Waiting for processes to exit. Feb 9 09:59:57.801484 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 09:59:57.802556 systemd-logind[1417]: Removed session 29. 
Feb 9 09:59:57.838202 kubelet[2579]: E0209 09:59:57.838167 2579 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-hkxn8" podUID=517e3b1a-a8db-454a-b71e-6f294c79dbef Feb 9 09:59:57.871815 systemd[1]: Started sshd@27-10.200.20.37:22-10.200.12.6:52314.service. Feb 9 09:59:58.257652 env[1433]: time="2024-02-09T09:59:58.257611090Z" level=info msg="CreateContainer within sandbox \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:59:58.316121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2251851524.mount: Deactivated successfully. Feb 9 09:59:58.322214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243289257.mount: Deactivated successfully. Feb 9 09:59:58.327835 sshd[4605]: Accepted publickey for core from 10.200.12.6 port 52314 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:58.329292 sshd[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:58.333510 systemd[1]: Started session-30.scope. Feb 9 09:59:58.333679 systemd-logind[1417]: New session 30 of user core. Feb 9 09:59:58.363716 env[1433]: time="2024-02-09T09:59:58.363661338Z" level=info msg="CreateContainer within sandbox \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f9d239d8af6f14d863d1111c7d6a801a16e301dbb33d4c1f062e63cc3dee67d9\"" Feb 9 09:59:58.366781 env[1433]: time="2024-02-09T09:59:58.365113635Z" level=info msg="StartContainer for \"f9d239d8af6f14d863d1111c7d6a801a16e301dbb33d4c1f062e63cc3dee67d9\"" Feb 9 09:59:58.424471 env[1433]: time="2024-02-09T09:59:58.424416542Z" level=info msg="StartContainer for \"f9d239d8af6f14d863d1111c7d6a801a16e301dbb33d4c1f062e63cc3dee67d9\" returns successfully" Feb 9 09:59:58.475755 env[1433]: time="2024-02-09T09:59:58.475705523Z" level=info msg="shim disconnected" id=f9d239d8af6f14d863d1111c7d6a801a16e301dbb33d4c1f062e63cc3dee67d9 Feb 9 09:59:58.475755 env[1433]: time="2024-02-09T09:59:58.475755201Z" level=warning msg="cleaning up after shim disconnected" id=f9d239d8af6f14d863d1111c7d6a801a16e301dbb33d4c1f062e63cc3dee67d9 namespace=k8s.io Feb 9 09:59:58.475994 env[1433]: time="2024-02-09T09:59:58.475764760Z" level=info msg="cleaning up dead shim" Feb 9 09:59:58.483748 env[1433]: time="2024-02-09T09:59:58.483704450Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4658 runtime=io.containerd.runc.v2\n" Feb 9 09:59:58.988177 kubelet[2579]: E0209 09:59:58.988141 2579 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:59:59.260340 env[1433]: time="2024-02-09T09:59:59.259242489Z" level=info msg="StopPodSandbox for \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\"" Feb 9 09:59:59.260802 env[1433]: time="2024-02-09T09:59:59.260753823Z" level=info msg="Container to stop \"7ff637be5cff86d03817936b2946968483843e687dfc812d156329638b8269d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:59.260915 env[1433]: 
time="2024-02-09T09:59:59.260878498Z" level=info msg="Container to stop \"f9d239d8af6f14d863d1111c7d6a801a16e301dbb33d4c1f062e63cc3dee67d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:59.262735 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a-shm.mount: Deactivated successfully. Feb 9 09:59:59.301269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a-rootfs.mount: Deactivated successfully. Feb 9 09:59:59.339389 env[1433]: time="2024-02-09T09:59:59.339332159Z" level=info msg="shim disconnected" id=f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a Feb 9 09:59:59.339584 env[1433]: time="2024-02-09T09:59:59.339399116Z" level=warning msg="cleaning up after shim disconnected" id=f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a namespace=k8s.io Feb 9 09:59:59.339584 env[1433]: time="2024-02-09T09:59:59.339409876Z" level=info msg="cleaning up dead shim" Feb 9 09:59:59.347900 env[1433]: time="2024-02-09T09:59:59.347850748Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4695 runtime=io.containerd.runc.v2\n" Feb 9 09:59:59.348248 env[1433]: time="2024-02-09T09:59:59.348215252Z" level=info msg="TearDown network for sandbox \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\" successfully" Feb 9 09:59:59.348294 env[1433]: time="2024-02-09T09:59:59.348245371Z" level=info msg="StopPodSandbox for \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\" returns successfully" Feb 9 09:59:59.479848 kubelet[2579]: I0209 09:59:59.479800 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.479848 kubelet[2579]: I0209 09:59:59.479843 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-run\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480153 kubelet[2579]: I0209 09:59:59.479889 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-ipsec-secrets\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480153 kubelet[2579]: I0209 09:59:59.479925 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4071add9-2959-42fe-8fb4-be5a647f26b6-hubble-tls\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480153 kubelet[2579]: I0209 09:59:59.479945 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4071add9-2959-42fe-8fb4-be5a647f26b6-clustermesh-secrets\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480237 kubelet[2579]: I0209 09:59:59.480157 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-cgroup\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480237 kubelet[2579]: I0209 09:59:59.480181 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-lib-modules\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480237 kubelet[2579]: I0209 09:59:59.480204 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-host-proc-sys-kernel\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480521 kubelet[2579]: I0209 09:59:59.480505 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-config-path\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480559 kubelet[2579]: I0209 09:59:59.480554 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfgmx\" (UniqueName: \"kubernetes.io/projected/4071add9-2959-42fe-8fb4-be5a647f26b6-kube-api-access-dfgmx\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480585 kubelet[2579]: I0209 09:59:59.480574 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-bpf-maps\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: 
\"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480612 kubelet[2579]: I0209 09:59:59.480590 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-hostproc\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480612 kubelet[2579]: I0209 09:59:59.480607 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cni-path\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480657 kubelet[2579]: I0209 09:59:59.480635 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-xtables-lock\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480657 kubelet[2579]: I0209 09:59:59.480657 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-etc-cni-netd\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480702 kubelet[2579]: I0209 09:59:59.480675 2579 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-host-proc-sys-net\") pod \"4071add9-2959-42fe-8fb4-be5a647f26b6\" (UID: \"4071add9-2959-42fe-8fb4-be5a647f26b6\") " Feb 9 09:59:59.480728 kubelet[2579]: I0209 09:59:59.480719 2579 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-run\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.480754 kubelet[2579]: I0209 09:59:59.480740 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.481251 kubelet[2579]: I0209 09:59:59.481217 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.481320 kubelet[2579]: I0209 09:59:59.481253 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.481320 kubelet[2579]: I0209 09:59:59.481269 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.481395 kubelet[2579]: W0209 09:59:59.481365 2579 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4071add9-2959-42fe-8fb4-be5a647f26b6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:59:59.485316 kubelet[2579]: I0209 09:59:59.483256 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:59:59.485316 kubelet[2579]: I0209 09:59:59.483308 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cni-path" (OuterVolumeSpecName: "cni-path") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.485316 kubelet[2579]: I0209 09:59:59.483327 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.485316 kubelet[2579]: I0209 09:59:59.483342 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-hostproc" (OuterVolumeSpecName: "hostproc") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.485316 kubelet[2579]: I0209 09:59:59.483360 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.484669 systemd[1]: var-lib-kubelet-pods-4071add9\x2d2959\x2d42fe\x2d8fb4\x2dbe5a647f26b6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:59:59.485739 kubelet[2579]: I0209 09:59:59.483375 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.489427 systemd[1]: var-lib-kubelet-pods-4071add9\x2d2959\x2d42fe\x2d8fb4\x2dbe5a647f26b6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:59:59.490130 kubelet[2579]: I0209 09:59:59.487938 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:59.490808 kubelet[2579]: I0209 09:59:59.490768 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4071add9-2959-42fe-8fb4-be5a647f26b6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:59.490888 kubelet[2579]: I0209 09:59:59.490869 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4071add9-2959-42fe-8fb4-be5a647f26b6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:59.493522 kubelet[2579]: I0209 09:59:59.493493 2579 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4071add9-2959-42fe-8fb4-be5a647f26b6-kube-api-access-dfgmx" (OuterVolumeSpecName: "kube-api-access-dfgmx") pod "4071add9-2959-42fe-8fb4-be5a647f26b6" (UID: "4071add9-2959-42fe-8fb4-be5a647f26b6"). InnerVolumeSpecName "kube-api-access-dfgmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:59.582868 kubelet[2579]: I0209 09:59:59.581785 2579 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-dfgmx\" (UniqueName: \"kubernetes.io/projected/4071add9-2959-42fe-8fb4-be5a647f26b6-kube-api-access-dfgmx\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.582868 kubelet[2579]: I0209 09:59:59.581818 2579 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-hostproc\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.582868 kubelet[2579]: I0209 09:59:59.581830 2579 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cni-path\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.582868 kubelet[2579]: I0209 09:59:59.581841 2579 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-bpf-maps\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.582868 kubelet[2579]: I0209 09:59:59.581850 2579 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-xtables-lock\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.582868 kubelet[2579]: I0209 09:59:59.581860 2579 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-etc-cni-netd\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.582868 kubelet[2579]: I0209 09:59:59.581869 2579 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-host-proc-sys-net\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.582868 kubelet[2579]: I0209 09:59:59.581879 2579 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.583163 kubelet[2579]: I0209 09:59:59.581889 2579 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4071add9-2959-42fe-8fb4-be5a647f26b6-hubble-tls\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.583163 kubelet[2579]: I0209 09:59:59.581898 2579 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4071add9-2959-42fe-8fb4-be5a647f26b6-clustermesh-secrets\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.583163 kubelet[2579]: I0209 09:59:59.581910 2579 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-cgroup\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.583163 kubelet[2579]: I0209 09:59:59.581921 2579 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-lib-modules\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.583163 kubelet[2579]: I0209 09:59:59.581931 2579 reconciler_common.go:295] "Volume 
detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4071add9-2959-42fe-8fb4-be5a647f26b6-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.583163 kubelet[2579]: I0209 09:59:59.581940 2579 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4071add9-2959-42fe-8fb4-be5a647f26b6-cilium-config-path\") on node \"ci-3510.3.2-a-37d4719b0b\" DevicePath \"\"" Feb 9 09:59:59.838603 kubelet[2579]: E0209 09:59:59.838109 2579 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-hkxn8" podUID=517e3b1a-a8db-454a-b71e-6f294c79dbef Feb 9 09:59:59.970979 systemd[1]: var-lib-kubelet-pods-4071add9\x2d2959\x2d42fe\x2d8fb4\x2dbe5a647f26b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddfgmx.mount: Deactivated successfully. Feb 9 09:59:59.971138 systemd[1]: var-lib-kubelet-pods-4071add9\x2d2959\x2d42fe\x2d8fb4\x2dbe5a647f26b6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 10:00:00.260743 kubelet[2579]: I0209 10:00:00.260712 2579 scope.go:115] "RemoveContainer" containerID="f9d239d8af6f14d863d1111c7d6a801a16e301dbb33d4c1f062e63cc3dee67d9" Feb 9 10:00:00.264080 env[1433]: time="2024-02-09T10:00:00.263907149Z" level=info msg="RemoveContainer for \"f9d239d8af6f14d863d1111c7d6a801a16e301dbb33d4c1f062e63cc3dee67d9\"" Feb 9 10:00:00.286883 env[1433]: time="2024-02-09T10:00:00.286773124Z" level=info msg="RemoveContainer for \"f9d239d8af6f14d863d1111c7d6a801a16e301dbb33d4c1f062e63cc3dee67d9\" returns successfully" Feb 9 10:00:00.287241 kubelet[2579]: I0209 10:00:00.287188 2579 scope.go:115] "RemoveContainer" containerID="7ff637be5cff86d03817936b2946968483843e687dfc812d156329638b8269d8" Feb 9 10:00:00.288640 env[1433]: time="2024-02-09T10:00:00.288356575Z" level=info msg="RemoveContainer for \"7ff637be5cff86d03817936b2946968483843e687dfc812d156329638b8269d8\"" Feb 9 10:00:00.302585 kubelet[2579]: I0209 10:00:00.302542 2579 topology_manager.go:210] "Topology Admit Handler" Feb 9 10:00:00.302746 kubelet[2579]: E0209 10:00:00.302607 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4071add9-2959-42fe-8fb4-be5a647f26b6" containerName="mount-cgroup" Feb 9 10:00:00.302746 kubelet[2579]: E0209 10:00:00.302619 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4071add9-2959-42fe-8fb4-be5a647f26b6" containerName="apply-sysctl-overwrites" Feb 9 10:00:00.302746 kubelet[2579]: I0209 10:00:00.302642 2579 memory_manager.go:346] "RemoveStaleState removing state" podUID="4071add9-2959-42fe-8fb4-be5a647f26b6" containerName="apply-sysctl-overwrites" Feb 9 10:00:00.315900 env[1433]: time="2024-02-09T10:00:00.315775433Z" level=info msg="RemoveContainer for \"7ff637be5cff86d03817936b2946968483843e687dfc812d156329638b8269d8\" returns successfully" Feb 9 10:00:00.386201 kubelet[2579]: I0209 10:00:00.386174 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-cilium-run\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.386402 kubelet[2579]: I0209 10:00:00.386391 2579 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-lib-modules\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.386496 kubelet[2579]: I0209 10:00:00.386486 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-cilium-cgroup\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.386587 kubelet[2579]: I0209 10:00:00.386577 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-host-proc-sys-kernel\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.386683 kubelet[2579]: I0209 10:00:00.386673 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-hostproc\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.386778 kubelet[2579]: I0209 10:00:00.386768 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-bpf-maps\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.386868 kubelet[2579]: I0209 10:00:00.386859 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-cilium-ipsec-secrets\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.386958 kubelet[2579]: I0209 10:00:00.386948 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-cni-path\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.387064 kubelet[2579]: I0209 10:00:00.387055 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-hubble-tls\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.387184 kubelet[2579]: I0209 10:00:00.387175 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-xtables-lock\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.387418 kubelet[2579]: I0209 10:00:00.387387 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-clustermesh-secrets\") pod \"cilium-mc2vb\" (UID: 
\"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.387471 kubelet[2579]: I0209 10:00:00.387427 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-host-proc-sys-net\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.387471 kubelet[2579]: I0209 10:00:00.387451 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rklqn\" (UniqueName: \"kubernetes.io/projected/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-kube-api-access-rklqn\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.387526 kubelet[2579]: I0209 10:00:00.387520 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-etc-cni-netd\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.387567 kubelet[2579]: I0209 10:00:00.387551 2579 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0aef0ea5-2b73-456c-a6ba-fd638b4d9835-cilium-config-path\") pod \"cilium-mc2vb\" (UID: \"0aef0ea5-2b73-456c-a6ba-fd638b4d9835\") " pod="kube-system/cilium-mc2vb" Feb 9 10:00:00.606065 env[1433]: time="2024-02-09T10:00:00.605891285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mc2vb,Uid:0aef0ea5-2b73-456c-a6ba-fd638b4d9835,Namespace:kube-system,Attempt:0,}" Feb 9 10:00:00.699218 env[1433]: time="2024-02-09T10:00:00.699138625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:00.699218 env[1433]: time="2024-02-09T10:00:00.699182463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:00.699522 env[1433]: time="2024-02-09T10:00:00.699193423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:00.699522 env[1433]: time="2024-02-09T10:00:00.699455691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991 pid=4724 runtime=io.containerd.runc.v2 Feb 9 10:00:00.735207 env[1433]: time="2024-02-09T10:00:00.735160192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mc2vb,Uid:0aef0ea5-2b73-456c-a6ba-fd638b4d9835,Namespace:kube-system,Attempt:0,} returns sandbox id \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\"" Feb 9 10:00:00.740116 env[1433]: time="2024-02-09T10:00:00.740070100Z" level=info msg="CreateContainer within sandbox \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:00:00.839972 env[1433]: time="2024-02-09T10:00:00.839933275Z" level=info msg="StopPodSandbox for \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\"" Feb 9 10:00:00.841300 env[1433]: time="2024-02-09T10:00:00.840164425Z" level=info msg="TearDown network for sandbox \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\" successfully" Feb 9 10:00:00.841300 env[1433]: time="2024-02-09T10:00:00.840207183Z" level=info msg="StopPodSandbox for \"f97b199933588bd2ce46629b0eedd6565ac2af9ad7277e616a391d878b75214a\" returns successfully" Feb 9 10:00:00.842265 kubelet[2579]: I0209 10:00:00.842238 2579 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4071add9-2959-42fe-8fb4-be5a647f26b6 path="/var/lib/kubelet/pods/4071add9-2959-42fe-8fb4-be5a647f26b6/volumes" Feb 9 10:00:00.845618 env[1433]: time="2024-02-09T10:00:00.845577152Z" level=info msg="CreateContainer within sandbox \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b318bea38e9125ec3b12719f15e8175e5db8ef7360c6e636406ccca0156ffc6\"" Feb 9 10:00:00.846306 env[1433]: time="2024-02-09T10:00:00.846280281Z" level=info msg="StartContainer for \"4b318bea38e9125ec3b12719f15e8175e5db8ef7360c6e636406ccca0156ffc6\"" Feb 9 10:00:00.899603 env[1433]: time="2024-02-09T10:00:00.899508746Z" level=info msg="StartContainer for \"4b318bea38e9125ec3b12719f15e8175e5db8ef7360c6e636406ccca0156ffc6\" returns successfully" Feb 9 10:00:00.957377 env[1433]: time="2024-02-09T10:00:00.957334333Z" level=info msg="shim disconnected" id=4b318bea38e9125ec3b12719f15e8175e5db8ef7360c6e636406ccca0156ffc6 Feb 9 10:00:00.957745 env[1433]: time="2024-02-09T10:00:00.957711157Z" level=warning msg="cleaning up after shim disconnected" id=4b318bea38e9125ec3b12719f15e8175e5db8ef7360c6e636406ccca0156ffc6 namespace=k8s.io Feb 9 10:00:00.957828 env[1433]: time="2024-02-09T10:00:00.957812793Z" level=info msg="cleaning up dead shim" Feb 9 10:00:00.965951 env[1433]: time="2024-02-09T10:00:00.965911324Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4807 runtime=io.containerd.runc.v2\n" Feb 9 10:00:01.272172 env[1433]: time="2024-02-09T10:00:01.272131805Z" level=info msg="CreateContainer within sandbox \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 10:00:01.331544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205132281.mount: Deactivated successfully. 
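The systemd mount units reported as deactivated above (e.g. var-lib-containerd-tmpmounts-containerd\x2dmount3205132281.mount, and earlier the var-lib-kubelet-pods-... units) encode the filesystem paths they guard with systemd's unit-name escaping: "-" stands in for "/", and literal characters are written as \xNN escapes, so \x2d is "-" and \x7e is "~". A minimal Python sketch of that decoding, assuming only the documented escape rules (the helper name unescape_unit is illustrative, not a systemd API):

# Decode a systemd mount unit name back into the path it guards.
# A sketch of systemd's documented escaping rules, not a call into systemd.
import re

def unescape_unit(unit: str) -> str:
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")  # "-" separates path components
    # \xNN sequences restore escaped bytes such as "-" (\x2d) and "~" (\x7e)
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

print(unescape_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount3205132281.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount3205132281

Applied to the kubelet units above, the same decoding recovers paths such as /var/lib/kubelet/pods/4071add9-2959-42fe-8fb4-be5a647f26b6/volumes/kubernetes.io~projected/hubble-tls.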
Feb 9 10:00:01.336822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount544788546.mount: Deactivated successfully. Feb 9 10:00:01.381706 env[1433]: time="2024-02-09T10:00:01.381638414Z" level=info msg="CreateContainer within sandbox \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f5fc0e6db4dd0ca20aaa6d80dc06d33bc67b175903d56b1f969eb8c811e45151\"" Feb 9 10:00:01.383984 env[1433]: time="2024-02-09T10:00:01.383948276Z" level=info msg="StartContainer for \"f5fc0e6db4dd0ca20aaa6d80dc06d33bc67b175903d56b1f969eb8c811e45151\"" Feb 9 10:00:01.441000 env[1433]: time="2024-02-09T10:00:01.440955644Z" level=info msg="StartContainer for \"f5fc0e6db4dd0ca20aaa6d80dc06d33bc67b175903d56b1f969eb8c811e45151\" returns successfully" Feb 9 10:00:01.492867 env[1433]: time="2024-02-09T10:00:01.492816432Z" level=info msg="shim disconnected" id=f5fc0e6db4dd0ca20aaa6d80dc06d33bc67b175903d56b1f969eb8c811e45151 Feb 9 10:00:01.493207 env[1433]: time="2024-02-09T10:00:01.493187816Z" level=warning msg="cleaning up after shim disconnected" id=f5fc0e6db4dd0ca20aaa6d80dc06d33bc67b175903d56b1f969eb8c811e45151 namespace=k8s.io Feb 9 10:00:01.493356 env[1433]: time="2024-02-09T10:00:01.493340729Z" level=info msg="cleaning up dead shim" Feb 9 10:00:01.501441 env[1433]: time="2024-02-09T10:00:01.501400945Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4869 runtime=io.containerd.runc.v2\n" Feb 9 10:00:01.838261 kubelet[2579]: E0209 10:00:01.838225 2579 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-hkxn8" podUID=517e3b1a-a8db-454a-b71e-6f294c79dbef Feb 9 10:00:02.271292 env[1433]: time="2024-02-09T10:00:02.271252228Z" level=info msg="CreateContainer within sandbox \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 10:00:02.402497 env[1433]: time="2024-02-09T10:00:02.402444891Z" level=info msg="CreateContainer within sandbox \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c850c9c00885045545fa9185d9853aaccd107bc4da517f0caf75f1bb47891269\"" Feb 9 10:00:02.403366 env[1433]: time="2024-02-09T10:00:02.403341613Z" level=info msg="StartContainer for \"c850c9c00885045545fa9185d9853aaccd107bc4da517f0caf75f1bb47891269\"" Feb 9 10:00:02.470860 env[1433]: time="2024-02-09T10:00:02.470816765Z" level=info msg="StartContainer for \"c850c9c00885045545fa9185d9853aaccd107bc4da517f0caf75f1bb47891269\" returns successfully" Feb 9 10:00:02.517481 env[1433]: time="2024-02-09T10:00:02.517429078Z" level=info msg="shim disconnected" id=c850c9c00885045545fa9185d9853aaccd107bc4da517f0caf75f1bb47891269 Feb 9 10:00:02.517481 env[1433]: time="2024-02-09T10:00:02.517474236Z" level=warning msg="cleaning up after shim disconnected" id=c850c9c00885045545fa9185d9853aaccd107bc4da517f0caf75f1bb47891269 namespace=k8s.io Feb 9 10:00:02.517481 env[1433]: time="2024-02-09T10:00:02.517483156Z" level=info msg="cleaning up dead shim" Feb 9 10:00:02.531926 env[1433]: time="2024-02-09T10:00:02.531808671Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:02Z\" level=info 
msg=\"starting signal loop\" namespace=k8s.io pid=4930 runtime=io.containerd.runc.v2\n" Feb 9 10:00:02.971187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c850c9c00885045545fa9185d9853aaccd107bc4da517f0caf75f1bb47891269-rootfs.mount: Deactivated successfully. Feb 9 10:00:03.281136 env[1433]: time="2024-02-09T10:00:03.276523522Z" level=info msg="CreateContainer within sandbox \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 10:00:03.342123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597668753.mount: Deactivated successfully. Feb 9 10:00:03.348413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount40645058.mount: Deactivated successfully. Feb 9 10:00:03.386223 env[1433]: time="2024-02-09T10:00:03.386169983Z" level=info msg="CreateContainer within sandbox \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"35e260f94ff9adfb679b5e65677aa2fbe30db938f606c00fde6c9a204c2f3bfd\"" Feb 9 10:00:03.386818 env[1433]: time="2024-02-09T10:00:03.386790597Z" level=info msg="StartContainer for \"35e260f94ff9adfb679b5e65677aa2fbe30db938f606c00fde6c9a204c2f3bfd\"" Feb 9 10:00:03.448524 env[1433]: time="2024-02-09T10:00:03.448470901Z" level=info msg="StartContainer for \"35e260f94ff9adfb679b5e65677aa2fbe30db938f606c00fde6c9a204c2f3bfd\" returns successfully" Feb 9 10:00:03.458888 kubelet[2579]: I0209 10:00:03.458850 2579 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-37d4719b0b" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 10:00:03.45878903 +0000 UTC m=+214.775590863 LastTransitionTime:2024-02-09 10:00:03.45878903 +0000 UTC m=+214.775590863 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 10:00:03.525388 env[1433]: time="2024-02-09T10:00:03.525342051Z" level=info msg="shim disconnected" id=35e260f94ff9adfb679b5e65677aa2fbe30db938f606c00fde6c9a204c2f3bfd Feb 9 10:00:03.525646 env[1433]: time="2024-02-09T10:00:03.525628199Z" level=warning msg="cleaning up after shim disconnected" id=35e260f94ff9adfb679b5e65677aa2fbe30db938f606c00fde6c9a204c2f3bfd namespace=k8s.io Feb 9 10:00:03.525728 env[1433]: time="2024-02-09T10:00:03.525713315Z" level=info msg="cleaning up dead shim" Feb 9 10:00:03.532621 env[1433]: time="2024-02-09T10:00:03.532510231Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4987 runtime=io.containerd.runc.v2\n" Feb 9 10:00:03.838382 kubelet[2579]: E0209 10:00:03.838271 2579 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-hkxn8" podUID=517e3b1a-a8db-454a-b71e-6f294c79dbef Feb 9 10:00:03.988970 kubelet[2579]: E0209 10:00:03.988940 2579 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 10:00:04.278185 env[1433]: time="2024-02-09T10:00:04.278145053Z" level=info msg="CreateContainer within sandbox \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\" for 
container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 10:00:04.357280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2241533534.mount: Deactivated successfully. Feb 9 10:00:04.364077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419269813.mount: Deactivated successfully. Feb 9 10:00:04.401196 env[1433]: time="2024-02-09T10:00:04.401144810Z" level=info msg="CreateContainer within sandbox \"02b7e021639451c6fe8296632fb4674fed7dc32ded365441cc003ef8d54a0991\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c144f7877c781f2bea4b7d5826f193f9577fab44b4ddcbcecb9c38de315173c3\"" Feb 9 10:00:04.403511 env[1433]: time="2024-02-09T10:00:04.403481754Z" level=info msg="StartContainer for \"c144f7877c781f2bea4b7d5826f193f9577fab44b4ddcbcecb9c38de315173c3\"" Feb 9 10:00:04.474187 env[1433]: time="2024-02-09T10:00:04.473352026Z" level=info msg="StartContainer for \"c144f7877c781f2bea4b7d5826f193f9577fab44b4ddcbcecb9c38de315173c3\" returns successfully" Feb 9 10:00:04.915355 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 10:00:05.838281 kubelet[2579]: E0209 10:00:05.838239 2579 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-hkxn8" podUID=517e3b1a-a8db-454a-b71e-6f294c79dbef Feb 9 10:00:06.969305 systemd[1]: run-containerd-runc-k8s.io-c144f7877c781f2bea4b7d5826f193f9577fab44b4ddcbcecb9c38de315173c3-runc.t9U1Vk.mount: Deactivated successfully. Feb 9 10:00:07.492725 systemd-networkd[1604]: lxc_health: Link UP Feb 9 10:00:07.514500 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 10:00:07.514277 systemd-networkd[1604]: lxc_health: Gained carrier Feb 9 10:00:07.839014 kubelet[2579]: E0209 10:00:07.838560 2579 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-hkxn8" podUID=517e3b1a-a8db-454a-b71e-6f294c79dbef Feb 9 10:00:08.622645 kubelet[2579]: I0209 10:00:08.622610 2579 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mc2vb" podStartSLOduration=8.622572872 pod.CreationTimestamp="2024-02-09 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:05.294558015 +0000 UTC m=+216.611359848" watchObservedRunningTime="2024-02-09 10:00:08.622572872 +0000 UTC m=+219.939374705" Feb 9 10:00:08.978166 systemd-networkd[1604]: lxc_health: Gained IPv6LL Feb 9 10:00:09.166782 systemd[1]: run-containerd-runc-k8s.io-c144f7877c781f2bea4b7d5826f193f9577fab44b4ddcbcecb9c38de315173c3-runc.5UKmN7.mount: Deactivated successfully. Feb 9 10:00:13.488098 systemd[1]: run-containerd-runc-k8s.io-c144f7877c781f2bea4b7d5826f193f9577fab44b4ddcbcecb9c38de315173c3-runc.AYFmvP.mount: Deactivated successfully. Feb 9 10:00:13.615402 sshd[4605]: pam_unix(sshd:session): session closed for user core Feb 9 10:00:13.618489 systemd[1]: sshd@27-10.200.20.37:22-10.200.12.6:52314.service: Deactivated successfully. Feb 9 10:00:13.620012 systemd[1]: session-30.scope: Deactivated successfully. Feb 9 10:00:13.620661 systemd-logind[1417]: Session 30 logged out. 
Waiting for processes to exit. Feb 9 10:00:13.621534 systemd-logind[1417]: Removed session 30. Feb 9 10:00:17.820238 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 [the identical hv_storvsc tag#158 message repeats continuously from Feb 9 10:00:17.820238 through Feb 9 10:00:20.549855]
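The hv_storvsc burst above is the Hyper-V storage channel failing repeated SCSI writes on the virtual disk: cmd 0x2a is SCSI WRITE(10), and scsi 0x2 is CHECK CONDITION. Reading srb 0x4 as SRB_STATUS_ERROR and hv 0xc0000001 as an NTSTATUS-style failure from the host is an assumption based on common Windows storage conventions, not something the log itself states. A small Python sketch for aggregating such a burst when triaging (summarise and the sample text are illustrative):

# Parse repeated hv_storvsc kernel lines and summarise them per
# device/tag/command, so a long burst collapses to one counted row.
import re
from collections import Counter

LINE = re.compile(
    r"hv_storvsc (?P<dev>[0-9a-f-]+): tag#(?P<tag>\d+) cmd (?P<cmd>0x[0-9a-f]+) "
    r"status: scsi (?P<scsi>0x[0-9a-f]+) srb (?P<srb>0x[0-9a-f]+) hv (?P<hv>0x[0-9a-f]+)"
)

def summarise(log_text: str) -> Counter:
    counts = Counter()
    for m in LINE.finditer(log_text):
        counts[(m["dev"], m["tag"], m["cmd"], m["scsi"], m["srb"], m["hv"])] += 1
    return counts

sample = ("kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 "
          "cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001\n") * 3
for key, n in summarise(sample).items():
    print(n, key)  # cmd 0x2a is SCSI WRITE(10), so these are failed writes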