Jan 23 23:52:10.180705 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 23:52:10.180727 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:52:10.180735 kernel: KASLR enabled
Jan 23 23:52:10.180741 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 23 23:52:10.180749 kernel: printk: bootconsole [pl11] enabled
Jan 23 23:52:10.180754 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:52:10.180762 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 23 23:52:10.180768 kernel: random: crng init done
Jan 23 23:52:10.180774 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:52:10.180780 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 23 23:52:10.180786 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:52:10.180792 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:52:10.180800 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 23:52:10.180806 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:52:10.180814 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:52:10.180820 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:52:10.180827 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:52:10.180835 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:52:10.180841 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:52:10.180848 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 23 23:52:10.180854 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:52:10.180861 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 23 23:52:10.180867 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 23 23:52:10.180874 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 23 23:52:10.180880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 23 23:52:10.180887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 23 23:52:10.180893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 23 23:52:10.180900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 23 23:52:10.180908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 23 23:52:10.180915 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 23 23:52:10.180921 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 23 23:52:10.180928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 23 23:52:10.180934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 23 23:52:10.180941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 23 23:52:10.180947 kernel: NUMA: NODE_DATA [mem 0x1bf7f0800-0x1bf7f5fff]
Jan 23 23:52:10.180953 kernel: Zone ranges:
Jan 23 23:52:10.180960 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Jan 23 23:52:10.180966 kernel:   DMA32    empty
Jan 23 23:52:10.180973 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 23:52:10.180979 kernel: Movable zone start for each node
Jan 23 23:52:10.180990 kernel: Early memory node ranges
Jan 23 23:52:10.180997 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 23 23:52:10.181004 kernel:   node   0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 23 23:52:10.181011 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 23 23:52:10.181017 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 23 23:52:10.181026 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 23 23:52:10.183054 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 23 23:52:10.183068 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 23:52:10.183075 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 23 23:52:10.183083 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 23 23:52:10.183089 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:52:10.183097 kernel: psci: PSCIv1.1 detected in firmware.
Jan 23 23:52:10.183104 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:52:10.183110 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 23 23:52:10.183118 kernel: psci: SMC Calling Convention v1.4
Jan 23 23:52:10.183125 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 23 23:52:10.183132 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 23 23:52:10.183144 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:52:10.183151 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:52:10.183158 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:52:10.183165 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:52:10.183172 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:52:10.183179 kernel: CPU features: detected: Hardware dirty bit management
Jan 23 23:52:10.183186 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:52:10.183193 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 23:52:10.183200 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 23:52:10.183206 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 23:52:10.183213 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 23 23:52:10.183222 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 23:52:10.183229 kernel: alternatives: applying boot alternatives
Jan 23 23:52:10.183237 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:52:10.183244 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:52:10.183251 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:52:10.183258 kernel: Fallback order for Node 0: 0
Jan 23 23:52:10.183265 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1032156
Jan 23 23:52:10.183272 kernel: Policy zone: Normal
Jan 23 23:52:10.183279 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:52:10.183286 kernel: software IO TLB: area num 2.
Jan 23 23:52:10.183293 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 23 23:52:10.183302 kernel: Memory: 3982640K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211520K reserved, 0K cma-reserved)
Jan 23 23:52:10.183309 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:52:10.183316 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:52:10.183324 kernel: rcu:     RCU event tracing is enabled.
Jan 23 23:52:10.183331 kernel: rcu:     RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:52:10.183338 kernel:  Trampoline variant of Tasks RCU enabled.
Jan 23 23:52:10.183345 kernel:  Tracing variant of Tasks RCU enabled.
Jan 23 23:52:10.183352 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:52:10.183359 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:52:10.183366 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:52:10.183373 kernel: GICv3: 960 SPIs implemented
Jan 23 23:52:10.183381 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:52:10.183388 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:52:10.183395 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 23 23:52:10.183401 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 23 23:52:10.183408 kernel: ITS: No ITS available, not enabling LPIs
Jan 23 23:52:10.183415 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:52:10.183422 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:52:10.183429 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 23:52:10.183436 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 23:52:10.183443 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 23:52:10.183451 kernel: Console: colour dummy device 80x25
Jan 23 23:52:10.183459 kernel: printk: console [tty1] enabled
Jan 23 23:52:10.183466 kernel: ACPI: Core revision 20230628
Jan 23 23:52:10.183474 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 23:52:10.183481 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:52:10.183488 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:52:10.183495 kernel: landlock: Up and running.
Jan 23 23:52:10.183502 kernel: SELinux:  Initializing.
Jan 23 23:52:10.183509 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:52:10.183516 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:52:10.183526 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:52:10.183533 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:52:10.183540 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 23 23:52:10.183547 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 23 23:52:10.183554 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 23:52:10.183561 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:52:10.183568 kernel: rcu:     Max phase no-delay instances is 400.
Jan 23 23:52:10.183576 kernel: Remapping and enabling EFI services.
Jan 23 23:52:10.183589 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:52:10.183596 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:52:10.183604 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 23 23:52:10.183611 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:52:10.183620 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 23:52:10.183627 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:52:10.183635 kernel: SMP: Total of 2 processors activated.
Jan 23 23:52:10.183642 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:52:10.183650 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 23 23:52:10.183659 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 23:52:10.183666 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:52:10.183674 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 23:52:10.183681 kernel: CPU features: detected: LSE atomic instructions
Jan 23 23:52:10.183689 kernel: CPU features: detected: Privileged Access Never
Jan 23 23:52:10.183696 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:52:10.183704 kernel: alternatives: applying system-wide alternatives
Jan 23 23:52:10.183711 kernel: devtmpfs: initialized
Jan 23 23:52:10.183719 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:52:10.183727 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:52:10.183735 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:52:10.183742 kernel: SMBIOS 3.1.0 present.
Jan 23 23:52:10.183750 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 23 23:52:10.183757 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:52:10.183765 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:52:10.183772 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:52:10.183780 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:52:10.183787 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:52:10.183796 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 23 23:52:10.183804 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:52:10.183811 kernel: cpuidle: using governor menu Jan 23 23:52:10.183819 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 23 23:52:10.183826 kernel: ASID allocator initialised with 32768 entries Jan 23 23:52:10.183834 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:52:10.183842 kernel: Serial: AMBA PL011 UART driver Jan 23 23:52:10.183849 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 23:52:10.183857 kernel: Modules: 0 pages in range for non-PLT usage Jan 23 23:52:10.183866 kernel: Modules: 509008 pages in range for PLT usage Jan 23 23:52:10.183873 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:52:10.183881 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:52:10.183888 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:52:10.183896 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:52:10.183903 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:52:10.183910 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:52:10.183918 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:52:10.183925 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:52:10.183934 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:52:10.183942 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:52:10.183950 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:52:10.183957 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:52:10.183964 kernel: ACPI: Interpreter enabled Jan 23 23:52:10.183976 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:52:10.183983 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 23 23:52:10.183991 kernel: printk: console [ttyAMA0] enabled Jan 23 23:52:10.183998 kernel: printk: bootconsole [pl11] disabled Jan 23 23:52:10.184007 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 23 23:52:10.184015 kernel: iommu: Default domain type: Translated Jan 23 23:52:10.184022 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:52:10.186075 kernel: efivars: Registered efivars operations Jan 23 23:52:10.186097 kernel: vgaarb: loaded Jan 23 23:52:10.186105 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:52:10.186113 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:52:10.186120 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:52:10.186128 kernel: pnp: PnP ACPI init Jan 23 23:52:10.186142 kernel: pnp: PnP ACPI: found 0 devices Jan 23 23:52:10.186150 kernel: NET: Registered PF_INET protocol family Jan 23 23:52:10.186157 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:52:10.186165 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:52:10.186173 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:52:10.186180 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Jan 23 23:52:10.186187 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:52:10.186195 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:52:10.186202 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:52:10.186211 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:52:10.186219 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 23:52:10.186226 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:52:10.186234 kernel: kvm [1]: HYP mode not available Jan 23 23:52:10.186241 kernel: Initialise system trusted keyrings Jan 23 23:52:10.186249 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:52:10.186256 kernel: Key type asymmetric registered Jan 23 23:52:10.186263 kernel: Asymmetric key parser 'x509' registered Jan 23 23:52:10.186271 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 23:52:10.186280 kernel: io scheduler mq-deadline registered Jan 23 23:52:10.186287 kernel: io scheduler kyber registered Jan 23 23:52:10.186295 kernel: io scheduler bfq registered Jan 23 23:52:10.186302 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:52:10.186309 kernel: thunder_xcv, ver 1.0 Jan 23 23:52:10.186317 kernel: thunder_bgx, ver 1.0 Jan 23 23:52:10.186324 kernel: nicpf, ver 1.0 Jan 23 23:52:10.186332 kernel: nicvf, ver 1.0 Jan 23 23:52:10.186489 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:52:10.186564 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:52:09 UTC (1769212329) Jan 23 23:52:10.186574 kernel: efifb: probing for efifb Jan 23 23:52:10.186582 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 23 23:52:10.186590 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 23:52:10.186597 kernel: efifb: scrolling: redraw Jan 23 23:52:10.186605 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Jan 23 23:52:10.186613 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:52:10.186620 kernel: fb0: EFI VGA frame buffer device Jan 23 23:52:10.186630 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 23 23:52:10.186637 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 23:52:10.186645 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 23 23:52:10.186652 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 23 23:52:10.186660 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:52:10.186667 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:52:10.186675 kernel: Segment Routing with IPv6 Jan 23 23:52:10.186682 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:52:10.186690 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:52:10.186699 kernel: Key type dns_resolver registered Jan 23 23:52:10.186706 kernel: registered taskstats version 1 Jan 23 23:52:10.186714 kernel: Loading compiled-in X.509 certificates Jan 23 23:52:10.186721 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445' Jan 23 23:52:10.186728 kernel: Key type .fscrypt registered Jan 23 23:52:10.186736 kernel: Key type fscrypt-provisioning registered Jan 23 23:52:10.186743 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 23 23:52:10.186750 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:52:10.186758 kernel: ima: No architecture policies found
Jan 23 23:52:10.186767 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:52:10.186774 kernel: clk: Disabling unused clocks
Jan 23 23:52:10.186781 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:52:10.186789 kernel: Run /init as init process
Jan 23 23:52:10.186796 kernel:   with arguments:
Jan 23 23:52:10.186803 kernel:     /init
Jan 23 23:52:10.186810 kernel:   with environment:
Jan 23 23:52:10.186817 kernel:     HOME=/
Jan 23 23:52:10.186825 kernel:     TERM=linux
Jan 23 23:52:10.186834 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:52:10.186846 systemd[1]: Detected virtualization microsoft.
Jan 23 23:52:10.186854 systemd[1]: Detected architecture arm64.
Jan 23 23:52:10.186862 systemd[1]: Running in initrd.
Jan 23 23:52:10.186869 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:52:10.186877 systemd[1]: Hostname set to .
Jan 23 23:52:10.186885 systemd[1]: Initializing machine ID from random generator.
Jan 23 23:52:10.186895 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:52:10.186903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:52:10.186911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:52:10.186920 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:52:10.186928 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:52:10.186937 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:52:10.186945 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:52:10.186955 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:52:10.186964 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:52:10.186972 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:52:10.186981 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:52:10.186989 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:52:10.186997 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:52:10.187005 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:52:10.187013 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:52:10.187021 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:52:10.187044 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:52:10.187055 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:52:10.187063 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:52:10.187072 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:52:10.187080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:52:10.187088 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:52:10.187096 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:52:10.187104 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:52:10.187116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:52:10.187124 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:52:10.187132 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:52:10.187140 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:52:10.187148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:52:10.187175 systemd-journald[217]: Collecting audit messages is disabled.
Jan 23 23:52:10.187197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:52:10.187205 systemd-journald[217]: Journal started
Jan 23 23:52:10.187224 systemd-journald[217]: Runtime Journal (/run/log/journal/2feb3de5867b423b8e8616870a9581f3) is 8.0M, max 78.5M, 70.5M free.
Jan 23 23:52:10.190281 systemd-modules-load[218]: Inserted module 'overlay'
Jan 23 23:52:10.211299 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:52:10.211325 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:52:10.218089 kernel: Bridge firewalling registered
Jan 23 23:52:10.215801 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 23 23:52:10.218970 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:52:10.228775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:52:10.244237 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:52:10.248052 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:52:10.256240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:52:10.277391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:52:10.289404 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:52:10.300193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:52:10.322738 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:52:10.337063 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:52:10.350077 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:52:10.355011 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:52:10.365110 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:52:10.386290 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:52:10.392200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:52:10.408696 dracut-cmdline[250]: dracut-dracut-053
Jan 23 23:52:10.414911 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:52:10.413233 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:52:10.459760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:52:10.482516 systemd-resolved[255]: Positive Trust Anchors:
Jan 23 23:52:10.485908 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:52:10.485944 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:52:10.488263 systemd-resolved[255]: Defaulting to hostname 'linux'.
Jan 23 23:52:10.489120 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:52:10.500525 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:52:10.551045 kernel: SCSI subsystem initialized
Jan 23 23:52:10.558039 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:52:10.567042 kernel: iscsi: registered transport (tcp)
Jan 23 23:52:10.583739 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:52:10.583813 kernel: QLogic iSCSI HBA Driver
Jan 23 23:52:10.618081 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:52:10.628253 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:52:10.660751 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:52:10.660810 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:52:10.666028 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:52:10.712047 kernel: raid6: neonx8   gen() 15820 MB/s
Jan 23 23:52:10.733045 kernel: raid6: neonx4   gen() 15687 MB/s
Jan 23 23:52:10.752039 kernel: raid6: neonx2   gen() 13327 MB/s
Jan 23 23:52:10.771036 kernel: raid6: neonx1   gen() 10486 MB/s
Jan 23 23:52:10.791036 kernel: raid6: int64x8  gen()  6979 MB/s
Jan 23 23:52:10.811038 kernel: raid6: int64x4  gen()  7353 MB/s
Jan 23 23:52:10.830035 kernel: raid6: int64x2  gen()  6149 MB/s
Jan 23 23:52:10.852750 kernel: raid6: int64x1  gen()  5071 MB/s
Jan 23 23:52:10.852772 kernel: raid6: using algorithm neonx8 gen() 15820 MB/s
Jan 23 23:52:10.875348 kernel: raid6: .... xor() 12050 MB/s, rmw enabled
Jan 23 23:52:10.875358 kernel: raid6: using neon recovery algorithm
Jan 23 23:52:10.885317 kernel: xor: measuring software checksum speed
Jan 23 23:52:10.885332 kernel: 8regs           : 19788 MB/sec
Jan 23 23:52:10.888620 kernel: 32regs          : 19664 MB/sec
Jan 23 23:52:10.891676 kernel: arm64_neon      : 27195 MB/sec
Jan 23 23:52:10.895259 kernel: xor: using function: arm64_neon (27195 MB/sec)
Jan 23 23:52:10.945045 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:52:10.954638 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:52:10.969160 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:52:10.989904 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Jan 23 23:52:10.994369 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:52:11.010154 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:52:11.030712 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Jan 23 23:52:11.057077 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:52:11.075262 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:52:11.112087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:52:11.126267 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:52:11.145048 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:52:11.156726 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:52:11.166114 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:52:11.179351 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:52:11.200359 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:52:11.222234 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:52:11.245045 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 23:52:11.246446 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:52:11.246606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:52:11.280271 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 23:52:11.280293 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 23:52:11.280303 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 23 23:52:11.280299 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:52:11.300135 kernel: hv_vmbus: registering driver hv_netvsc
Jan 23 23:52:11.289272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:52:11.331123 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 23 23:52:11.331146 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 23:52:11.331156 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 23:52:11.331296 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 23:52:11.289489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:52:11.352054 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 23:52:11.352075 kernel: scsi host0: storvsc_host_t
Jan 23 23:52:11.352105 kernel: scsi host1: storvsc_host_t
Jan 23 23:52:11.319885 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:52:11.362998 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Jan 23 23:52:11.362678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:52:11.379520 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 5
Jan 23 23:52:11.388796 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:52:11.399254 kernel: PTP clock support registered
Jan 23 23:52:11.405251 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:52:11.431501 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 23 23:52:11.431710 kernel: hv_netvsc 7ced8dd0-148c-7ced-8dd0-148c7ced8dd0 eth0: VF slot 1 added
Jan 23 23:52:11.431813 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 23:52:11.440843 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 23:52:11.445051 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 23 23:52:11.450062 kernel: hv_vmbus: registering driver hv_utils
Jan 23 23:52:11.462804 kernel: hv_vmbus: registering driver hv_pci
Jan 23 23:52:11.462851 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 23:52:11.457310 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:52:11.607129 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 23:52:11.615860 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 23:52:11.615876 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 23 23:52:11.616049 kernel: hv_pci fc1d493a-23c7-47ab-9ee2-92904f0d4c9a: PCI VMBus probing: Using version 0x10004
Jan 23 23:52:11.616148 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 23 23:52:11.616235 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#90 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:52:11.593731 systemd-resolved[255]: Clock change detected. Flushing caches.
Jan 23 23:52:11.634017 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 23:52:11.634211 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 23 23:52:11.634300 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 23 23:52:11.646818 kernel: hv_pci fc1d493a-23c7-47ab-9ee2-92904f0d4c9a: PCI host bridge to bus 23c7:00 Jan 23 23:52:11.647109 kernel: pci_bus 23c7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 23 23:52:11.647222 kernel: pci_bus 23c7:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 23:52:11.660320 kernel: pci 23c7:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 23 23:52:11.660386 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:52:11.660397 kernel: pci 23c7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:52:11.668835 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 23:52:11.669010 kernel: pci 23c7:00:02.0: enabling Extended Tags Jan 23 23:52:11.690867 kernel: pci 23c7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 23c7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 23 23:52:11.690950 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#99 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:52:11.702838 kernel: pci_bus 23c7:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 23:52:11.708376 kernel: pci 23c7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:52:11.748117 kernel: mlx5_core 23c7:00:02.0: enabling device (0000 -> 0002) Jan 23 23:52:11.754815 kernel: mlx5_core 23c7:00:02.0: firmware version: 16.30.5026 Jan 23 23:52:11.952269 kernel: hv_netvsc 7ced8dd0-148c-7ced-8dd0-148c7ced8dd0 eth0: VF registering: eth1 Jan 23 23:52:11.952448 kernel: mlx5_core 23c7:00:02.0 eth1: joined to eth0 Jan 23 23:52:11.959818 kernel: mlx5_core 23c7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 23 23:52:11.971833 kernel: mlx5_core 23c7:00:02.0 enP9159s1: renamed from eth1
Jan 23 23:52:12.138830 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (485) Jan 23 23:52:12.153316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 23:52:12.177980 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (483) Jan 23 23:52:12.187761 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 23 23:52:12.197682 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 23 23:52:12.203127 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 23 23:52:12.234043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 23:52:12.254873 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 23 23:52:12.269982 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:52:13.283759 disk-uuid[603]: The operation has completed successfully. Jan 23 23:52:13.290493 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:52:13.348958 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:52:13.353824 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 23:52:13.384942 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 23:52:13.394899 sh[716]: Success Jan 23 23:52:13.419828 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 23 23:52:13.692293 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:52:13.699939 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 23:52:13.705239 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:52:13.738408 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe Jan 23 23:52:13.738460 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:52:13.744015 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 23 23:52:13.748042 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:52:13.751420 kernel: BTRFS info (device dm-0): using free space tree Jan 23 23:52:14.057712 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 23:52:14.062562 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:52:14.077081 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 23:52:14.086939 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:52:14.114046 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:52:14.114108 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:52:14.117771 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:52:14.153858 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:52:14.162051 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 23 23:52:14.173286 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:52:14.182825 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:52:14.194996 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:52:14.213831 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:52:14.233002 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 23 23:52:14.255586 systemd-networkd[900]: lo: Link UP Jan 23 23:52:14.255597 systemd-networkd[900]: lo: Gained carrier Jan 23 23:52:14.257226 systemd-networkd[900]: Enumeration completed Jan 23 23:52:14.260060 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:52:14.260339 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:14.260342 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:52:14.264961 systemd[1]: Reached target network.target - Network. Jan 23 23:52:14.341817 kernel: mlx5_core 23c7:00:02.0 enP9159s1: Link up Jan 23 23:52:14.384286 kernel: hv_netvsc 7ced8dd0-148c-7ced-8dd0-148c7ced8dd0 eth0: Data path switched to VF: enP9159s1 Jan 23 23:52:14.384164 systemd-networkd[900]: enP9159s1: Link UP Jan 23 23:52:14.384243 systemd-networkd[900]: eth0: Link UP Jan 23 23:52:14.387523 systemd-networkd[900]: eth0: Gained carrier Jan 23 23:52:14.387534 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:14.403010 systemd-networkd[900]: enP9159s1: Gained carrier Jan 23 23:52:14.415853 systemd-networkd[900]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:52:15.067573 ignition[887]: Ignition 2.19.0 Jan 23 23:52:15.067584 ignition[887]: Stage: fetch-offline Jan 23 23:52:15.070620 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:52:15.067619 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:15.082019 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 23:52:15.067629 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:52:15.067728 ignition[887]: parsed url from cmdline: "" Jan 23 23:52:15.067731 ignition[887]: no config URL provided Jan 23 23:52:15.067735 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:52:15.067742 ignition[887]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:52:15.067746 ignition[887]: failed to fetch config: resource requires networking Jan 23 23:52:15.067926 ignition[887]: Ignition finished successfully Jan 23 23:52:15.109074 ignition[913]: Ignition 2.19.0 Jan 23 23:52:15.109080 ignition[913]: Stage: fetch Jan 23 23:52:15.109273 ignition[913]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:15.109282 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:52:15.109392 ignition[913]: parsed url from cmdline: "" Jan 23 23:52:15.109396 ignition[913]: no config URL provided Jan 23 23:52:15.109400 ignition[913]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:52:15.109407 ignition[913]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:52:15.109434 ignition[913]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 23:52:15.199147 ignition[913]: GET result: OK Jan 23 23:52:15.199209 ignition[913]: config has been read from IMDS userdata Jan 23 23:52:15.199249 ignition[913]: parsing config with SHA512: d2b8c948c3d99649763e86567b71db03d183e8ed76476c9a4dc5d149f040a1a7d5fdfaac0fef47dfbbb6ff441abe81ee8a7e5e5099feca255ee82b22516e9e71 Jan 23 23:52:15.203067 unknown[913]: fetched base config from "system" Jan 23 23:52:15.203480 ignition[913]: fetch: fetch complete Jan 23 23:52:15.203074 unknown[913]: fetched base config from "system" Jan 23 23:52:15.203484 ignition[913]: fetch: fetch passed Jan 23 23:52:15.203079 unknown[913]: fetched user config from "azure" Jan 23 23:52:15.203523 ignition[913]: Ignition finished successfully
Jan 23 23:52:15.209447 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:52:15.225983 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 23:52:15.246679 ignition[919]: Ignition 2.19.0 Jan 23 23:52:15.246688 ignition[919]: Stage: kargs Jan 23 23:52:15.250726 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 23:52:15.246874 ignition[919]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:15.246883 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:52:15.247817 ignition[919]: kargs: kargs passed Jan 23 23:52:15.247863 ignition[919]: Ignition finished successfully Jan 23 23:52:15.271936 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 23:52:15.287677 ignition[925]: Ignition 2.19.0 Jan 23 23:52:15.287685 ignition[925]: Stage: disks Jan 23 23:52:15.287862 ignition[925]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:15.293517 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 23:52:15.287870 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:52:15.301915 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 23:52:15.288785 ignition[925]: disks: disks passed Jan 23 23:52:15.310398 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:52:15.288875 ignition[925]: Ignition finished successfully Jan 23 23:52:15.320442 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:52:15.329172 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:52:15.336650 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:52:15.359964 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:52:15.435837 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 23 23:52:15.446828 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 23:52:15.464045 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 23:52:15.519822 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none. Jan 23 23:52:15.519901 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 23:52:15.524052 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 23:52:15.563896 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:52:15.584820 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944) Jan 23 23:52:15.599415 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:52:15.599465 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:52:15.599476 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:52:15.602936 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 23:52:15.611974 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 23:52:15.629180 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:52:15.624118 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 23:52:15.624153 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:52:15.635259 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 23:52:15.643182 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 23:52:15.665093 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 23 23:52:15.853916 systemd-networkd[900]: eth0: Gained IPv6LL Jan 23 23:52:16.209080 coreos-metadata[961]: Jan 23 23:52:16.209 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 23:52:16.217833 coreos-metadata[961]: Jan 23 23:52:16.217 INFO Fetch successful Jan 23 23:52:16.221977 coreos-metadata[961]: Jan 23 23:52:16.217 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 23:52:16.231299 coreos-metadata[961]: Jan 23 23:52:16.231 INFO Fetch successful Jan 23 23:52:16.245902 coreos-metadata[961]: Jan 23 23:52:16.245 INFO wrote hostname ci-4081.3.6-n-2167bbe937 to /sysroot/etc/hostname Jan 23 23:52:16.253395 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:52:16.327589 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 23:52:16.348726 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory Jan 23 23:52:16.370891 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 23:52:16.377983 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 23:52:17.358669 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 23:52:17.370895 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 23:52:17.379952 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 23:52:17.399447 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:52:17.397541 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 23 23:52:17.422518 ignition[1061]: INFO : Ignition 2.19.0 Jan 23 23:52:17.422518 ignition[1061]: INFO : Stage: mount Jan 23 23:52:17.430867 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:17.430867 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:52:17.430867 ignition[1061]: INFO : mount: mount passed Jan 23 23:52:17.430867 ignition[1061]: INFO : Ignition finished successfully Jan 23 23:52:17.429877 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 23:52:17.451929 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 23:52:17.461822 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 23:52:17.480144 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:52:17.501820 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1073) Jan 23 23:52:17.512892 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:52:17.512932 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:52:17.516310 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:52:17.522808 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:52:17.525259 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 23:52:17.549437 ignition[1090]: INFO : Ignition 2.19.0 Jan 23 23:52:17.549437 ignition[1090]: INFO : Stage: files Jan 23 23:52:17.556620 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:17.556620 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:52:17.556620 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping Jan 23 23:52:17.579024 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 23:52:17.585323 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 23:52:17.668312 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 23:52:17.675132 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 23:52:17.675132 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 23:52:17.668683 unknown[1090]: wrote ssh authorized keys file for user: core Jan 23 23:52:17.691294 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 23 23:52:17.691294 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 23 23:52:17.691294 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 23:52:17.691294 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 23 23:52:17.740571 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 23:52:17.871403 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" 
Jan 23 23:52:17.880223 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 23:52:17.880223 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 23 23:52:18.066220 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 23:52:18.766424 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 23 23:52:19.031201 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:52:19.031201 ignition[1090]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:52:19.047127 ignition[1090]: INFO : files: files passed Jan 23 23:52:19.047127 ignition[1090]: INFO : Ignition finished successfully Jan 23 23:52:19.047032 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:52:19.076689 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 23:52:19.083975 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:52:19.109768 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:52:19.195177 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:52:19.195177 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:52:19.109868 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:52:19.227391 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:52:19.115472 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:52:19.126255 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:52:19.152057 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:52:19.193297 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:52:19.193420 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:52:19.200734 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:52:19.211464 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:52:19.222706 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:52:19.240064 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:52:19.263531 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:52:19.289991 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:52:19.310624 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:52:19.317217 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:52:19.327256 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:52:19.336183 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:52:19.336353 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:52:19.349445 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:52:19.358797 systemd[1]: Stopped target basic.target - Basic System. 
Jan 23 23:52:19.367044 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:52:19.375341 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:52:19.385076 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:52:19.394776 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:52:19.403747 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:52:19.413394 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:52:19.422892 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:52:19.431481 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:52:19.438888 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:52:19.439061 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:52:19.450795 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:52:19.460049 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:52:19.469541 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:52:19.469654 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:52:19.480068 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:52:19.480246 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:52:19.494534 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:52:19.494696 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:52:19.503945 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:52:19.504088 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:52:19.512698 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jan 23 23:52:19.512872 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:52:19.538593 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:52:19.545311 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:52:19.545543 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:52:19.576954 ignition[1142]: INFO : Ignition 2.19.0 Jan 23 23:52:19.576954 ignition[1142]: INFO : Stage: umount Jan 23 23:52:19.576954 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:19.576954 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:52:19.576954 ignition[1142]: INFO : umount: umount passed Jan 23 23:52:19.576954 ignition[1142]: INFO : Ignition finished successfully Jan 23 23:52:19.576326 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:52:19.585532 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:52:19.586143 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:52:19.600133 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:52:19.600254 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:52:19.607370 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:52:19.608027 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:52:19.608125 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:52:19.616674 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:52:19.616895 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:52:19.622154 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:52:19.622207 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:52:19.629497 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 23 23:52:19.629534 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:52:19.638337 systemd[1]: Stopped target network.target - Network. Jan 23 23:52:19.646742 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:52:19.646796 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:52:19.656379 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:52:19.665676 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:52:19.677890 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:52:19.683495 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:52:19.691359 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:52:19.700063 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:52:19.700145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:52:19.709562 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:52:19.709619 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:52:19.717717 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:52:19.717768 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:52:19.725744 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:52:19.725781 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:52:19.734140 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:52:19.748703 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:52:19.751841 systemd-networkd[900]: eth0: DHCPv6 lease lost Jan 23 23:52:19.757591 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:52:19.759755 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 23 23:52:19.766670 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:52:19.766749 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:52:19.777936 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:52:19.778695 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:52:19.785938 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:52:19.786018 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:52:19.800586 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:52:19.800645 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:52:19.971958 kernel: hv_netvsc 7ced8dd0-148c-7ced-8dd0-148c7ced8dd0 eth0: Data path switched from VF: enP9159s1 Jan 23 23:52:19.807650 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:52:19.807704 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:52:19.831015 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:52:19.838471 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:52:19.838540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:52:19.847627 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:52:19.847734 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:52:19.856234 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:52:19.856272 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:52:19.864891 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:52:19.864929 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 23 23:52:19.874738 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:52:19.912217 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:52:19.912372 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:52:19.926431 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:52:19.926478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:52:19.934519 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:52:19.934550 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:52:19.943042 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:52:19.943086 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:52:19.962699 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:52:19.962756 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:52:19.971821 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:52:19.971875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:52:19.995067 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:52:20.010896 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:52:20.010991 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:52:20.019469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:52:20.019524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:20.029121 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:52:20.029214 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jan 23 23:52:20.039185 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:52:20.039269 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:52:20.048346 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:52:20.073977 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:52:20.214429 systemd[1]: Switching root. Jan 23 23:52:20.240770 systemd-journald[217]: Journal stopped Jan 23 23:52:10.180705 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 23 23:52:10.180727 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026 Jan 23 23:52:10.180735 kernel: KASLR enabled Jan 23 23:52:10.180741 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 23 23:52:10.180749 kernel: printk: bootconsole [pl11] enabled Jan 23 23:52:10.180754 kernel: efi: EFI v2.7 by EDK II Jan 23 23:52:10.180762 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 23 23:52:10.180768 kernel: random: crng init done Jan 23 23:52:10.180774 kernel: ACPI: Early table checksum verification disabled Jan 23 23:52:10.180780 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 23 23:52:10.180786 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:10.180792 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:10.180800 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 23 23:52:10.180806 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:10.180814 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 
23:52:10.180820 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:10.180827 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:10.180835 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:10.180841 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:10.180848 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 23 23:52:10.180854 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 23:52:10.180861 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 23 23:52:10.180867 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 23 23:52:10.180874 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 23 23:52:10.180880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 23 23:52:10.180887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 23 23:52:10.180893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 23 23:52:10.180900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 23 23:52:10.180908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 23 23:52:10.180915 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 23 23:52:10.180921 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 23 23:52:10.180928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 23 23:52:10.180934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 23 23:52:10.180941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 23 23:52:10.180947 kernel: NUMA: NODE_DATA [mem 0x1bf7f0800-0x1bf7f5fff] Jan 23 23:52:10.180953 kernel: Zone ranges: Jan 23 23:52:10.180960 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Jan 23 23:52:10.180966 kernel: DMA32 empty Jan 23 23:52:10.180973 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 23 23:52:10.180979 kernel: Movable zone start for each node Jan 23 23:52:10.180990 kernel: Early memory node ranges Jan 23 23:52:10.180997 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 23 23:52:10.181004 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 23 23:52:10.181011 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 23 23:52:10.181017 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 23 23:52:10.181026 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 23 23:52:10.183054 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 23 23:52:10.183068 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 23 23:52:10.183075 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 23 23:52:10.183083 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 23 23:52:10.183089 kernel: psci: probing for conduit method from ACPI. Jan 23 23:52:10.183097 kernel: psci: PSCIv1.1 detected in firmware. Jan 23 23:52:10.183104 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 23:52:10.183110 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 23 23:52:10.183118 kernel: psci: SMC Calling Convention v1.4 Jan 23 23:52:10.183125 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 23 23:52:10.183132 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 23 23:52:10.183144 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 23 23:52:10.183151 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 23 23:52:10.183158 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 23:52:10.183165 kernel: Detected PIPT I-cache on CPU0 Jan 23 23:52:10.183172 kernel: CPU features: detected: GIC system register CPU interface Jan 23 23:52:10.183179 kernel: CPU features: detected: Hardware dirty bit management Jan 23 23:52:10.183186 kernel: CPU features: detected: Spectre-BHB Jan 23 23:52:10.183193 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 23 23:52:10.183200 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 23 23:52:10.183206 kernel: CPU features: detected: ARM erratum 1418040 Jan 23 23:52:10.183213 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 23 23:52:10.183222 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 23 23:52:10.183229 kernel: alternatives: applying boot alternatives Jan 23 23:52:10.183237 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:52:10.183244 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 23:52:10.183251 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 23:52:10.183258 kernel: Fallback order for Node 0: 0 Jan 23 
23:52:10.183265 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 23 23:52:10.183272 kernel: Policy zone: Normal Jan 23 23:52:10.183279 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 23:52:10.183286 kernel: software IO TLB: area num 2. Jan 23 23:52:10.183293 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 23 23:52:10.183302 kernel: Memory: 3982640K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211520K reserved, 0K cma-reserved) Jan 23 23:52:10.183309 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 23:52:10.183316 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 23:52:10.183324 kernel: rcu: RCU event tracing is enabled. Jan 23 23:52:10.183331 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 23:52:10.183338 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 23:52:10.183345 kernel: Tracing variant of Tasks RCU enabled. Jan 23 23:52:10.183352 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 23:52:10.183359 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 23:52:10.183366 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 23:52:10.183373 kernel: GICv3: 960 SPIs implemented Jan 23 23:52:10.183381 kernel: GICv3: 0 Extended SPIs implemented Jan 23 23:52:10.183388 kernel: Root IRQ handler: gic_handle_irq Jan 23 23:52:10.183395 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 23 23:52:10.183401 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 23 23:52:10.183408 kernel: ITS: No ITS available, not enabling LPIs Jan 23 23:52:10.183415 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 23:52:10.183422 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:52:10.183429 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). 
Jan 23 23:52:10.183436 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 23 23:52:10.183443 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 23 23:52:10.183451 kernel: Console: colour dummy device 80x25 Jan 23 23:52:10.183459 kernel: printk: console [tty1] enabled Jan 23 23:52:10.183466 kernel: ACPI: Core revision 20230628 Jan 23 23:52:10.183474 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 23 23:52:10.183481 kernel: pid_max: default: 32768 minimum: 301 Jan 23 23:52:10.183488 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 23 23:52:10.183495 kernel: landlock: Up and running. Jan 23 23:52:10.183502 kernel: SELinux: Initializing. Jan 23 23:52:10.183509 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:52:10.183516 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:52:10.183526 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:52:10.183533 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:52:10.183540 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 23 23:52:10.183547 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 23 23:52:10.183554 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 23 23:52:10.183561 kernel: rcu: Hierarchical SRCU implementation. Jan 23 23:52:10.183568 kernel: rcu: Max phase no-delay instances is 400. Jan 23 23:52:10.183576 kernel: Remapping and enabling EFI services. Jan 23 23:52:10.183589 kernel: smp: Bringing up secondary CPUs ... 
Jan 23 23:52:10.183596 kernel: Detected PIPT I-cache on CPU1 Jan 23 23:52:10.183604 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 23 23:52:10.183611 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:52:10.183620 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 23 23:52:10.183627 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 23:52:10.183635 kernel: SMP: Total of 2 processors activated. Jan 23 23:52:10.183642 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 23:52:10.183650 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 23 23:52:10.183659 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 23 23:52:10.183666 kernel: CPU features: detected: CRC32 instructions Jan 23 23:52:10.183674 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 23 23:52:10.183681 kernel: CPU features: detected: LSE atomic instructions Jan 23 23:52:10.183689 kernel: CPU features: detected: Privileged Access Never Jan 23 23:52:10.183696 kernel: CPU: All CPU(s) started at EL1 Jan 23 23:52:10.183704 kernel: alternatives: applying system-wide alternatives Jan 23 23:52:10.183711 kernel: devtmpfs: initialized Jan 23 23:52:10.183719 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 23:52:10.183727 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 23:52:10.183735 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 23:52:10.183742 kernel: SMBIOS 3.1.0 present. 
Jan 23 23:52:10.183750 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 23 23:52:10.183757 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:52:10.183765 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:52:10.183772 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:52:10.183780 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:52:10.183787 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:52:10.183796 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 23 23:52:10.183804 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:52:10.183811 kernel: cpuidle: using governor menu Jan 23 23:52:10.183819 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 23 23:52:10.183826 kernel: ASID allocator initialised with 32768 entries Jan 23 23:52:10.183834 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:52:10.183842 kernel: Serial: AMBA PL011 UART driver Jan 23 23:52:10.183849 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 23:52:10.183857 kernel: Modules: 0 pages in range for non-PLT usage Jan 23 23:52:10.183866 kernel: Modules: 509008 pages in range for PLT usage Jan 23 23:52:10.183873 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:52:10.183881 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:52:10.183888 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:52:10.183896 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:52:10.183903 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:52:10.183910 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:52:10.183918 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:52:10.183925 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:52:10.183934 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:52:10.183942 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:52:10.183950 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:52:10.183957 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:52:10.183964 kernel: ACPI: Interpreter enabled Jan 23 23:52:10.183976 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:52:10.183983 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 23 23:52:10.183991 kernel: printk: console [ttyAMA0] enabled Jan 23 23:52:10.183998 kernel: printk: bootconsole [pl11] disabled Jan 23 23:52:10.184007 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 23 23:52:10.184015 kernel: iommu: Default domain type: Translated Jan 23 23:52:10.184022 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:52:10.186075 kernel: efivars: Registered efivars operations Jan 23 23:52:10.186097 kernel: vgaarb: loaded Jan 23 23:52:10.186105 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:52:10.186113 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:52:10.186120 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:52:10.186128 kernel: pnp: PnP ACPI init Jan 23 23:52:10.186142 kernel: pnp: PnP ACPI: found 0 devices Jan 23 23:52:10.186150 kernel: NET: Registered PF_INET protocol family Jan 23 23:52:10.186157 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:52:10.186165 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:52:10.186173 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:52:10.186180 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Jan 23 23:52:10.186187 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:52:10.186195 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:52:10.186202 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:52:10.186211 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:52:10.186219 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 23:52:10.186226 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:52:10.186234 kernel: kvm [1]: HYP mode not available Jan 23 23:52:10.186241 kernel: Initialise system trusted keyrings Jan 23 23:52:10.186249 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:52:10.186256 kernel: Key type asymmetric registered Jan 23 23:52:10.186263 kernel: Asymmetric key parser 'x509' registered Jan 23 23:52:10.186271 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 23:52:10.186280 kernel: io scheduler mq-deadline registered Jan 23 23:52:10.186287 kernel: io scheduler kyber registered Jan 23 23:52:10.186295 kernel: io scheduler bfq registered Jan 23 23:52:10.186302 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:52:10.186309 kernel: thunder_xcv, ver 1.0 Jan 23 23:52:10.186317 kernel: thunder_bgx, ver 1.0 Jan 23 23:52:10.186324 kernel: nicpf, ver 1.0 Jan 23 23:52:10.186332 kernel: nicvf, ver 1.0 Jan 23 23:52:10.186489 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:52:10.186564 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:52:09 UTC (1769212329) Jan 23 23:52:10.186574 kernel: efifb: probing for efifb Jan 23 23:52:10.186582 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 23 23:52:10.186590 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 23:52:10.186597 kernel: efifb: scrolling: redraw Jan 23 23:52:10.186605 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Jan 23 23:52:10.186613 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:52:10.186620 kernel: fb0: EFI VGA frame buffer device Jan 23 23:52:10.186630 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 23 23:52:10.186637 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 23:52:10.186645 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 23 23:52:10.186652 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 23 23:52:10.186660 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:52:10.186667 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:52:10.186675 kernel: Segment Routing with IPv6 Jan 23 23:52:10.186682 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:52:10.186690 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:52:10.186699 kernel: Key type dns_resolver registered Jan 23 23:52:10.186706 kernel: registered taskstats version 1 Jan 23 23:52:10.186714 kernel: Loading compiled-in X.509 certificates Jan 23 23:52:10.186721 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445' Jan 23 23:52:10.186728 kernel: Key type .fscrypt registered Jan 23 23:52:10.186736 kernel: Key type fscrypt-provisioning registered Jan 23 23:52:10.186743 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 23 23:52:10.186750 kernel: ima: Allocated hash algorithm: sha1 Jan 23 23:52:10.186758 kernel: ima: No architecture policies found Jan 23 23:52:10.186767 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 23:52:10.186774 kernel: clk: Disabling unused clocks Jan 23 23:52:10.186781 kernel: Freeing unused kernel memory: 39424K Jan 23 23:52:10.186789 kernel: Run /init as init process Jan 23 23:52:10.186796 kernel: with arguments: Jan 23 23:52:10.186803 kernel: /init Jan 23 23:52:10.186810 kernel: with environment: Jan 23 23:52:10.186817 kernel: HOME=/ Jan 23 23:52:10.186825 kernel: TERM=linux Jan 23 23:52:10.186834 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:52:10.186846 systemd[1]: Detected virtualization microsoft. Jan 23 23:52:10.186854 systemd[1]: Detected architecture arm64. Jan 23 23:52:10.186862 systemd[1]: Running in initrd. Jan 23 23:52:10.186869 systemd[1]: No hostname configured, using default hostname. Jan 23 23:52:10.186877 systemd[1]: Hostname set to . Jan 23 23:52:10.186885 systemd[1]: Initializing machine ID from random generator. Jan 23 23:52:10.186895 systemd[1]: Queued start job for default target initrd.target. Jan 23 23:52:10.186903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:52:10.186911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:52:10.186920 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 23:52:10.186928 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 23 23:52:10.186937 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 23:52:10.186945 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 23:52:10.186955 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 23:52:10.186964 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 23:52:10.186972 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:52:10.186981 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:52:10.186989 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:52:10.186997 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:52:10.187005 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:52:10.187013 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:52:10.187021 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:52:10.187044 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:52:10.187055 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:52:10.187063 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:52:10.187072 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:52:10.187080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:52:10.187088 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:52:10.187096 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:52:10.187104 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jan 23 23:52:10.187116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:52:10.187124 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 23:52:10.187132 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 23:52:10.187140 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:52:10.187148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:52:10.187175 systemd-journald[217]: Collecting audit messages is disabled. Jan 23 23:52:10.187197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:52:10.187205 systemd-journald[217]: Journal started Jan 23 23:52:10.187224 systemd-journald[217]: Runtime Journal (/run/log/journal/2feb3de5867b423b8e8616870a9581f3) is 8.0M, max 78.5M, 70.5M free. Jan 23 23:52:10.190281 systemd-modules-load[218]: Inserted module 'overlay' Jan 23 23:52:10.211299 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:52:10.211325 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 23:52:10.218089 kernel: Bridge firewalling registered Jan 23 23:52:10.215801 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 23 23:52:10.218970 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 23:52:10.228775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:52:10.244237 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 23:52:10.248052 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:52:10.256240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:10.277391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 23 23:52:10.289404 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:52:10.300193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:52:10.322738 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:52:10.337063 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:52:10.350077 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:52:10.355011 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:52:10.365110 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:52:10.386290 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:52:10.392200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:52:10.408696 dracut-cmdline[250]: dracut-dracut-053
Jan 23 23:52:10.414911 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:52:10.413233 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:52:10.459760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:52:10.482516 systemd-resolved[255]: Positive Trust Anchors:
Jan 23 23:52:10.485908 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:52:10.485944 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:52:10.488263 systemd-resolved[255]: Defaulting to hostname 'linux'.
Jan 23 23:52:10.489120 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:52:10.500525 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:52:10.551045 kernel: SCSI subsystem initialized
Jan 23 23:52:10.558039 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:52:10.567042 kernel: iscsi: registered transport (tcp)
Jan 23 23:52:10.583739 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:52:10.583813 kernel: QLogic iSCSI HBA Driver
Jan 23 23:52:10.618081 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:52:10.628253 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:52:10.660751 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:52:10.660810 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:52:10.666028 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:52:10.712047 kernel: raid6: neonx8 gen() 15820 MB/s
Jan 23 23:52:10.733045 kernel: raid6: neonx4 gen() 15687 MB/s
Jan 23 23:52:10.752039 kernel: raid6: neonx2 gen() 13327 MB/s
Jan 23 23:52:10.771036 kernel: raid6: neonx1 gen() 10486 MB/s
Jan 23 23:52:10.791036 kernel: raid6: int64x8 gen() 6979 MB/s
Jan 23 23:52:10.811038 kernel: raid6: int64x4 gen() 7353 MB/s
Jan 23 23:52:10.830035 kernel: raid6: int64x2 gen() 6149 MB/s
Jan 23 23:52:10.852750 kernel: raid6: int64x1 gen() 5071 MB/s
Jan 23 23:52:10.852772 kernel: raid6: using algorithm neonx8 gen() 15820 MB/s
Jan 23 23:52:10.875348 kernel: raid6: .... xor() 12050 MB/s, rmw enabled
Jan 23 23:52:10.875358 kernel: raid6: using neon recovery algorithm
Jan 23 23:52:10.885317 kernel: xor: measuring software checksum speed
Jan 23 23:52:10.885332 kernel: 8regs : 19788 MB/sec
Jan 23 23:52:10.888620 kernel: 32regs : 19664 MB/sec
Jan 23 23:52:10.891676 kernel: arm64_neon : 27195 MB/sec
Jan 23 23:52:10.895259 kernel: xor: using function: arm64_neon (27195 MB/sec)
Jan 23 23:52:10.945045 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:52:10.954638 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:52:10.969160 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:52:10.989904 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Jan 23 23:52:10.994369 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:52:11.010154 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:52:11.030712 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Jan 23 23:52:11.057077 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:52:11.075262 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:52:11.112087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:52:11.126267 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:52:11.145048 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:52:11.156726 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:52:11.166114 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:52:11.179351 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:52:11.200359 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:52:11.222234 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:52:11.245045 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 23:52:11.246446 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:52:11.246606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:52:11.280271 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 23:52:11.280293 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 23:52:11.280303 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 23 23:52:11.280299 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:52:11.300135 kernel: hv_vmbus: registering driver hv_netvsc
Jan 23 23:52:11.289272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:52:11.331123 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 23 23:52:11.331146 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 23:52:11.331156 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 23:52:11.331296 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 23:52:11.289489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:52:11.352054 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 23:52:11.352075 kernel: scsi host0: storvsc_host_t
Jan 23 23:52:11.352105 kernel: scsi host1: storvsc_host_t
Jan 23 23:52:11.319885 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:52:11.362998 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 23 23:52:11.362678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:52:11.379520 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 23 23:52:11.388796 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:52:11.399254 kernel: PTP clock support registered
Jan 23 23:52:11.405251 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:52:11.431501 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 23 23:52:11.431710 kernel: hv_netvsc 7ced8dd0-148c-7ced-8dd0-148c7ced8dd0 eth0: VF slot 1 added
Jan 23 23:52:11.431813 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 23:52:11.440843 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 23:52:11.445051 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 23 23:52:11.450062 kernel: hv_vmbus: registering driver hv_utils
Jan 23 23:52:11.462804 kernel: hv_vmbus: registering driver hv_pci
Jan 23 23:52:11.462851 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 23:52:11.457310 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:52:11.607129 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 23:52:11.615860 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 23:52:11.615876 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 23 23:52:11.616049 kernel: hv_pci fc1d493a-23c7-47ab-9ee2-92904f0d4c9a: PCI VMBus probing: Using version 0x10004
Jan 23 23:52:11.616148 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 23 23:52:11.616235 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#90 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:52:11.593731 systemd-resolved[255]: Clock change detected. Flushing caches.
Jan 23 23:52:11.634017 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 23:52:11.634211 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 23 23:52:11.634300 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 23 23:52:11.646818 kernel: hv_pci fc1d493a-23c7-47ab-9ee2-92904f0d4c9a: PCI host bridge to bus 23c7:00
Jan 23 23:52:11.647109 kernel: pci_bus 23c7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 23 23:52:11.647222 kernel: pci_bus 23c7:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 23:52:11.660320 kernel: pci 23c7:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 23 23:52:11.660386 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:52:11.660397 kernel: pci 23c7:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 23:52:11.668835 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 23:52:11.669010 kernel: pci 23c7:00:02.0: enabling Extended Tags
Jan 23 23:52:11.690867 kernel: pci 23c7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 23c7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 23 23:52:11.690950 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#99 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:52:11.702838 kernel: pci_bus 23c7:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 23:52:11.708376 kernel: pci 23c7:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 23:52:11.748117 kernel: mlx5_core 23c7:00:02.0: enabling device (0000 -> 0002)
Jan 23 23:52:11.754815 kernel: mlx5_core 23c7:00:02.0: firmware version: 16.30.5026
Jan 23 23:52:11.952269 kernel: hv_netvsc 7ced8dd0-148c-7ced-8dd0-148c7ced8dd0 eth0: VF registering: eth1
Jan 23 23:52:11.952448 kernel: mlx5_core 23c7:00:02.0 eth1: joined to eth0
Jan 23 23:52:11.959818 kernel: mlx5_core 23c7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 23 23:52:11.971833 kernel: mlx5_core 23c7:00:02.0 enP9159s1: renamed from eth1
Jan 23 23:52:12.138830 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (485)
Jan 23 23:52:12.153316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 23 23:52:12.177980 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (483)
Jan 23 23:52:12.187761 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 23 23:52:12.197682 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 23 23:52:12.203127 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 23 23:52:12.234043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:52:12.254873 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 23 23:52:12.269982 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:52:13.283759 disk-uuid[603]: The operation has completed successfully.
Jan 23 23:52:13.290493 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:52:13.348958 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:52:13.353824 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:52:13.384942 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:52:13.394899 sh[716]: Success
Jan 23 23:52:13.419828 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:52:13.692293 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:52:13.699939 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:52:13.705239 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:52:13.738408 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:52:13.738460 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:52:13.744015 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:52:13.748042 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:52:13.751420 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:52:14.057712 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:52:14.062562 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:52:14.077081 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:52:14.086939 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:52:14.114046 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:52:14.114108 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:52:14.117771 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:52:14.153858 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:52:14.162051 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:52:14.173286 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:52:14.182825 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:52:14.194996 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:52:14.213831 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:52:14.233002 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:52:14.255586 systemd-networkd[900]: lo: Link UP
Jan 23 23:52:14.255597 systemd-networkd[900]: lo: Gained carrier
Jan 23 23:52:14.257226 systemd-networkd[900]: Enumeration completed
Jan 23 23:52:14.260060 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:52:14.260339 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:52:14.260342 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:52:14.264961 systemd[1]: Reached target network.target - Network.
Jan 23 23:52:14.341817 kernel: mlx5_core 23c7:00:02.0 enP9159s1: Link up
Jan 23 23:52:14.384286 kernel: hv_netvsc 7ced8dd0-148c-7ced-8dd0-148c7ced8dd0 eth0: Data path switched to VF: enP9159s1
Jan 23 23:52:14.384164 systemd-networkd[900]: enP9159s1: Link UP
Jan 23 23:52:14.384243 systemd-networkd[900]: eth0: Link UP
Jan 23 23:52:14.387523 systemd-networkd[900]: eth0: Gained carrier
Jan 23 23:52:14.387534 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:52:14.403010 systemd-networkd[900]: enP9159s1: Gained carrier
Jan 23 23:52:14.415853 systemd-networkd[900]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 23 23:52:15.067573 ignition[887]: Ignition 2.19.0
Jan 23 23:52:15.067584 ignition[887]: Stage: fetch-offline
Jan 23 23:52:15.070620 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:52:15.067619 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:52:15.082019 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:52:15.067629 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:52:15.067728 ignition[887]: parsed url from cmdline: ""
Jan 23 23:52:15.067731 ignition[887]: no config URL provided
Jan 23 23:52:15.067735 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:52:15.067742 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:52:15.067746 ignition[887]: failed to fetch config: resource requires networking
Jan 23 23:52:15.067926 ignition[887]: Ignition finished successfully
Jan 23 23:52:15.109074 ignition[913]: Ignition 2.19.0
Jan 23 23:52:15.109080 ignition[913]: Stage: fetch
Jan 23 23:52:15.109273 ignition[913]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:52:15.109282 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:52:15.109392 ignition[913]: parsed url from cmdline: ""
Jan 23 23:52:15.109396 ignition[913]: no config URL provided
Jan 23 23:52:15.109400 ignition[913]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:52:15.109407 ignition[913]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:52:15.109434 ignition[913]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 23 23:52:15.199147 ignition[913]: GET result: OK
Jan 23 23:52:15.199209 ignition[913]: config has been read from IMDS userdata
Jan 23 23:52:15.199249 ignition[913]: parsing config with SHA512: d2b8c948c3d99649763e86567b71db03d183e8ed76476c9a4dc5d149f040a1a7d5fdfaac0fef47dfbbb6ff441abe81ee8a7e5e5099feca255ee82b22516e9e71
Jan 23 23:52:15.203067 unknown[913]: fetched base config from "system"
Jan 23 23:52:15.203480 ignition[913]: fetch: fetch complete
Jan 23 23:52:15.203074 unknown[913]: fetched base config from "system"
Jan 23 23:52:15.203484 ignition[913]: fetch: fetch passed
Jan 23 23:52:15.203079 unknown[913]: fetched user config from "azure"
Jan 23 23:52:15.203523 ignition[913]: Ignition finished successfully
Jan 23 23:52:15.209447 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:52:15.225983 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:52:15.246679 ignition[919]: Ignition 2.19.0
Jan 23 23:52:15.246688 ignition[919]: Stage: kargs
Jan 23 23:52:15.250726 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:52:15.246874 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:52:15.246883 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:52:15.247817 ignition[919]: kargs: kargs passed
Jan 23 23:52:15.247863 ignition[919]: Ignition finished successfully
Jan 23 23:52:15.271936 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:52:15.287677 ignition[925]: Ignition 2.19.0
Jan 23 23:52:15.287685 ignition[925]: Stage: disks
Jan 23 23:52:15.287862 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:52:15.293517 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:52:15.287870 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:52:15.301915 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:52:15.288785 ignition[925]: disks: disks passed
Jan 23 23:52:15.310398 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:52:15.288875 ignition[925]: Ignition finished successfully
Jan 23 23:52:15.320442 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:52:15.329172 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:52:15.336650 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:52:15.359964 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:52:15.435837 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 23 23:52:15.446828 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:52:15.464045 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:52:15.519822 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:52:15.519901 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:52:15.524052 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:52:15.563896 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:52:15.584820 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Jan 23 23:52:15.599415 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:52:15.599465 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:52:15.599476 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:52:15.602936 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:52:15.611974 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 23:52:15.629180 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:52:15.624118 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:52:15.624153 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:52:15.635259 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:52:15.643182 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:52:15.665093 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:52:15.853916 systemd-networkd[900]: eth0: Gained IPv6LL
Jan 23 23:52:16.209080 coreos-metadata[961]: Jan 23 23:52:16.209 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 23:52:16.217833 coreos-metadata[961]: Jan 23 23:52:16.217 INFO Fetch successful
Jan 23 23:52:16.221977 coreos-metadata[961]: Jan 23 23:52:16.217 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 23:52:16.231299 coreos-metadata[961]: Jan 23 23:52:16.231 INFO Fetch successful
Jan 23 23:52:16.245902 coreos-metadata[961]: Jan 23 23:52:16.245 INFO wrote hostname ci-4081.3.6-n-2167bbe937 to /sysroot/etc/hostname
Jan 23 23:52:16.253395 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 23:52:16.327589 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:52:16.348726 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:52:16.370891 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:52:16.377983 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:52:17.358669 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:52:17.370895 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:52:17.379952 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:52:17.399447 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:52:17.397541 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:52:17.422518 ignition[1061]: INFO : Ignition 2.19.0
Jan 23 23:52:17.422518 ignition[1061]: INFO : Stage: mount
Jan 23 23:52:17.430867 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:52:17.430867 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:52:17.430867 ignition[1061]: INFO : mount: mount passed
Jan 23 23:52:17.430867 ignition[1061]: INFO : Ignition finished successfully
Jan 23 23:52:17.429877 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:52:17.451929 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:52:17.461822 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:52:17.480144 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:52:17.501820 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1073)
Jan 23 23:52:17.512892 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:52:17.512932 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:52:17.516310 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:52:17.522808 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:52:17.525259 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:52:17.549437 ignition[1090]: INFO : Ignition 2.19.0
Jan 23 23:52:17.549437 ignition[1090]: INFO : Stage: files
Jan 23 23:52:17.556620 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:52:17.556620 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:52:17.556620 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:52:17.579024 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:52:17.585323 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:52:17.668312 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:52:17.675132 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:52:17.675132 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:52:17.668683 unknown[1090]: wrote ssh authorized keys file for user: core
Jan 23 23:52:17.691294 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 23 23:52:17.691294 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 23 23:52:17.691294 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:52:17.691294 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 23 23:52:17.740571 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 23:52:17.871403 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:52:17.880223 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 23:52:17.880223 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 23 23:52:18.066220 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:52:18.147840 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 23 23:52:18.766424 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jan 23 23:52:19.031201 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:52:19.031201 ignition[1090]: INFO : files: op(d): [started] processing unit "containerd.service"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:52:19.047127 ignition[1090]: INFO : files: files passed
Jan 23 23:52:19.047127 ignition[1090]: INFO : Ignition finished successfully
Jan 23 23:52:19.047032 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:52:19.076689 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:52:19.083975 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:52:19.109768 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:52:19.195177 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:52:19.195177 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:52:19.109868 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:52:19.227391 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:52:19.115472 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:52:19.126255 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:52:19.152057 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:52:19.193297 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:52:19.193420 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:52:19.200734 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:52:19.211464 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:52:19.222706 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:52:19.240064 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:52:19.263531 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:52:19.289991 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:52:19.310624 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:52:19.317217 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:52:19.327256 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:52:19.336183 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:52:19.336353 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:52:19.349445 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:52:19.358797 systemd[1]: Stopped target basic.target - Basic System. 
Jan 23 23:52:19.367044 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:52:19.375341 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:52:19.385076 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:52:19.394776 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:52:19.403747 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:52:19.413394 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:52:19.422892 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:52:19.431481 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:52:19.438888 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:52:19.439061 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:52:19.450795 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:52:19.460049 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:52:19.469541 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:52:19.469654 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:52:19.480068 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:52:19.480246 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:52:19.494534 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:52:19.494696 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:52:19.503945 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:52:19.504088 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:52:19.512698 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jan 23 23:52:19.512872 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 23:52:19.538593 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:52:19.545311 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:52:19.545543 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:52:19.576954 ignition[1142]: INFO : Ignition 2.19.0 Jan 23 23:52:19.576954 ignition[1142]: INFO : Stage: umount Jan 23 23:52:19.576954 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:52:19.576954 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:52:19.576954 ignition[1142]: INFO : umount: umount passed Jan 23 23:52:19.576954 ignition[1142]: INFO : Ignition finished successfully Jan 23 23:52:19.576326 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:52:19.585532 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:52:19.586143 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:52:19.600133 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:52:19.600254 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:52:19.607370 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:52:19.608027 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:52:19.608125 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:52:19.616674 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:52:19.616895 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:52:19.622154 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:52:19.622207 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:52:19.629497 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 23 23:52:19.629534 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:52:19.638337 systemd[1]: Stopped target network.target - Network. Jan 23 23:52:19.646742 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:52:19.646796 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:52:19.656379 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:52:19.665676 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:52:19.677890 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:52:19.683495 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:52:19.691359 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:52:19.700063 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:52:19.700145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:52:19.709562 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:52:19.709619 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:52:19.717717 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:52:19.717768 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:52:19.725744 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:52:19.725781 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:52:19.734140 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:52:19.748703 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:52:19.751841 systemd-networkd[900]: eth0: DHCPv6 lease lost Jan 23 23:52:19.757591 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:52:19.759755 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 23 23:52:19.766670 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:52:19.766749 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:52:19.777936 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:52:19.778695 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:52:19.785938 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:52:19.786018 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:52:19.800586 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:52:19.800645 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:52:19.971958 kernel: hv_netvsc 7ced8dd0-148c-7ced-8dd0-148c7ced8dd0 eth0: Data path switched from VF: enP9159s1 Jan 23 23:52:19.807650 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:52:19.807704 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:52:19.831015 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:52:19.838471 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:52:19.838540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:52:19.847627 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:52:19.847734 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:52:19.856234 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:52:19.856272 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:52:19.864891 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:52:19.864929 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 23 23:52:19.874738 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:52:19.912217 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:52:19.912372 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:52:19.926431 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:52:19.926478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:52:19.934519 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:52:19.934550 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:52:19.943042 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:52:19.943086 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:52:19.962699 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:52:19.962756 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:52:19.971821 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:52:19.971875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:52:19.995067 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:52:20.010896 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:52:20.010991 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:52:20.019469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:52:20.019524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:20.029121 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:52:20.029214 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jan 23 23:52:20.039185 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:52:20.039269 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:52:20.048346 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:52:20.073977 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:52:20.214429 systemd[1]: Switching root. Jan 23 23:52:20.240770 systemd-journald[217]: Journal stopped Jan 23 23:52:25.678128 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 23 23:52:25.678151 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:52:25.678161 kernel: SELinux: policy capability open_perms=1 Jan 23 23:52:25.678171 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:52:25.678179 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:52:25.678188 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:52:25.678197 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:52:25.678205 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:52:25.678213 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:52:25.678221 systemd[1]: Successfully loaded SELinux policy in 189.456ms. Jan 23 23:52:25.678232 kernel: audit: type=1403 audit(1769212342.557:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:52:25.678241 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.685ms. Jan 23 23:52:25.678251 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:52:25.678260 systemd[1]: Detected virtualization microsoft. Jan 23 23:52:25.678269 systemd[1]: Detected architecture arm64. 
Jan 23 23:52:25.678280 systemd[1]: Detected first boot. Jan 23 23:52:25.678290 systemd[1]: Hostname set to . Jan 23 23:52:25.678299 systemd[1]: Initializing machine ID from random generator. Jan 23 23:52:25.678308 zram_generator::config[1204]: No configuration found. Jan 23 23:52:25.678318 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:52:25.678327 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:52:25.678337 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 23:52:25.678347 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:52:25.678357 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:52:25.678366 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:52:25.678375 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:52:25.678385 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:52:25.678395 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:52:25.678406 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:52:25.678415 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:52:25.678424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:52:25.678434 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:52:25.678443 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:52:25.678453 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:52:25.678462 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 23 23:52:25.678471 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:52:25.678481 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 23:52:25.678493 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:52:25.678502 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:52:25.678511 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:52:25.678523 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:52:25.678533 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:52:25.678543 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:52:25.678552 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:52:25.678563 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:52:25.678573 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:52:25.678583 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:52:25.678594 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:52:25.678603 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:52:25.678613 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:52:25.678623 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:52:25.678634 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:52:25.678644 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:52:25.678653 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:52:25.678663 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 23 23:52:25.678673 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:52:25.678683 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 23:52:25.678695 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:52:25.678705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:52:25.678715 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:52:25.678725 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:52:25.678735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:52:25.678744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:52:25.678755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:52:25.678764 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:52:25.678774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:52:25.678786 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:52:25.678797 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 23 23:52:25.678813 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 23 23:52:25.678823 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:52:25.678832 kernel: loop: module loaded Jan 23 23:52:25.678841 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:52:25.678851 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 23 23:52:25.678860 kernel: fuse: init (API version 7.39) Jan 23 23:52:25.678871 kernel: ACPI: bus type drm_connector registered Jan 23 23:52:25.678880 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:52:25.678903 systemd-journald[1322]: Collecting audit messages is disabled. Jan 23 23:52:25.678923 systemd-journald[1322]: Journal started Jan 23 23:52:25.678944 systemd-journald[1322]: Runtime Journal (/run/log/journal/cd5314489cb042b7b467dd3ece96e4c0) is 8.0M, max 78.5M, 70.5M free. Jan 23 23:52:25.695405 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:52:25.709500 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:52:25.710663 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:52:25.715598 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:52:25.721176 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:52:25.725885 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:52:25.730600 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:52:25.735318 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:52:25.740060 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:52:25.745747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:52:25.751710 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:52:25.751886 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:52:25.757483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:52:25.757622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:52:25.762642 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 23 23:52:25.762777 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:52:25.767685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:52:25.767830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:52:25.773451 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:52:25.773586 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:52:25.778376 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:52:25.778550 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:52:25.784007 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:52:25.789450 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:52:25.795014 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:52:25.800784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:52:25.813404 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:52:25.823864 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:52:25.830910 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:52:25.839010 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:52:25.858967 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:52:25.865092 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:52:25.870090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 23 23:52:25.871151 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:52:25.875783 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:52:25.879085 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:52:25.887070 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:52:25.898945 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:52:25.908952 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:52:25.914116 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:52:25.919708 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:52:25.929050 systemd-journald[1322]: Time spent on flushing to /var/log/journal/cd5314489cb042b7b467dd3ece96e4c0 is 13.765ms for 883 entries. Jan 23 23:52:25.929050 systemd-journald[1322]: System Journal (/var/log/journal/cd5314489cb042b7b467dd3ece96e4c0) is 8.0M, max 2.6G, 2.6G free. Jan 23 23:52:25.959127 systemd-journald[1322]: Received client request to flush runtime journal. Jan 23 23:52:25.929554 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:52:25.940037 udevadm[1365]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 23 23:52:25.962176 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:52:26.017645 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Jan 23 23:52:26.017659 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Jan 23 23:52:26.022179 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 23 23:52:26.032060 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:52:26.065964 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:52:26.191200 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:52:26.201916 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:52:26.219843 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Jan 23 23:52:26.219859 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Jan 23 23:52:26.226433 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:52:26.714855 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:52:26.725978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:52:26.746922 systemd-udevd[1388]: Using default interface naming scheme 'v255'. Jan 23 23:52:27.070940 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:52:27.091047 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:52:27.118004 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:52:27.155220 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 23 23:52:27.188347 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 23 23:52:27.231856 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 23:52:27.231921 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:52:27.272833 kernel: hv_vmbus: registering driver hv_balloon Jan 23 23:52:27.272894 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 23:52:27.277511 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 23 23:52:27.297146 systemd-networkd[1399]: lo: Link UP Jan 23 23:52:27.297453 systemd-networkd[1399]: lo: Gained carrier Jan 23 23:52:27.299292 systemd-networkd[1399]: Enumeration completed Jan 23 23:52:27.299456 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:52:27.305880 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:27.305887 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:52:27.319069 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 23:52:27.319133 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 23:52:27.325608 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 23:52:27.329643 kernel: Console: switching to colour dummy device 80x25 Jan 23 23:52:27.327221 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:52:27.340072 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:52:27.342096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:52:27.360112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:52:27.362062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:52:27.374989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 23:52:27.388826 kernel: mlx5_core 23c7:00:02.0 enP9159s1: Link up Jan 23 23:52:27.406825 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1406) Jan 23 23:52:27.420196 kernel: hv_netvsc 7ced8dd0-148c-7ced-8dd0-148c7ced8dd0 eth0: Data path switched to VF: enP9159s1 Jan 23 23:52:27.419867 systemd-networkd[1399]: enP9159s1: Link UP Jan 23 23:52:27.419962 systemd-networkd[1399]: eth0: Link UP Jan 23 23:52:27.419966 systemd-networkd[1399]: eth0: Gained carrier Jan 23 23:52:27.419978 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:27.424323 systemd-networkd[1399]: enP9159s1: Gained carrier Jan 23 23:52:27.430878 systemd-networkd[1399]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:52:27.464945 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 23:52:27.528872 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:52:27.539973 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:52:27.608818 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:52:27.639161 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:52:27.644967 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:52:27.656244 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:52:27.660032 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:52:27.681338 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:52:27.686989 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jan 23 23:52:27.692414 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:52:27.692444 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:52:27.697046 systemd[1]: Reached target machines.target - Containers. Jan 23 23:52:27.701999 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:52:27.713948 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:52:27.719997 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:52:27.724603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:52:27.725579 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:52:27.731725 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:52:27.739900 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:52:27.757793 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:52:27.789907 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:52:27.791589 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:52:27.797688 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:52:27.808869 kernel: loop0: detected capacity change from 0 to 114328 Jan 23 23:52:27.872078 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 23:52:28.195834 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:52:28.293039 kernel: loop1: detected capacity change from 0 to 114432 Jan 23 23:52:28.651943 kernel: loop2: detected capacity change from 0 to 207008 Jan 23 23:52:28.717827 kernel: loop3: detected capacity change from 0 to 31320 Jan 23 23:52:29.073877 kernel: loop4: detected capacity change from 0 to 114328 Jan 23 23:52:29.100817 kernel: loop5: detected capacity change from 0 to 114432 Jan 23 23:52:29.112818 kernel: loop6: detected capacity change from 0 to 207008 Jan 23 23:52:29.136820 kernel: loop7: detected capacity change from 0 to 31320 Jan 23 23:52:29.144205 (sd-merge)[1508]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 23 23:52:29.145830 (sd-merge)[1508]: Merged extensions into '/usr'. Jan 23 23:52:29.149022 systemd[1]: Reloading requested from client PID 1489 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:52:29.149246 systemd[1]: Reloading... Jan 23 23:52:29.201836 zram_generator::config[1539]: No configuration found. Jan 23 23:52:29.293914 systemd-networkd[1399]: eth0: Gained IPv6LL Jan 23 23:52:29.330439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:52:29.407524 systemd[1]: Reloading finished in 257 ms. Jan 23 23:52:29.421628 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:52:29.427679 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:52:29.446913 systemd[1]: Starting ensure-sysext.service... Jan 23 23:52:29.453955 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 23 23:52:29.463314 systemd[1]: Reloading requested from client PID 1599 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:52:29.463437 systemd[1]: Reloading... Jan 23 23:52:29.490186 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:52:29.490464 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:52:29.492114 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:52:29.492341 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Jan 23 23:52:29.492393 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Jan 23 23:52:29.512534 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:52:29.512547 systemd-tmpfiles[1600]: Skipping /boot Jan 23 23:52:29.523980 zram_generator::config[1626]: No configuration found. Jan 23 23:52:29.524760 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:52:29.524944 systemd-tmpfiles[1600]: Skipping /boot Jan 23 23:52:29.639214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:52:29.713666 systemd[1]: Reloading finished in 249 ms. Jan 23 23:52:29.728728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:52:29.749077 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:52:29.755024 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:52:29.762644 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 23 23:52:29.771023 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:52:29.780985 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:52:29.792637 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:52:29.801062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:52:29.817317 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:52:29.838024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:52:29.844940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:52:29.845761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:52:29.845955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:52:29.855276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:52:29.855435 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:52:29.862530 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:52:29.862706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:52:29.879477 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:52:29.886768 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:52:29.893111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:52:29.901449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:52:29.907021 systemd-resolved[1698]: Positive Trust Anchors: Jan 23 23:52:29.907032 systemd-resolved[1698]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:52:29.907063 systemd-resolved[1698]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:52:29.908948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:52:29.922954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:52:29.923958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:52:29.929922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:52:29.930076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:52:29.935772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:52:29.935980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:52:29.942069 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:52:29.942984 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:52:29.950693 systemd-resolved[1698]: Using system hostname 'ci-4081.3.6-n-2167bbe937'. Jan 23 23:52:29.952580 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:52:29.960221 augenrules[1733]: No rules Jan 23 23:52:29.962406 systemd[1]: Reached target network.target - Network. Jan 23 23:52:29.966482 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 23 23:52:29.971326 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:52:29.976660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:52:29.983942 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:52:29.990050 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:52:29.997970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:52:30.008975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:52:30.013605 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:52:30.013745 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:52:30.019113 systemd[1]: Finished ensure-sysext.service. Jan 23 23:52:30.023122 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:52:30.028523 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:52:30.028768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:52:30.034555 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:52:30.034794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:52:30.040266 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:52:30.040513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:52:30.047368 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:52:30.047523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:52:30.055146 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 23 23:52:30.055234 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:52:30.273634 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:52:30.279831 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:52:32.978526 ldconfig[1486]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:52:32.989233 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:52:33.001020 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:52:33.012826 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:52:33.018184 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:52:33.022780 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:52:33.028173 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:52:33.033744 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:52:33.038348 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:52:33.043636 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:52:33.049124 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:52:33.049157 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:52:33.053286 systemd[1]: Reached target timers.target - Timer Units. 
Jan 23 23:52:33.058489 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:52:33.065054 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:52:33.073329 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:52:33.078327 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:52:33.083135 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:52:33.087165 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:52:33.091385 systemd[1]: System is tainted: cgroupsv1 Jan 23 23:52:33.091433 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:52:33.091460 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:52:33.113896 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 23:52:33.119464 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:52:33.139932 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:52:33.148773 (chronyd)[1769]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 23 23:52:33.149373 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:52:33.156923 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:52:33.163299 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:52:33.167599 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:52:33.167639 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). 
Jan 23 23:52:33.172038 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 23:52:33.176782 jq[1776]: false Jan 23 23:52:33.179908 KVP[1778]: KVP starting; pid is:1778 Jan 23 23:52:33.180622 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 23:52:33.181924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:52:33.184343 chronyd[1782]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 23 23:52:33.193021 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:52:33.198984 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:52:33.205967 KVP[1778]: KVP LIC Version: 3.1 Jan 23 23:52:33.207334 kernel: hv_utils: KVP IC version 4.0 Jan 23 23:52:33.209485 chronyd[1782]: Timezone right/UTC failed leap second check, ignoring Jan 23 23:52:33.209664 chronyd[1782]: Loaded seccomp filter (level 2) Jan 23 23:52:33.218979 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:52:33.232085 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 23 23:52:33.244180 extend-filesystems[1777]: Found loop4 Jan 23 23:52:33.248111 extend-filesystems[1777]: Found loop5 Jan 23 23:52:33.248111 extend-filesystems[1777]: Found loop6 Jan 23 23:52:33.248111 extend-filesystems[1777]: Found loop7 Jan 23 23:52:33.248111 extend-filesystems[1777]: Found sda Jan 23 23:52:33.248111 extend-filesystems[1777]: Found sda1 Jan 23 23:52:33.248111 extend-filesystems[1777]: Found sda2 Jan 23 23:52:33.248111 extend-filesystems[1777]: Found sda3 Jan 23 23:52:33.248111 extend-filesystems[1777]: Found usr Jan 23 23:52:33.248111 extend-filesystems[1777]: Found sda4 Jan 23 23:52:33.248111 extend-filesystems[1777]: Found sda6 Jan 23 23:52:33.248111 extend-filesystems[1777]: Found sda7 Jan 23 23:52:33.248111 extend-filesystems[1777]: Found sda9 Jan 23 23:52:33.248111 extend-filesystems[1777]: Checking size of /dev/sda9 Jan 23 23:52:33.342957 extend-filesystems[1777]: Old size kept for /dev/sda9 Jan 23 23:52:33.342957 extend-filesystems[1777]: Found sr0 Jan 23 23:52:33.252968 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:52:33.352058 dbus-daemon[1775]: [system] SELinux support is enabled Jan 23 23:52:33.273058 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:52:33.283156 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:52:33.292204 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:52:33.384756 update_engine[1808]: I20260123 23:52:33.376008 1808 main.cc:92] Flatcar Update Engine starting Jan 23 23:52:33.384756 update_engine[1808]: I20260123 23:52:33.384362 1808 update_check_scheduler.cc:74] Next update check in 3m47s Jan 23 23:52:33.303911 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:52:33.385074 jq[1812]: true Jan 23 23:52:33.326421 systemd[1]: Started chronyd.service - NTP client/server. 
Jan 23 23:52:33.342332 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:52:33.342563 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:52:33.342789 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:52:33.345034 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:52:33.375953 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:52:33.391216 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:52:33.391559 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:52:33.398221 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:52:33.404216 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:52:33.404530 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:52:33.428641 systemd-logind[1803]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 23:52:33.431601 systemd-logind[1803]: New seat seat0. Jan 23 23:52:33.436245 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:52:33.443327 coreos-metadata[1772]: Jan 23 23:52:33.442 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 23:52:33.452370 coreos-metadata[1772]: Jan 23 23:52:33.449 INFO Fetch successful Jan 23 23:52:33.452370 coreos-metadata[1772]: Jan 23 23:52:33.449 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 23:52:33.449983 (ntainerd)[1834]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:52:33.453568 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 23 23:52:33.466996 jq[1833]: true Jan 23 23:52:33.460242 dbus-daemon[1775]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:52:33.467223 coreos-metadata[1772]: Jan 23 23:52:33.460 INFO Fetch successful Jan 23 23:52:33.467223 coreos-metadata[1772]: Jan 23 23:52:33.460 INFO Fetching http://168.63.129.16/machine/7a18e200-0ed3-4e9d-b587-27224a92028d/290aa8e5%2D6923%2D4b67%2D85ea%2D7093b69846d1.%5Fci%2D4081.3.6%2Dn%2D2167bbe937?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 23:52:33.453598 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:52:33.461650 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:52:33.461667 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:52:33.467635 coreos-metadata[1772]: Jan 23 23:52:33.467 INFO Fetch successful Jan 23 23:52:33.467675 coreos-metadata[1772]: Jan 23 23:52:33.467 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 23:52:33.470654 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:52:33.487463 coreos-metadata[1772]: Jan 23 23:52:33.487 INFO Fetch successful Jan 23 23:52:33.489215 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:52:33.492401 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:52:33.509997 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1830) Jan 23 23:52:33.517188 tar[1828]: linux-arm64/LICENSE Jan 23 23:52:33.521458 tar[1828]: linux-arm64/helm Jan 23 23:52:33.557086 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jan 23 23:52:33.566752 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:52:33.664149 bash[1901]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:52:33.668933 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:52:33.680831 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 23:52:33.691152 locksmithd[1860]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:52:34.095814 tar[1828]: linux-arm64/README.md Jan 23 23:52:34.115403 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:52:34.126817 containerd[1834]: time="2026-01-23T23:52:34.126726360Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:52:34.155094 containerd[1834]: time="2026-01-23T23:52:34.154524640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.155982560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.156017160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.156033440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.156185480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.156201240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.156260560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.156272400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.156474240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.156490120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.156502560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:52:34.156804 containerd[1834]: time="2026-01-23T23:52:34.156512480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:34.157021 containerd[1834]: time="2026-01-23T23:52:34.156598400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:52:34.157021 containerd[1834]: time="2026-01-23T23:52:34.156777920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:52:34.157021 containerd[1834]: time="2026-01-23T23:52:34.156925520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:52:34.157021 containerd[1834]: time="2026-01-23T23:52:34.156940560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:52:34.157021 containerd[1834]: time="2026-01-23T23:52:34.157014160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:52:34.157108 containerd[1834]: time="2026-01-23T23:52:34.157050440Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:52:34.168308 containerd[1834]: time="2026-01-23T23:52:34.167593680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:52:34.168308 containerd[1834]: time="2026-01-23T23:52:34.167646080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:52:34.168308 containerd[1834]: time="2026-01-23T23:52:34.167661680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:52:34.168308 containerd[1834]: time="2026-01-23T23:52:34.167676160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:52:34.168308 containerd[1834]: time="2026-01-23T23:52:34.167691480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:52:34.168308 containerd[1834]: time="2026-01-23T23:52:34.167904080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 23 23:52:34.168308 containerd[1834]: time="2026-01-23T23:52:34.168267160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168371160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168386080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168399160Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168411680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168424360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168436840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168454320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168468600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168481200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168493320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168505120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:52:34.168523 containerd[1834]: time="2026-01-23T23:52:34.168524840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168539080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168550440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168562960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168579400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168591840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168603720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168615520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168627640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168641160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168652880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168664320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168675560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168689800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168711840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.168740 containerd[1834]: time="2026-01-23T23:52:34.168723840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.169059 containerd[1834]: time="2026-01-23T23:52:34.168735120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:52:34.169059 containerd[1834]: time="2026-01-23T23:52:34.168782640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:52:34.169059 containerd[1834]: time="2026-01-23T23:52:34.168810480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:52:34.169059 containerd[1834]: time="2026-01-23T23:52:34.168822560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:52:34.169059 containerd[1834]: time="2026-01-23T23:52:34.168834040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:52:34.169059 containerd[1834]: time="2026-01-23T23:52:34.168843560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:52:34.169059 containerd[1834]: time="2026-01-23T23:52:34.168854840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:52:34.169059 containerd[1834]: time="2026-01-23T23:52:34.168865200Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:52:34.169059 containerd[1834]: time="2026-01-23T23:52:34.168875120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:52:34.169216 containerd[1834]: time="2026-01-23T23:52:34.169140880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:52:34.169216 containerd[1834]: time="2026-01-23T23:52:34.169203600Z" level=info msg="Connect containerd service" Jan 23 23:52:34.169338 containerd[1834]: time="2026-01-23T23:52:34.169234880Z" level=info msg="using legacy CRI server" Jan 23 23:52:34.169338 containerd[1834]: time="2026-01-23T23:52:34.169243280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:52:34.169371 containerd[1834]: time="2026-01-23T23:52:34.169343120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:52:34.176634 containerd[1834]: time="2026-01-23T23:52:34.169856480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:52:34.176634 containerd[1834]: time="2026-01-23T23:52:34.170284680Z" level=info msg="Start subscribing containerd event" Jan 23 23:52:34.176634 containerd[1834]: time="2026-01-23T23:52:34.170379640Z" level=info msg="Start recovering state" Jan 23 23:52:34.176634 containerd[1834]: time="2026-01-23T23:52:34.170447560Z" level=info msg="Start event monitor" Jan 23 23:52:34.176634 containerd[1834]: time="2026-01-23T23:52:34.170506840Z" 
level=info msg="Start snapshots syncer" Jan 23 23:52:34.176634 containerd[1834]: time="2026-01-23T23:52:34.170517160Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:52:34.176634 containerd[1834]: time="2026-01-23T23:52:34.170529080Z" level=info msg="Start streaming server" Jan 23 23:52:34.176634 containerd[1834]: time="2026-01-23T23:52:34.170803640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:52:34.176634 containerd[1834]: time="2026-01-23T23:52:34.170854000Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:52:34.176634 containerd[1834]: time="2026-01-23T23:52:34.171300320Z" level=info msg="containerd successfully booted in 0.047658s" Jan 23 23:52:34.171429 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:52:34.428970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:52:34.436433 (kubelet)[1933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:52:34.836281 kubelet[1933]: E0123 23:52:34.836184 1933 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:52:34.839973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:52:34.840120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:52:35.747499 sshd_keygen[1811]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:52:35.766345 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:52:35.779075 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 23 23:52:35.785298 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 23:52:35.790445 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:52:35.790764 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:52:35.800167 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:52:35.816281 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:52:35.826914 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 23:52:35.834867 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:52:35.840929 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 23:52:35.846519 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:52:35.850993 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:52:35.856882 systemd[1]: Startup finished in 13.138s (kernel) + 13.487s (userspace) = 26.625s. Jan 23 23:52:36.320992 login[1968]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:52:36.323021 login[1969]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:52:36.329264 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:52:36.337116 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:52:36.339274 systemd-logind[1803]: New session 1 of user core. Jan 23 23:52:36.342269 systemd-logind[1803]: New session 2 of user core. Jan 23 23:52:36.362743 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:52:36.371000 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:52:36.387473 (systemd)[1978]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:52:36.705478 systemd[1978]: Queued start job for default target default.target. 
Jan 23 23:52:36.706135 systemd[1978]: Created slice app.slice - User Application Slice. Jan 23 23:52:36.706156 systemd[1978]: Reached target paths.target - Paths. Jan 23 23:52:36.706167 systemd[1978]: Reached target timers.target - Timers. Jan 23 23:52:36.716913 systemd[1978]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:52:36.726122 systemd[1978]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:52:36.726179 systemd[1978]: Reached target sockets.target - Sockets. Jan 23 23:52:36.726191 systemd[1978]: Reached target basic.target - Basic System. Jan 23 23:52:36.726228 systemd[1978]: Reached target default.target - Main User Target. Jan 23 23:52:36.726256 systemd[1978]: Startup finished in 330ms. Jan 23 23:52:36.726563 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:52:36.735088 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:52:36.735847 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 23 23:52:37.636533 waagent[1966]: 2026-01-23T23:52:37.636448Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 23 23:52:37.641186 waagent[1966]: 2026-01-23T23:52:37.641132Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 23 23:52:37.644935 waagent[1966]: 2026-01-23T23:52:37.644897Z INFO Daemon Daemon Python: 3.11.9 Jan 23 23:52:37.649925 waagent[1966]: 2026-01-23T23:52:37.649872Z INFO Daemon Daemon Run daemon Jan 23 23:52:37.653483 waagent[1966]: 2026-01-23T23:52:37.653446Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 23 23:52:37.660561 waagent[1966]: 2026-01-23T23:52:37.660521Z INFO Daemon Daemon Using waagent for provisioning Jan 23 23:52:37.665096 waagent[1966]: 2026-01-23T23:52:37.664979Z INFO Daemon Daemon Activate resource disk Jan 23 23:52:37.668930 waagent[1966]: 2026-01-23T23:52:37.668890Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 23:52:37.678970 waagent[1966]: 2026-01-23T23:52:37.678921Z INFO Daemon Daemon Found device: None Jan 23 23:52:37.682620 waagent[1966]: 2026-01-23T23:52:37.682581Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 23:52:37.689449 waagent[1966]: 2026-01-23T23:52:37.689413Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 23:52:37.699694 waagent[1966]: 2026-01-23T23:52:37.699644Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:52:37.704339 waagent[1966]: 2026-01-23T23:52:37.704296Z INFO Daemon Daemon Running default provisioning handler Jan 23 23:52:37.715294 waagent[1966]: 2026-01-23T23:52:37.715231Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jan 23 23:52:37.726101 waagent[1966]: 2026-01-23T23:52:37.726046Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 23:52:37.734235 waagent[1966]: 2026-01-23T23:52:37.734195Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 23:52:37.738431 waagent[1966]: 2026-01-23T23:52:37.738396Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 23:52:37.847232 waagent[1966]: 2026-01-23T23:52:37.847134Z INFO Daemon Daemon Successfully mounted dvd Jan 23 23:52:37.860225 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 23 23:52:37.862724 waagent[1966]: 2026-01-23T23:52:37.862657Z INFO Daemon Daemon Detect protocol endpoint Jan 23 23:52:37.866617 waagent[1966]: 2026-01-23T23:52:37.866568Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:52:37.871170 waagent[1966]: 2026-01-23T23:52:37.871124Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 23 23:52:37.876276 waagent[1966]: 2026-01-23T23:52:37.876236Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 23:52:37.880396 waagent[1966]: 2026-01-23T23:52:37.880354Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 23:52:37.884250 waagent[1966]: 2026-01-23T23:52:37.884215Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 23:52:37.927490 waagent[1966]: 2026-01-23T23:52:37.927402Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 23:52:37.932801 waagent[1966]: 2026-01-23T23:52:37.932775Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 23:52:37.937054 waagent[1966]: 2026-01-23T23:52:37.937007Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 23:52:38.208874 waagent[1966]: 2026-01-23T23:52:38.208622Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 23:52:38.214007 waagent[1966]: 2026-01-23T23:52:38.213949Z INFO Daemon Daemon Forcing an update of the goal state. 
Jan 23 23:52:38.221070 waagent[1966]: 2026-01-23T23:52:38.221023Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:52:38.239435 waagent[1966]: 2026-01-23T23:52:38.239393Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 23:52:38.244166 waagent[1966]: 2026-01-23T23:52:38.244121Z INFO Daemon Jan 23 23:52:38.246423 waagent[1966]: 2026-01-23T23:52:38.246381Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8fe3cf2d-44e5-4840-85fd-8f15bb495a93 eTag: 10088688830176533128 source: Fabric] Jan 23 23:52:38.255374 waagent[1966]: 2026-01-23T23:52:38.255325Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 23:52:38.260807 waagent[1966]: 2026-01-23T23:52:38.260763Z INFO Daemon Jan 23 23:52:38.263218 waagent[1966]: 2026-01-23T23:52:38.263172Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:52:38.274489 waagent[1966]: 2026-01-23T23:52:38.274458Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 23:52:38.348757 waagent[1966]: 2026-01-23T23:52:38.348678Z INFO Daemon Downloaded certificate {'thumbprint': 'AC46AE8EDB9292E37B510F226AA69595BB4A0A1D', 'hasPrivateKey': True} Jan 23 23:52:38.356653 waagent[1966]: 2026-01-23T23:52:38.356604Z INFO Daemon Fetch goal state completed Jan 23 23:52:38.366511 waagent[1966]: 2026-01-23T23:52:38.366475Z INFO Daemon Daemon Starting provisioning Jan 23 23:52:38.370509 waagent[1966]: 2026-01-23T23:52:38.370462Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 23 23:52:38.374096 waagent[1966]: 2026-01-23T23:52:38.374061Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-2167bbe937] Jan 23 23:52:38.400678 waagent[1966]: 2026-01-23T23:52:38.400610Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-2167bbe937] Jan 23 23:52:38.405616 waagent[1966]: 2026-01-23T23:52:38.405570Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 23:52:38.410484 waagent[1966]: 2026-01-23T23:52:38.410445Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 23:52:38.451065 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:52:38.451071 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:52:38.451111 systemd-networkd[1399]: eth0: DHCP lease lost Jan 23 23:52:38.452826 waagent[1966]: 2026-01-23T23:52:38.452336Z INFO Daemon Daemon Create user account if not exists Jan 23 23:52:38.456914 waagent[1966]: 2026-01-23T23:52:38.456768Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 23:52:38.459891 systemd-networkd[1399]: eth0: DHCPv6 lease lost Jan 23 23:52:38.461656 waagent[1966]: 2026-01-23T23:52:38.461602Z INFO Daemon Daemon Configure sudoer Jan 23 23:52:38.465343 waagent[1966]: 2026-01-23T23:52:38.465296Z INFO Daemon Daemon Configure sshd Jan 23 23:52:38.468874 waagent[1966]: 2026-01-23T23:52:38.468829Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 23:52:38.479367 waagent[1966]: 2026-01-23T23:52:38.478791Z INFO Daemon Daemon Deploy ssh public key. 
Jan 23 23:52:38.488893 systemd-networkd[1399]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:52:39.558179 waagent[1966]: 2026-01-23T23:52:39.558132Z INFO Daemon Daemon Provisioning complete Jan 23 23:52:39.573410 waagent[1966]: 2026-01-23T23:52:39.573370Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 23:52:39.578269 waagent[1966]: 2026-01-23T23:52:39.578221Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 23:52:39.585966 waagent[1966]: 2026-01-23T23:52:39.585930Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 23 23:52:39.712482 waagent[2033]: 2026-01-23T23:52:39.712409Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 23 23:52:39.713444 waagent[2033]: 2026-01-23T23:52:39.712948Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 23 23:52:39.713444 waagent[2033]: 2026-01-23T23:52:39.713020Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 23 23:52:39.751846 waagent[2033]: 2026-01-23T23:52:39.750543Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 23 23:52:39.751846 waagent[2033]: 2026-01-23T23:52:39.750775Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:52:39.751846 waagent[2033]: 2026-01-23T23:52:39.750860Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:52:39.758656 waagent[2033]: 2026-01-23T23:52:39.758595Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:52:39.763709 waagent[2033]: 2026-01-23T23:52:39.763670Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 23:52:39.764206 waagent[2033]: 2026-01-23T23:52:39.764164Z INFO ExtHandler Jan 23 23:52:39.764275 waagent[2033]: 
2026-01-23T23:52:39.764248Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1802dab0-1267-42ef-a47b-a5234b22fc3d eTag: 10088688830176533128 source: Fabric] Jan 23 23:52:39.764554 waagent[2033]: 2026-01-23T23:52:39.764518Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 23:52:39.765131 waagent[2033]: 2026-01-23T23:52:39.765087Z INFO ExtHandler Jan 23 23:52:39.765192 waagent[2033]: 2026-01-23T23:52:39.765167Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:52:39.768214 waagent[2033]: 2026-01-23T23:52:39.768185Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:52:39.834070 waagent[2033]: 2026-01-23T23:52:39.833939Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AC46AE8EDB9292E37B510F226AA69595BB4A0A1D', 'hasPrivateKey': True} Jan 23 23:52:39.834529 waagent[2033]: 2026-01-23T23:52:39.834484Z INFO ExtHandler Fetch goal state completed Jan 23 23:52:39.848215 waagent[2033]: 2026-01-23T23:52:39.848168Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2033 Jan 23 23:52:39.848361 waagent[2033]: 2026-01-23T23:52:39.848329Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 23:52:39.849919 waagent[2033]: 2026-01-23T23:52:39.849877Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 23:52:39.850268 waagent[2033]: 2026-01-23T23:52:39.850235Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 23:52:39.886752 waagent[2033]: 2026-01-23T23:52:39.886710Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 23:52:39.886957 waagent[2033]: 2026-01-23T23:52:39.886916Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 
23:52:39.893216 waagent[2033]: 2026-01-23T23:52:39.893177Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 23:52:39.899335 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit waagent.service)... Jan 23 23:52:39.899583 systemd[1]: Reloading... Jan 23 23:52:39.957878 zram_generator::config[2079]: No configuration found. Jan 23 23:52:40.078257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:52:40.157290 systemd[1]: Reloading finished in 257 ms. Jan 23 23:52:40.177244 waagent[2033]: 2026-01-23T23:52:40.177155Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 23 23:52:40.184031 systemd[1]: Reloading requested from client PID 2140 ('systemctl') (unit waagent.service)... Jan 23 23:52:40.184045 systemd[1]: Reloading... Jan 23 23:52:40.252888 zram_generator::config[2177]: No configuration found. Jan 23 23:52:40.352641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:52:40.425985 systemd[1]: Reloading finished in 241 ms. Jan 23 23:52:40.448331 waagent[2033]: 2026-01-23T23:52:40.448204Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 23:52:40.448423 waagent[2033]: 2026-01-23T23:52:40.448369Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 23:52:40.760385 waagent[2033]: 2026-01-23T23:52:40.760253Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Jan 23 23:52:40.760906 waagent[2033]: 2026-01-23T23:52:40.760858Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 23 23:52:40.761667 waagent[2033]: 2026-01-23T23:52:40.761593Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 23:52:40.762048 waagent[2033]: 2026-01-23T23:52:40.761949Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 23:52:40.762327 waagent[2033]: 2026-01-23T23:52:40.762229Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 23:52:40.762467 waagent[2033]: 2026-01-23T23:52:40.762327Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 23:52:40.763449 waagent[2033]: 2026-01-23T23:52:40.762606Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:52:40.763449 waagent[2033]: 2026-01-23T23:52:40.762703Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:52:40.763449 waagent[2033]: 2026-01-23T23:52:40.762862Z INFO EnvHandler ExtHandler Configure routes Jan 23 23:52:40.763449 waagent[2033]: 2026-01-23T23:52:40.762940Z INFO EnvHandler ExtHandler Gateway:None Jan 23 23:52:40.763449 waagent[2033]: 2026-01-23T23:52:40.762987Z INFO EnvHandler ExtHandler Routes:None Jan 23 23:52:40.763759 waagent[2033]: 2026-01-23T23:52:40.763696Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 23:52:40.763923 waagent[2033]: 2026-01-23T23:52:40.763872Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 23 23:52:40.765134 waagent[2033]: 2026-01-23T23:52:40.765079Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 23:52:40.766960 waagent[2033]: 2026-01-23T23:52:40.766927Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:52:40.767556 waagent[2033]: 2026-01-23T23:52:40.767518Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:52:40.767881 waagent[2033]: 2026-01-23T23:52:40.767839Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 23:52:40.768292 waagent[2033]: 2026-01-23T23:52:40.768166Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 23:52:40.768292 waagent[2033]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 23:52:40.768292 waagent[2033]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 23:52:40.768292 waagent[2033]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 23:52:40.768292 waagent[2033]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:52:40.768292 waagent[2033]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:52:40.768292 waagent[2033]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:52:40.772899 waagent[2033]: 2026-01-23T23:52:40.772859Z INFO ExtHandler ExtHandler Jan 23 23:52:40.773075 waagent[2033]: 2026-01-23T23:52:40.773039Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a2f567ab-1b0a-438d-9368-fd03339b6783 correlation cef49254-3829-4158-9543-bb8de3d64675 created: 2026-01-23T23:51:40.757553Z] Jan 23 23:52:40.773518 waagent[2033]: 2026-01-23T23:52:40.773477Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 23 23:52:40.774456 waagent[2033]: 2026-01-23T23:52:40.774421Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 23 23:52:40.803753 waagent[2033]: 2026-01-23T23:52:40.803692Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2D8E85EE-FEC7-44C6-A85A-2AF524992DF0;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 23 23:52:40.845833 waagent[2033]: 2026-01-23T23:52:40.845467Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 23:52:40.845833 waagent[2033]: Executing ['ip', '-a', '-o', 'link']: Jan 23 23:52:40.845833 waagent[2033]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 23:52:40.845833 waagent[2033]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d0:14:8c brd ff:ff:ff:ff:ff:ff Jan 23 23:52:40.845833 waagent[2033]: 3: enP9159s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d0:14:8c brd ff:ff:ff:ff:ff:ff\ altname enP9159p0s2 Jan 23 23:52:40.845833 waagent[2033]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 23:52:40.845833 waagent[2033]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 23:52:40.845833 waagent[2033]: 2: eth0 inet 10.200.20.22/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 23:52:40.845833 waagent[2033]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 23:52:40.845833 waagent[2033]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 23:52:40.845833 waagent[2033]: 2: eth0 inet6 fe80::7eed:8dff:fed0:148c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 23:52:40.871515 waagent[2033]: 2026-01-23T23:52:40.871051Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 23 23:52:40.871515 waagent[2033]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:52:40.871515 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:52:40.871515 waagent[2033]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:52:40.871515 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:52:40.871515 waagent[2033]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:52:40.871515 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:52:40.871515 waagent[2033]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 23:52:40.871515 waagent[2033]: 2 303 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 23:52:40.871515 waagent[2033]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 23:52:40.875094 waagent[2033]: 2026-01-23T23:52:40.875032Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 23:52:40.875094 waagent[2033]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:52:40.875094 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:52:40.875094 waagent[2033]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:52:40.875094 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:52:40.875094 waagent[2033]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:52:40.875094 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:52:40.875094 waagent[2033]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 23:52:40.875094 waagent[2033]: 12 1405 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 23:52:40.875094 waagent[2033]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 23:52:40.875617 waagent[2033]: 2026-01-23T23:52:40.875585Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 23 
23:52:44.851236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:52:44.861039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:52:44.962961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:52:44.966599 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:52:45.088218 kubelet[2276]: E0123 23:52:45.088167 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:52:45.090935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:52:45.091081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:52:55.101414 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:52:55.110035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:52:55.415982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:52:55.420062 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:52:55.455788 kubelet[2296]: E0123 23:52:55.455704 2296 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:52:55.458104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:52:55.458284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:52:56.836846 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:52:56.848144 systemd[1]: Started sshd@0-10.200.20.22:22-10.200.16.10:32918.service - OpenSSH per-connection server daemon (10.200.16.10:32918). Jan 23 23:52:57.002515 chronyd[1782]: Selected source PHC0 Jan 23 23:52:57.335080 sshd[2303]: Accepted publickey for core from 10.200.16.10 port 32918 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:52:57.336440 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:52:57.340808 systemd-logind[1803]: New session 3 of user core. Jan 23 23:52:57.346129 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:52:57.738076 systemd[1]: Started sshd@1-10.200.20.22:22-10.200.16.10:32922.service - OpenSSH per-connection server daemon (10.200.16.10:32922). Jan 23 23:52:58.198775 sshd[2308]: Accepted publickey for core from 10.200.16.10 port 32922 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:52:58.200097 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:52:58.203659 systemd-logind[1803]: New session 4 of user core. 
Jan 23 23:52:58.211166 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:52:58.525997 sshd[2308]: pam_unix(sshd:session): session closed for user core Jan 23 23:52:58.529358 systemd[1]: sshd@1-10.200.20.22:22-10.200.16.10:32922.service: Deactivated successfully. Jan 23 23:52:58.532433 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:52:58.533490 systemd-logind[1803]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:52:58.534270 systemd-logind[1803]: Removed session 4. Jan 23 23:52:58.631109 systemd[1]: Started sshd@2-10.200.20.22:22-10.200.16.10:32930.service - OpenSSH per-connection server daemon (10.200.16.10:32930). Jan 23 23:52:59.118571 sshd[2316]: Accepted publickey for core from 10.200.16.10 port 32930 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:52:59.119899 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:52:59.124698 systemd-logind[1803]: New session 5 of user core. Jan 23 23:52:59.130049 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:52:59.469990 sshd[2316]: pam_unix(sshd:session): session closed for user core Jan 23 23:52:59.473415 systemd[1]: sshd@2-10.200.20.22:22-10.200.16.10:32930.service: Deactivated successfully. Jan 23 23:52:59.476064 systemd-logind[1803]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:52:59.476581 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:52:59.477433 systemd-logind[1803]: Removed session 5. Jan 23 23:52:59.555014 systemd[1]: Started sshd@3-10.200.20.22:22-10.200.16.10:40682.service - OpenSSH per-connection server daemon (10.200.16.10:40682). 
Jan 23 23:53:00.040833 sshd[2324]: Accepted publickey for core from 10.200.16.10 port 40682 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:00.042132 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:00.045675 systemd-logind[1803]: New session 6 of user core. Jan 23 23:53:00.053011 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:53:00.394992 sshd[2324]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:00.398079 systemd[1]: sshd@3-10.200.20.22:22-10.200.16.10:40682.service: Deactivated successfully. Jan 23 23:53:00.400892 systemd-logind[1803]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:53:00.401478 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:53:00.402326 systemd-logind[1803]: Removed session 6. Jan 23 23:53:00.494044 systemd[1]: Started sshd@4-10.200.20.22:22-10.200.16.10:40698.service - OpenSSH per-connection server daemon (10.200.16.10:40698). Jan 23 23:53:00.976378 sshd[2332]: Accepted publickey for core from 10.200.16.10 port 40698 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:00.977658 sshd[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:00.982507 systemd-logind[1803]: New session 7 of user core. Jan 23 23:53:00.988125 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:53:01.361378 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:53:01.361648 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:01.375886 sudo[2336]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:01.454163 sshd[2332]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:01.457790 systemd[1]: sshd@4-10.200.20.22:22-10.200.16.10:40698.service: Deactivated successfully. 
Jan 23 23:53:01.460462 systemd-logind[1803]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:53:01.460504 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:53:01.462188 systemd-logind[1803]: Removed session 7. Jan 23 23:53:01.540021 systemd[1]: Started sshd@5-10.200.20.22:22-10.200.16.10:40710.service - OpenSSH per-connection server daemon (10.200.16.10:40710). Jan 23 23:53:02.023203 sshd[2341]: Accepted publickey for core from 10.200.16.10 port 40710 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:02.024580 sshd[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:02.028433 systemd-logind[1803]: New session 8 of user core. Jan 23 23:53:02.038172 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 23:53:02.297526 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:53:02.297795 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:02.301111 sudo[2346]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:02.305387 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:53:02.305632 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:02.315992 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:53:02.318652 auditctl[2349]: No rules Jan 23 23:53:02.319068 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:53:02.319278 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:53:02.326242 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:53:02.344034 augenrules[2368]: No rules Jan 23 23:53:02.346168 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 23 23:53:02.347440 sudo[2345]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:02.425000 sshd[2341]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:02.427516 systemd-logind[1803]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:53:02.427654 systemd[1]: sshd@5-10.200.20.22:22-10.200.16.10:40710.service: Deactivated successfully. Jan 23 23:53:02.430408 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:53:02.431573 systemd-logind[1803]: Removed session 8. Jan 23 23:53:02.498042 systemd[1]: Started sshd@6-10.200.20.22:22-10.200.16.10:40726.service - OpenSSH per-connection server daemon (10.200.16.10:40726). Jan 23 23:53:02.946281 sshd[2377]: Accepted publickey for core from 10.200.16.10 port 40726 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:02.947584 sshd[2377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:02.951648 systemd-logind[1803]: New session 9 of user core. Jan 23 23:53:02.960076 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:53:03.203091 sudo[2381]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:53:03.203366 sudo[2381]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:53:04.317023 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:53:04.317241 (dockerd)[2396]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:53:04.849661 dockerd[2396]: time="2026-01-23T23:53:04.849609344Z" level=info msg="Starting up" Jan 23 23:53:05.166011 dockerd[2396]: time="2026-01-23T23:53:05.165979309Z" level=info msg="Loading containers: start." 
Jan 23 23:53:05.293822 kernel: Initializing XFRM netlink socket Jan 23 23:53:05.448928 systemd-networkd[1399]: docker0: Link UP Jan 23 23:53:05.468936 dockerd[2396]: time="2026-01-23T23:53:05.468900672Z" level=info msg="Loading containers: done." Jan 23 23:53:05.481010 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3241781038-merged.mount: Deactivated successfully. Jan 23 23:53:05.482351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 23:53:05.486946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:05.490746 dockerd[2396]: time="2026-01-23T23:53:05.488933435Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:53:05.490746 dockerd[2396]: time="2026-01-23T23:53:05.489043475Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:53:05.490746 dockerd[2396]: time="2026-01-23T23:53:05.489145475Z" level=info msg="Daemon has completed initialization" Jan 23 23:53:05.722487 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:53:05.723400 dockerd[2396]: time="2026-01-23T23:53:05.722292091Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:53:05.757052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:53:05.758888 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:53:05.796174 kubelet[2539]: E0123 23:53:05.796047 2539 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:53:05.799974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:53:05.800152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:53:06.608266 containerd[1834]: time="2026-01-23T23:53:06.608038932Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 23:53:07.375433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4139249125.mount: Deactivated successfully. 
Jan 23 23:53:08.370838 containerd[1834]: time="2026-01-23T23:53:08.370035075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:08.373461 containerd[1834]: time="2026-01-23T23:53:08.373430042Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 23 23:53:08.376026 containerd[1834]: time="2026-01-23T23:53:08.375952287Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:08.380717 containerd[1834]: time="2026-01-23T23:53:08.380680417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:08.381959 containerd[1834]: time="2026-01-23T23:53:08.381764100Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.773686688s" Jan 23 23:53:08.381959 containerd[1834]: time="2026-01-23T23:53:08.381797580Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 23:53:08.382756 containerd[1834]: time="2026-01-23T23:53:08.382725102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 23:53:09.461213 containerd[1834]: time="2026-01-23T23:53:09.461163977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:09.463420 containerd[1834]: time="2026-01-23T23:53:09.463383500Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 23 23:53:09.466982 containerd[1834]: time="2026-01-23T23:53:09.466934904Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:09.473088 containerd[1834]: time="2026-01-23T23:53:09.471968311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:09.473088 containerd[1834]: time="2026-01-23T23:53:09.472965673Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.090107291s" Jan 23 23:53:09.473088 containerd[1834]: time="2026-01-23T23:53:09.472993633Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 23:53:09.474382 containerd[1834]: time="2026-01-23T23:53:09.474354394Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 23:53:10.498635 containerd[1834]: time="2026-01-23T23:53:10.498574690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:10.500946 containerd[1834]: 
time="2026-01-23T23:53:10.500730333Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 23 23:53:10.503210 containerd[1834]: time="2026-01-23T23:53:10.503168376Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:10.507740 containerd[1834]: time="2026-01-23T23:53:10.507692662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:10.508854 containerd[1834]: time="2026-01-23T23:53:10.508814864Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.03441179s" Jan 23 23:53:10.508981 containerd[1834]: time="2026-01-23T23:53:10.508855104Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 23:53:10.510093 containerd[1834]: time="2026-01-23T23:53:10.509926785Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 23:53:11.546814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2415703755.mount: Deactivated successfully. 
Jan 23 23:53:11.876291 containerd[1834]: time="2026-01-23T23:53:11.876240860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:11.878160 containerd[1834]: time="2026-01-23T23:53:11.878130463Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 23:53:11.880876 containerd[1834]: time="2026-01-23T23:53:11.880851066Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:11.884579 containerd[1834]: time="2026-01-23T23:53:11.884534671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:11.885264 containerd[1834]: time="2026-01-23T23:53:11.885097432Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.375138127s" Jan 23 23:53:11.885264 containerd[1834]: time="2026-01-23T23:53:11.885133952Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 23:53:11.885880 containerd[1834]: time="2026-01-23T23:53:11.885855553Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 23:53:12.579018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3840504988.mount: Deactivated successfully. 
Jan 23 23:53:13.815613 containerd[1834]: time="2026-01-23T23:53:13.815568618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:13.817523 containerd[1834]: time="2026-01-23T23:53:13.817495941Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 23 23:53:13.820123 containerd[1834]: time="2026-01-23T23:53:13.820083506Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:13.827825 containerd[1834]: time="2026-01-23T23:53:13.826869279Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.940982846s" Jan 23 23:53:13.827825 containerd[1834]: time="2026-01-23T23:53:13.826909039Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 23:53:13.827825 containerd[1834]: time="2026-01-23T23:53:13.827587920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:13.828281 containerd[1834]: time="2026-01-23T23:53:13.828108081Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:53:14.373279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2407850147.mount: Deactivated successfully. 
Jan 23 23:53:14.392043 containerd[1834]: time="2026-01-23T23:53:14.391997307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:14.394393 containerd[1834]: time="2026-01-23T23:53:14.394229231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 23:53:14.396823 containerd[1834]: time="2026-01-23T23:53:14.396760396Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:14.400940 containerd[1834]: time="2026-01-23T23:53:14.400870564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:14.402051 containerd[1834]: time="2026-01-23T23:53:14.401513245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 573.375764ms" Jan 23 23:53:14.402051 containerd[1834]: time="2026-01-23T23:53:14.401545845Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:53:14.402160 containerd[1834]: time="2026-01-23T23:53:14.402032206Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 23:53:15.000526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1379094015.mount: Deactivated successfully. Jan 23 23:53:15.419076 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jan 23 23:53:15.851198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 23:53:15.856962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:17.355972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:17.359888 (kubelet)[2743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:53:17.394017 kubelet[2743]: E0123 23:53:17.393966 2743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:53:17.396056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:53:17.396190 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:53:18.231721 update_engine[1808]: I20260123 23:53:18.231556 1808 update_attempter.cc:509] Updating boot flags... 
Jan 23 23:53:18.289225 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2767) Jan 23 23:53:18.682392 containerd[1834]: time="2026-01-23T23:53:18.682346944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:18.684955 containerd[1834]: time="2026-01-23T23:53:18.684925107Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 23 23:53:18.687385 containerd[1834]: time="2026-01-23T23:53:18.687336751Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:18.691607 containerd[1834]: time="2026-01-23T23:53:18.691563677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:18.693020 containerd[1834]: time="2026-01-23T23:53:18.692771479Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.290675913s" Jan 23 23:53:18.693020 containerd[1834]: time="2026-01-23T23:53:18.692812799Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 23:53:22.956226 waagent[2033]: 2026-01-23T23:53:22.956166Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 23:53:22.964463 waagent[2033]: 2026-01-23T23:53:22.964406Z INFO ExtHandler Jan 23 23:53:22.964547 waagent[2033]: 
2026-01-23T23:53:22.964503Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 23:53:23.027174 waagent[2033]: 2026-01-23T23:53:23.027127Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:53:23.094810 waagent[2033]: 2026-01-23T23:53:23.093156Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AC46AE8EDB9292E37B510F226AA69595BB4A0A1D', 'hasPrivateKey': True} Jan 23 23:53:23.094810 waagent[2033]: 2026-01-23T23:53:23.093695Z INFO ExtHandler Fetch goal state completed Jan 23 23:53:23.094810 waagent[2033]: 2026-01-23T23:53:23.094092Z INFO ExtHandler ExtHandler Jan 23 23:53:23.094810 waagent[2033]: 2026-01-23T23:53:23.094162Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 207950cd-6180-4354-9bdb-895c4254fd04 correlation cef49254-3829-4158-9543-bb8de3d64675 created: 2026-01-23T23:53:18.648796Z] Jan 23 23:53:23.094810 waagent[2033]: 2026-01-23T23:53:23.094503Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 23:53:23.095053 waagent[2033]: 2026-01-23T23:53:23.095014Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 23:53:25.395791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:25.402998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:25.432340 systemd[1]: Reloading requested from client PID 2829 ('systemctl') (unit session-9.scope)... Jan 23 23:53:25.432353 systemd[1]: Reloading... Jan 23 23:53:25.521853 zram_generator::config[2875]: No configuration found. Jan 23 23:53:25.625813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:53:25.702423 systemd[1]: Reloading finished in 269 ms. 
Jan 23 23:53:25.753080 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:53:25.753332 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:25.759234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:25.924500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:25.938096 (kubelet)[2949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:53:25.969149 kubelet[2949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:53:25.969149 kubelet[2949]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:53:25.969149 kubelet[2949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 23:53:25.969482 kubelet[2949]: I0123 23:53:25.969376 2949 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:53:26.660824 kubelet[2949]: I0123 23:53:26.659514 2949 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:53:26.660824 kubelet[2949]: I0123 23:53:26.659543 2949 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:53:26.660824 kubelet[2949]: I0123 23:53:26.659822 2949 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:53:26.685208 kubelet[2949]: I0123 23:53:26.685178 2949 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:53:26.685426 kubelet[2949]: E0123 23:53:26.685398 2949 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:26.694539 kubelet[2949]: E0123 23:53:26.694453 2949 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:53:26.694539 kubelet[2949]: I0123 23:53:26.694537 2949 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:53:26.697267 kubelet[2949]: I0123 23:53:26.697250 2949 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:53:26.698166 kubelet[2949]: I0123 23:53:26.698132 2949 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:53:26.698327 kubelet[2949]: I0123 23:53:26.698168 2949 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2167bbe937","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 23 23:53:26.698407 kubelet[2949]: I0123 23:53:26.698336 2949 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 23 23:53:26.698407 kubelet[2949]: I0123 23:53:26.698345 2949 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:53:26.698485 kubelet[2949]: I0123 23:53:26.698470 2949 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:53:26.701167 kubelet[2949]: I0123 23:53:26.701151 2949 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:53:26.701212 kubelet[2949]: I0123 23:53:26.701170 2949 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:53:26.701212 kubelet[2949]: I0123 23:53:26.701189 2949 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:53:26.701212 kubelet[2949]: I0123 23:53:26.701199 2949 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:53:26.706024 kubelet[2949]: W0123 23:53:26.704962 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 23 23:53:26.706024 kubelet[2949]: E0123 23:53:26.705019 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:26.706024 kubelet[2949]: W0123 23:53:26.705080 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2167bbe937&limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 23 23:53:26.706024 kubelet[2949]: E0123 23:53:26.705106 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2167bbe937&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:26.706442 kubelet[2949]: I0123 23:53:26.706285 2949 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:53:26.707829 kubelet[2949]: I0123 23:53:26.707244 2949 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:53:26.707829 kubelet[2949]: W0123 23:53:26.707304 2949 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:53:26.707829 kubelet[2949]: I0123 23:53:26.707829 2949 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:53:26.707956 kubelet[2949]: I0123 23:53:26.707855 2949 server.go:1287] "Started kubelet" Jan 23 23:53:26.712825 kubelet[2949]: I0123 23:53:26.712008 2949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:53:26.712825 kubelet[2949]: I0123 23:53:26.712102 2949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:53:26.712825 kubelet[2949]: I0123 23:53:26.712343 2949 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:53:26.712825 kubelet[2949]: I0123 23:53:26.712403 2949 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:53:26.713176 kubelet[2949]: I0123 23:53:26.713148 2949 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:53:26.718795 kubelet[2949]: I0123 23:53:26.718771 2949 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:53:26.720640 
kubelet[2949]: E0123 23:53:26.720117 2949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-2167bbe937.188d814f6a08de84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-2167bbe937,UID:ci-4081.3.6-n-2167bbe937,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-2167bbe937,},FirstTimestamp:2026-01-23 23:53:26.707838596 +0000 UTC m=+0.766978318,LastTimestamp:2026-01-23 23:53:26.707838596 +0000 UTC m=+0.766978318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-2167bbe937,}" Jan 23 23:53:26.720640 kubelet[2949]: E0123 23:53:26.720475 2949 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-2167bbe937\" not found" Jan 23 23:53:26.720640 kubelet[2949]: I0123 23:53:26.720506 2949 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:53:26.720640 kubelet[2949]: I0123 23:53:26.720634 2949 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:53:26.720823 kubelet[2949]: I0123 23:53:26.720687 2949 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:53:26.721790 kubelet[2949]: W0123 23:53:26.721030 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 23 23:53:26.721790 kubelet[2949]: E0123 23:53:26.721073 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:26.721790 kubelet[2949]: E0123 23:53:26.721522 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2167bbe937?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="200ms" Jan 23 23:53:26.722005 kubelet[2949]: I0123 23:53:26.721928 2949 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:53:26.722049 kubelet[2949]: I0123 23:53:26.722007 2949 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:53:26.723167 kubelet[2949]: I0123 23:53:26.723142 2949 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:53:26.737809 kubelet[2949]: E0123 23:53:26.737775 2949 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:53:26.762952 kubelet[2949]: I0123 23:53:26.762906 2949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:53:26.764211 kubelet[2949]: I0123 23:53:26.764192 2949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:53:26.764389 kubelet[2949]: I0123 23:53:26.764275 2949 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:53:26.764389 kubelet[2949]: I0123 23:53:26.764297 2949 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 23:53:26.764389 kubelet[2949]: I0123 23:53:26.764304 2949 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:53:26.764505 kubelet[2949]: E0123 23:53:26.764488 2949 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:53:26.766724 kubelet[2949]: W0123 23:53:26.766703 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 23 23:53:26.766983 kubelet[2949]: E0123 23:53:26.766944 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:26.779979 kubelet[2949]: I0123 23:53:26.779957 2949 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:53:26.779979 kubelet[2949]: I0123 23:53:26.779973 2949 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:53:26.780098 kubelet[2949]: I0123 23:53:26.779990 2949 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:53:26.785312 kubelet[2949]: I0123 23:53:26.785291 2949 policy_none.go:49] "None policy: Start" Jan 23 23:53:26.785312 kubelet[2949]: I0123 23:53:26.785315 2949 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:53:26.785398 kubelet[2949]: I0123 23:53:26.785325 2949 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:53:26.792268 kubelet[2949]: I0123 23:53:26.792246 2949 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:53:26.792437 
kubelet[2949]: I0123 23:53:26.792421 2949 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:53:26.792468 kubelet[2949]: I0123 23:53:26.792436 2949 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:53:26.793703 kubelet[2949]: I0123 23:53:26.793684 2949 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:53:26.796552 kubelet[2949]: E0123 23:53:26.796533 2949 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:53:26.796618 kubelet[2949]: E0123 23:53:26.796575 2949 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-2167bbe937\" not found" Jan 23 23:53:26.870841 kubelet[2949]: E0123 23:53:26.870654 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.872519 kubelet[2949]: E0123 23:53:26.872498 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.874807 kubelet[2949]: E0123 23:53:26.874780 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.894121 kubelet[2949]: I0123 23:53:26.894102 2949 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.894611 kubelet[2949]: E0123 23:53:26.894582 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" 
node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.922829 kubelet[2949]: E0123 23:53:26.921903 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2167bbe937?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="400ms" Jan 23 23:53:26.922829 kubelet[2949]: I0123 23:53:26.922001 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0feafe46c3daed1ff814596687764798-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2167bbe937\" (UID: \"0feafe46c3daed1ff814596687764798\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.922829 kubelet[2949]: I0123 23:53:26.922024 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0feafe46c3daed1ff814596687764798-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2167bbe937\" (UID: \"0feafe46c3daed1ff814596687764798\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.922829 kubelet[2949]: I0123 23:53:26.922044 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0feafe46c3daed1ff814596687764798-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2167bbe937\" (UID: \"0feafe46c3daed1ff814596687764798\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.922829 kubelet[2949]: I0123 23:53:26.922061 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68313a60484f64fef2453e5582ab187b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" (UID: \"68313a60484f64fef2453e5582ab187b\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.923010 kubelet[2949]: I0123 23:53:26.922077 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68313a60484f64fef2453e5582ab187b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" (UID: \"68313a60484f64fef2453e5582ab187b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.923010 kubelet[2949]: I0123 23:53:26.922091 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68313a60484f64fef2453e5582ab187b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" (UID: \"68313a60484f64fef2453e5582ab187b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:26.923010 kubelet[2949]: I0123 23:53:26.922107 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68313a60484f64fef2453e5582ab187b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" (UID: \"68313a60484f64fef2453e5582ab187b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:27.022605 kubelet[2949]: I0123 23:53:27.022519 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68313a60484f64fef2453e5582ab187b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" (UID: \"68313a60484f64fef2453e5582ab187b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:27.022605 kubelet[2949]: I0123 23:53:27.022558 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f78377dee19413f1991ec3730f15a116-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2167bbe937\" (UID: \"f78377dee19413f1991ec3730f15a116\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:27.096928 kubelet[2949]: I0123 23:53:27.096574 2949 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:27.096928 kubelet[2949]: E0123 23:53:27.096881 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:27.172411 containerd[1834]: time="2026-01-23T23:53:27.172372615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2167bbe937,Uid:0feafe46c3daed1ff814596687764798,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:27.174455 containerd[1834]: time="2026-01-23T23:53:27.174280857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2167bbe937,Uid:68313a60484f64fef2453e5582ab187b,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:27.175706 containerd[1834]: time="2026-01-23T23:53:27.175679819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2167bbe937,Uid:f78377dee19413f1991ec3730f15a116,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:27.322419 kubelet[2949]: E0123 23:53:27.322378 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2167bbe937?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="800ms" Jan 23 23:53:27.498856 kubelet[2949]: I0123 23:53:27.498421 2949 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:27.498856 kubelet[2949]: E0123 23:53:27.498697 
2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:27.745871 kubelet[2949]: W0123 23:53:27.745815 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2167bbe937&limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 23 23:53:27.746004 kubelet[2949]: E0123 23:53:27.745880 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2167bbe937&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:27.783239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588014012.mount: Deactivated successfully. 
Jan 23 23:53:27.804340 containerd[1834]: time="2026-01-23T23:53:27.804293115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:27.806496 containerd[1834]: time="2026-01-23T23:53:27.806463318Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:53:27.808525 containerd[1834]: time="2026-01-23T23:53:27.808494761Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:27.811297 containerd[1834]: time="2026-01-23T23:53:27.810579884Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:27.812852 containerd[1834]: time="2026-01-23T23:53:27.812813607Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:53:27.815465 containerd[1834]: time="2026-01-23T23:53:27.815429891Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:27.817253 containerd[1834]: time="2026-01-23T23:53:27.817195613Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:53:27.820549 containerd[1834]: time="2026-01-23T23:53:27.820506538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:53:27.821653 
containerd[1834]: time="2026-01-23T23:53:27.821362819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 647.020642ms" Jan 23 23:53:27.822359 containerd[1834]: time="2026-01-23T23:53:27.822327660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 649.880045ms" Jan 23 23:53:27.829948 containerd[1834]: time="2026-01-23T23:53:27.829795190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 654.064531ms" Jan 23 23:53:28.070082 kubelet[2949]: E0123 23:53:28.069910 2949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-2167bbe937.188d814f6a08de84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-2167bbe937,UID:ci-4081.3.6-n-2167bbe937,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-2167bbe937,},FirstTimestamp:2026-01-23 23:53:26.707838596 +0000 UTC m=+0.766978318,LastTimestamp:2026-01-23 23:53:26.707838596 +0000 UTC 
m=+0.766978318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-2167bbe937,}" Jan 23 23:53:28.073112 kubelet[2949]: W0123 23:53:28.073063 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 23 23:53:28.073198 kubelet[2949]: E0123 23:53:28.073123 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:28.122869 kubelet[2949]: E0123 23:53:28.122831 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2167bbe937?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="1.6s" Jan 23 23:53:28.165654 kubelet[2949]: W0123 23:53:28.165604 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 23 23:53:28.165749 kubelet[2949]: E0123 23:53:28.165664 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:28.300412 kubelet[2949]: I0123 
23:53:28.300376 2949 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:28.300727 kubelet[2949]: E0123 23:53:28.300688 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:28.358486 kubelet[2949]: W0123 23:53:28.358450 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.22:6443: connect: connection refused Jan 23 23:53:28.358615 kubelet[2949]: E0123 23:53:28.358494 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:53:28.402604 containerd[1834]: time="2026-01-23T23:53:28.402304971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:28.402604 containerd[1834]: time="2026-01-23T23:53:28.402367251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:28.402604 containerd[1834]: time="2026-01-23T23:53:28.402381771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:28.403133 containerd[1834]: time="2026-01-23T23:53:28.402500692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:28.404037 containerd[1834]: time="2026-01-23T23:53:28.403718213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:28.404037 containerd[1834]: time="2026-01-23T23:53:28.403763813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:28.404037 containerd[1834]: time="2026-01-23T23:53:28.403997334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:28.404681 containerd[1834]: time="2026-01-23T23:53:28.404487294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:28.407014 containerd[1834]: time="2026-01-23T23:53:28.406330937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:28.408828 containerd[1834]: time="2026-01-23T23:53:28.408622500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:28.408828 containerd[1834]: time="2026-01-23T23:53:28.408649860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:28.408828 containerd[1834]: time="2026-01-23T23:53:28.408743100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:28.457634 containerd[1834]: time="2026-01-23T23:53:28.457578327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2167bbe937,Uid:0feafe46c3daed1ff814596687764798,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d736d50995086db34e6ce9fc94b70bafaf91e3c573ac31be6dc631f1db18553\"" Jan 23 23:53:28.464826 containerd[1834]: time="2026-01-23T23:53:28.464416776Z" level=info msg="CreateContainer within sandbox \"3d736d50995086db34e6ce9fc94b70bafaf91e3c573ac31be6dc631f1db18553\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:53:28.486361 containerd[1834]: time="2026-01-23T23:53:28.486184406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2167bbe937,Uid:f78377dee19413f1991ec3730f15a116,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e05d6c586debc67df29d5dcad7f46796e930012d9fa24fe1a0de731dd63de78\"" Jan 23 23:53:28.489478 containerd[1834]: time="2026-01-23T23:53:28.489279690Z" level=info msg="CreateContainer within sandbox \"9e05d6c586debc67df29d5dcad7f46796e930012d9fa24fe1a0de731dd63de78\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:53:28.492718 containerd[1834]: time="2026-01-23T23:53:28.492683175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2167bbe937,Uid:68313a60484f64fef2453e5582ab187b,Namespace:kube-system,Attempt:0,} returns sandbox id \"51a6129bf5ef405395bdbca309a65e26312d858a0c2652c82c4627157d6a3ef0\"" Jan 23 23:53:28.495253 containerd[1834]: time="2026-01-23T23:53:28.495203418Z" level=info msg="CreateContainer within sandbox \"51a6129bf5ef405395bdbca309a65e26312d858a0c2652c82c4627157d6a3ef0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:53:28.501428 containerd[1834]: time="2026-01-23T23:53:28.501391066Z" level=info msg="CreateContainer within sandbox 
\"3d736d50995086db34e6ce9fc94b70bafaf91e3c573ac31be6dc631f1db18553\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca85edc4e8ea7cd17194b4d3c05c2d9b6bf67f55fbb35b0885a647cbc61f80b0\"" Jan 23 23:53:28.503025 containerd[1834]: time="2026-01-23T23:53:28.501997427Z" level=info msg="StartContainer for \"ca85edc4e8ea7cd17194b4d3c05c2d9b6bf67f55fbb35b0885a647cbc61f80b0\"" Jan 23 23:53:28.541898 containerd[1834]: time="2026-01-23T23:53:28.541834642Z" level=info msg="CreateContainer within sandbox \"51a6129bf5ef405395bdbca309a65e26312d858a0c2652c82c4627157d6a3ef0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cc19c9856232b5c39e894e778a285aa55ea663d80a765f58f66f5d69799ba091\"" Jan 23 23:53:28.542759 containerd[1834]: time="2026-01-23T23:53:28.542730123Z" level=info msg="StartContainer for \"cc19c9856232b5c39e894e778a285aa55ea663d80a765f58f66f5d69799ba091\"" Jan 23 23:53:28.544972 containerd[1834]: time="2026-01-23T23:53:28.544936526Z" level=info msg="CreateContainer within sandbox \"9e05d6c586debc67df29d5dcad7f46796e930012d9fa24fe1a0de731dd63de78\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5bd06a5ef45c635583fc8dda9cf46ca1bfdd4772ef09d38bc764c71fce1b3023\"" Jan 23 23:53:28.546387 containerd[1834]: time="2026-01-23T23:53:28.546358288Z" level=info msg="StartContainer for \"5bd06a5ef45c635583fc8dda9cf46ca1bfdd4772ef09d38bc764c71fce1b3023\"" Jan 23 23:53:28.573513 containerd[1834]: time="2026-01-23T23:53:28.572635844Z" level=info msg="StartContainer for \"ca85edc4e8ea7cd17194b4d3c05c2d9b6bf67f55fbb35b0885a647cbc61f80b0\" returns successfully" Jan 23 23:53:28.634572 containerd[1834]: time="2026-01-23T23:53:28.634375688Z" level=info msg="StartContainer for \"cc19c9856232b5c39e894e778a285aa55ea663d80a765f58f66f5d69799ba091\" returns successfully" Jan 23 23:53:28.652290 containerd[1834]: time="2026-01-23T23:53:28.652250552Z" level=info msg="StartContainer for 
\"5bd06a5ef45c635583fc8dda9cf46ca1bfdd4772ef09d38bc764c71fce1b3023\" returns successfully" Jan 23 23:53:28.784572 kubelet[2949]: E0123 23:53:28.784181 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:28.784572 kubelet[2949]: E0123 23:53:28.784221 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:28.791240 kubelet[2949]: E0123 23:53:28.791218 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:29.120369 waagent[2033]: 2026-01-23T23:53:29.119201Z INFO ExtHandler Jan 23 23:53:29.120369 waagent[2033]: 2026-01-23T23:53:29.119320Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 36fe5125-008e-46d7-9988-7a561af93784 eTag: 7481803515655240463 source: Fabric] Jan 23 23:53:29.120369 waagent[2033]: 2026-01-23T23:53:29.119672Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 23:53:29.792182 kubelet[2949]: E0123 23:53:29.792139 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:29.794736 kubelet[2949]: E0123 23:53:29.793310 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:29.794736 kubelet[2949]: E0123 23:53:29.793578 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:29.904867 kubelet[2949]: I0123 23:53:29.902813 2949 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:30.792557 kubelet[2949]: E0123 23:53:30.792407 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:31.088109 kubelet[2949]: E0123 23:53:31.088006 2949 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-2167bbe937\" not found" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:31.173368 kubelet[2949]: I0123 23:53:31.173114 2949 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:31.221486 kubelet[2949]: I0123 23:53:31.221449 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:31.227785 kubelet[2949]: E0123 23:53:31.227756 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:31.227785 kubelet[2949]: I0123 23:53:31.227783 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:31.229253 kubelet[2949]: E0123 23:53:31.229230 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2167bbe937\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:31.229253 kubelet[2949]: I0123 23:53:31.229254 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:31.231692 kubelet[2949]: E0123 23:53:31.231654 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2167bbe937\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:31.705355 kubelet[2949]: I0123 23:53:31.705060 2949 apiserver.go:52] "Watching apiserver" Jan 23 23:53:31.721455 kubelet[2949]: I0123 23:53:31.721431 2949 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:53:33.483412 systemd[1]: Reloading requested from client PID 3223 ('systemctl') (unit session-9.scope)... Jan 23 23:53:33.483426 systemd[1]: Reloading... Jan 23 23:53:33.567828 zram_generator::config[3266]: No configuration found. Jan 23 23:53:33.681751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:53:33.782002 systemd[1]: Reloading finished in 298 ms. Jan 23 23:53:33.812966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:33.830675 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 23 23:53:33.831059 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:33.842012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:53:34.012598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:53:34.022148 (kubelet)[3337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:53:34.069462 kubelet[3337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:53:34.069462 kubelet[3337]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:53:34.069462 kubelet[3337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:53:34.069462 kubelet[3337]: I0123 23:53:34.069209 3337 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:53:34.078255 kubelet[3337]: I0123 23:53:34.076137 3337 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:53:34.078255 kubelet[3337]: I0123 23:53:34.076161 3337 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:53:34.078255 kubelet[3337]: I0123 23:53:34.076583 3337 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:53:34.083692 kubelet[3337]: I0123 23:53:34.083150 3337 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 23 23:53:34.086152 kubelet[3337]: I0123 23:53:34.086130 3337 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:53:34.092704 kubelet[3337]: E0123 23:53:34.092679 3337 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:53:34.092829 kubelet[3337]: I0123 23:53:34.092811 3337 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:53:34.096684 kubelet[3337]: I0123 23:53:34.096663 3337 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 23:53:34.097344 kubelet[3337]: I0123 23:53:34.097314 3337 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:53:34.097623 kubelet[3337]: I0123 23:53:34.097424 3337 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.6-n-2167bbe937","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 23 23:53:34.097761 kubelet[3337]: I0123 23:53:34.097750 3337 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:53:34.097824 kubelet[3337]: I0123 23:53:34.097816 3337 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:53:34.097920 kubelet[3337]: I0123 23:53:34.097912 3337 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:53:34.098114 kubelet[3337]: I0123 23:53:34.098104 3337 
kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:53:34.098696 kubelet[3337]: I0123 23:53:34.098681 3337 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:53:34.098846 kubelet[3337]: I0123 23:53:34.098836 3337 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:53:34.098958 kubelet[3337]: I0123 23:53:34.098948 3337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:53:34.106894 kubelet[3337]: I0123 23:53:34.106203 3337 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:53:34.106894 kubelet[3337]: I0123 23:53:34.106658 3337 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:53:34.109462 kubelet[3337]: I0123 23:53:34.107054 3337 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:53:34.109462 kubelet[3337]: I0123 23:53:34.107089 3337 server.go:1287] "Started kubelet" Jan 23 23:53:34.113051 kubelet[3337]: I0123 23:53:34.113034 3337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:53:34.122463 kubelet[3337]: I0123 23:53:34.122410 3337 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:53:34.123635 kubelet[3337]: I0123 23:53:34.123575 3337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:53:34.125117 kubelet[3337]: I0123 23:53:34.125093 3337 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:53:34.125838 kubelet[3337]: I0123 23:53:34.125823 3337 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:53:34.127053 kubelet[3337]: I0123 23:53:34.127026 3337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:53:34.131349 kubelet[3337]: 
I0123 23:53:34.131320 3337 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:53:34.137700 kubelet[3337]: I0123 23:53:34.137681 3337 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:53:34.137921 kubelet[3337]: I0123 23:53:34.137901 3337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:53:34.138188 kubelet[3337]: I0123 23:53:34.138163 3337 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:53:34.138321 kubelet[3337]: I0123 23:53:34.138308 3337 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:53:34.140847 kubelet[3337]: I0123 23:53:34.140128 3337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:53:34.141907 kubelet[3337]: I0123 23:53:34.141886 3337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:53:34.141989 kubelet[3337]: I0123 23:53:34.141911 3337 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:53:34.141989 kubelet[3337]: I0123 23:53:34.141931 3337 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:53:34.141989 kubelet[3337]: I0123 23:53:34.141938 3337 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:53:34.142081 kubelet[3337]: E0123 23:53:34.141998 3337 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:53:34.144563 kubelet[3337]: E0123 23:53:34.144543 3337 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:53:34.152487 kubelet[3337]: I0123 23:53:34.152423 3337 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:53:34.212329 kubelet[3337]: I0123 23:53:34.212305 3337 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:53:34.212452 kubelet[3337]: I0123 23:53:34.212440 3337 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:53:34.212527 kubelet[3337]: I0123 23:53:34.212519 3337 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:53:34.212728 kubelet[3337]: I0123 23:53:34.212716 3337 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:53:34.212794 kubelet[3337]: I0123 23:53:34.212773 3337 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:53:34.212868 kubelet[3337]: I0123 23:53:34.212860 3337 policy_none.go:49] "None policy: Start" Jan 23 23:53:34.212918 kubelet[3337]: I0123 23:53:34.212911 3337 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:53:34.212976 kubelet[3337]: I0123 23:53:34.212968 3337 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:53:34.213124 kubelet[3337]: I0123 23:53:34.213115 3337 state_mem.go:75] "Updated machine memory state" Jan 23 23:53:34.214208 kubelet[3337]: I0123 23:53:34.214191 3337 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:53:34.214431 kubelet[3337]: I0123 23:53:34.214419 3337 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:53:34.214513 kubelet[3337]: I0123 23:53:34.214487 3337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:53:34.215417 kubelet[3337]: I0123 23:53:34.215296 3337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:53:34.218896 kubelet[3337]: E0123 23:53:34.218384 3337 
eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:53:34.242586 kubelet[3337]: I0123 23:53:34.242554 3337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.242971 kubelet[3337]: I0123 23:53:34.242558 3337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.242971 kubelet[3337]: I0123 23:53:34.242652 3337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.252336 kubelet[3337]: W0123 23:53:34.252303 3337 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:53:34.257296 kubelet[3337]: W0123 23:53:34.257277 3337 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:53:34.257654 kubelet[3337]: W0123 23:53:34.257459 3337 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:53:34.320140 kubelet[3337]: I0123 23:53:34.320037 3337 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.336446 kubelet[3337]: I0123 23:53:34.336413 3337 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.336572 kubelet[3337]: I0123 23:53:34.336505 3337 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.340080 kubelet[3337]: I0123 23:53:34.340031 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0feafe46c3daed1ff814596687764798-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2167bbe937\" (UID: \"0feafe46c3daed1ff814596687764798\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.340080 kubelet[3337]: I0123 23:53:34.340079 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68313a60484f64fef2453e5582ab187b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" (UID: \"68313a60484f64fef2453e5582ab187b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.340844 kubelet[3337]: I0123 23:53:34.340298 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68313a60484f64fef2453e5582ab187b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" (UID: \"68313a60484f64fef2453e5582ab187b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.340844 kubelet[3337]: I0123 23:53:34.340327 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68313a60484f64fef2453e5582ab187b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" (UID: \"68313a60484f64fef2453e5582ab187b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.340844 kubelet[3337]: I0123 23:53:34.340394 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0feafe46c3daed1ff814596687764798-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2167bbe937\" (UID: \"0feafe46c3daed1ff814596687764798\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 
23 23:53:34.340844 kubelet[3337]: I0123 23:53:34.340417 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0feafe46c3daed1ff814596687764798-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2167bbe937\" (UID: \"0feafe46c3daed1ff814596687764798\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.340844 kubelet[3337]: I0123 23:53:34.340468 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f78377dee19413f1991ec3730f15a116-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2167bbe937\" (UID: \"f78377dee19413f1991ec3730f15a116\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.341188 kubelet[3337]: I0123 23:53:34.341073 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68313a60484f64fef2453e5582ab187b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" (UID: \"68313a60484f64fef2453e5582ab187b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.341188 kubelet[3337]: I0123 23:53:34.341146 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68313a60484f64fef2453e5582ab187b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2167bbe937\" (UID: \"68313a60484f64fef2453e5582ab187b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:34.701697 sudo[3368]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 23:53:34.702383 sudo[3368]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 23:53:35.103830 kubelet[3337]: I0123 
23:53:35.103785 3337 apiserver.go:52] "Watching apiserver" Jan 23 23:53:35.138354 kubelet[3337]: I0123 23:53:35.138309 3337 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:53:35.142553 sudo[3368]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:35.188673 kubelet[3337]: I0123 23:53:35.188633 3337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:35.203295 kubelet[3337]: W0123 23:53:35.203161 3337 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:53:35.203888 kubelet[3337]: E0123 23:53:35.203639 3337 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2167bbe937\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" Jan 23 23:53:35.231747 kubelet[3337]: I0123 23:53:35.231043 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2167bbe937" podStartSLOduration=1.231025083 podStartE2EDuration="1.231025083s" podCreationTimestamp="2026-01-23 23:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:53:35.219029065 +0000 UTC m=+1.193907634" watchObservedRunningTime="2026-01-23 23:53:35.231025083 +0000 UTC m=+1.205903652" Jan 23 23:53:35.249189 kubelet[3337]: I0123 23:53:35.248904 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2167bbe937" podStartSLOduration=1.248877228 podStartE2EDuration="1.248877228s" podCreationTimestamp="2026-01-23 23:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:53:35.231972804 +0000 
UTC m=+1.206851413" watchObservedRunningTime="2026-01-23 23:53:35.248877228 +0000 UTC m=+1.223755797" Jan 23 23:53:35.249189 kubelet[3337]: I0123 23:53:35.249059 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2167bbe937" podStartSLOduration=1.249052908 podStartE2EDuration="1.249052908s" podCreationTimestamp="2026-01-23 23:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:53:35.248681588 +0000 UTC m=+1.223560157" watchObservedRunningTime="2026-01-23 23:53:35.249052908 +0000 UTC m=+1.223931477" Jan 23 23:53:37.464995 sudo[2381]: pam_unix(sudo:session): session closed for user root Jan 23 23:53:37.537239 sshd[2377]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:37.540552 systemd[1]: sshd@6-10.200.20.22:22-10.200.16.10:40726.service: Deactivated successfully. Jan 23 23:53:37.543628 systemd-logind[1803]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:53:37.545172 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:53:37.547258 systemd-logind[1803]: Removed session 9. Jan 23 23:53:38.262953 kubelet[3337]: I0123 23:53:38.262894 3337 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:53:38.263741 containerd[1834]: time="2026-01-23T23:53:38.263603445Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 23:53:38.264090 kubelet[3337]: I0123 23:53:38.263751 3337 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:53:39.469758 kubelet[3337]: I0123 23:53:39.469614 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d63ec028-902a-4b5c-86cd-2f04ba525937-lib-modules\") pod \"kube-proxy-xs5hd\" (UID: \"d63ec028-902a-4b5c-86cd-2f04ba525937\") " pod="kube-system/kube-proxy-xs5hd" Jan 23 23:53:39.469758 kubelet[3337]: I0123 23:53:39.469704 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2jl\" (UniqueName: \"kubernetes.io/projected/d63ec028-902a-4b5c-86cd-2f04ba525937-kube-api-access-tn2jl\") pod \"kube-proxy-xs5hd\" (UID: \"d63ec028-902a-4b5c-86cd-2f04ba525937\") " pod="kube-system/kube-proxy-xs5hd" Jan 23 23:53:39.469758 kubelet[3337]: I0123 23:53:39.469738 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-clustermesh-secrets\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.472576 kubelet[3337]: I0123 23:53:39.471424 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-hubble-tls\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.472576 kubelet[3337]: I0123 23:53:39.471476 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-run\") pod \"cilium-nmvdh\" (UID: 
\"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.472576 kubelet[3337]: I0123 23:53:39.471494 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-bpf-maps\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.472576 kubelet[3337]: I0123 23:53:39.471511 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-lib-modules\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.472576 kubelet[3337]: I0123 23:53:39.471528 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6qp5\" (UniqueName: \"kubernetes.io/projected/a44ac17c-d79d-43b9-9a86-473e3ca90d65-kube-api-access-n6qp5\") pod \"cilium-operator-6c4d7847fc-hbldg\" (UID: \"a44ac17c-d79d-43b9-9a86-473e3ca90d65\") " pod="kube-system/cilium-operator-6c4d7847fc-hbldg" Jan 23 23:53:39.472760 kubelet[3337]: I0123 23:53:39.471553 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-cgroup\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.472760 kubelet[3337]: I0123 23:53:39.471569 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cni-path\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.472760 
kubelet[3337]: I0123 23:53:39.471586 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-host-proc-sys-kernel\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.472760 kubelet[3337]: I0123 23:53:39.471601 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-etc-cni-netd\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.472760 kubelet[3337]: I0123 23:53:39.471623 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-xtables-lock\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.472760 kubelet[3337]: I0123 23:53:39.471640 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-hostproc\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.473522 kubelet[3337]: I0123 23:53:39.471658 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d63ec028-902a-4b5c-86cd-2f04ba525937-kube-proxy\") pod \"kube-proxy-xs5hd\" (UID: \"d63ec028-902a-4b5c-86cd-2f04ba525937\") " pod="kube-system/kube-proxy-xs5hd" Jan 23 23:53:39.473522 kubelet[3337]: I0123 23:53:39.471674 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d63ec028-902a-4b5c-86cd-2f04ba525937-xtables-lock\") pod \"kube-proxy-xs5hd\" (UID: \"d63ec028-902a-4b5c-86cd-2f04ba525937\") " pod="kube-system/kube-proxy-xs5hd" Jan 23 23:53:39.473522 kubelet[3337]: I0123 23:53:39.471839 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-config-path\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.473522 kubelet[3337]: I0123 23:53:39.471865 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-host-proc-sys-net\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.473522 kubelet[3337]: I0123 23:53:39.471885 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwmn7\" (UniqueName: \"kubernetes.io/projected/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-kube-api-access-cwmn7\") pod \"cilium-nmvdh\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " pod="kube-system/cilium-nmvdh" Jan 23 23:53:39.473738 kubelet[3337]: I0123 23:53:39.473310 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a44ac17c-d79d-43b9-9a86-473e3ca90d65-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hbldg\" (UID: \"a44ac17c-d79d-43b9-9a86-473e3ca90d65\") " pod="kube-system/cilium-operator-6c4d7847fc-hbldg" Jan 23 23:53:39.680526 containerd[1834]: time="2026-01-23T23:53:39.680488850Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-xs5hd,Uid:d63ec028-902a-4b5c-86cd-2f04ba525937,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:39.688788 containerd[1834]: time="2026-01-23T23:53:39.688528741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nmvdh,Uid:d2f573b0-3811-4279-bc7d-3b16e6d8f5f6,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:39.726282 containerd[1834]: time="2026-01-23T23:53:39.726091357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:39.726282 containerd[1834]: time="2026-01-23T23:53:39.726145477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:39.726282 containerd[1834]: time="2026-01-23T23:53:39.726156197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:39.726282 containerd[1834]: time="2026-01-23T23:53:39.726230397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:39.749890 containerd[1834]: time="2026-01-23T23:53:39.749676951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hbldg,Uid:a44ac17c-d79d-43b9-9a86-473e3ca90d65,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:39.756209 containerd[1834]: time="2026-01-23T23:53:39.755812760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:39.756209 containerd[1834]: time="2026-01-23T23:53:39.755881481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:39.756209 containerd[1834]: time="2026-01-23T23:53:39.755901081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:39.756209 containerd[1834]: time="2026-01-23T23:53:39.755987881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:39.765543 containerd[1834]: time="2026-01-23T23:53:39.765505695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xs5hd,Uid:d63ec028-902a-4b5c-86cd-2f04ba525937,Namespace:kube-system,Attempt:0,} returns sandbox id \"03e91da42bff2185decff704ad40e03a8fc7d19c305d72430c9438929464fec5\"" Jan 23 23:53:39.770852 containerd[1834]: time="2026-01-23T23:53:39.770815262Z" level=info msg="CreateContainer within sandbox \"03e91da42bff2185decff704ad40e03a8fc7d19c305d72430c9438929464fec5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:53:39.803682 containerd[1834]: time="2026-01-23T23:53:39.803635871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nmvdh,Uid:d2f573b0-3811-4279-bc7d-3b16e6d8f5f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\"" Jan 23 23:53:39.803885 containerd[1834]: time="2026-01-23T23:53:39.803081190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:39.803885 containerd[1834]: time="2026-01-23T23:53:39.803430310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:39.803885 containerd[1834]: time="2026-01-23T23:53:39.803450911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:39.803885 containerd[1834]: time="2026-01-23T23:53:39.803574391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:39.805688 containerd[1834]: time="2026-01-23T23:53:39.805659434Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 23:53:39.811206 containerd[1834]: time="2026-01-23T23:53:39.811167042Z" level=info msg="CreateContainer within sandbox \"03e91da42bff2185decff704ad40e03a8fc7d19c305d72430c9438929464fec5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e4c4fdfba6be2a5df3cca72a89ffa223917431d392657199ce50e1e4531be235\"" Jan 23 23:53:39.812238 containerd[1834]: time="2026-01-23T23:53:39.812127963Z" level=info msg="StartContainer for \"e4c4fdfba6be2a5df3cca72a89ffa223917431d392657199ce50e1e4531be235\"" Jan 23 23:53:39.870074 containerd[1834]: time="2026-01-23T23:53:39.870019008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hbldg,Uid:a44ac17c-d79d-43b9-9a86-473e3ca90d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"f67f04483ae5a79bc4e1d3da7246fbeccbeda35462c2a98ab76b7468917c73f0\"" Jan 23 23:53:39.881887 containerd[1834]: time="2026-01-23T23:53:39.880639304Z" level=info msg="StartContainer for \"e4c4fdfba6be2a5df3cca72a89ffa223917431d392657199ce50e1e4531be235\" returns successfully" Jan 23 23:53:40.216624 kubelet[3337]: I0123 23:53:40.216560 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xs5hd" podStartSLOduration=1.216391638 podStartE2EDuration="1.216391638s" podCreationTimestamp="2026-01-23 23:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:53:40.215518077 +0000 UTC m=+6.190396646" watchObservedRunningTime="2026-01-23 23:53:40.216391638 +0000 UTC m=+6.191270207" Jan 23 23:53:44.860764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount602457490.mount: Deactivated 
successfully. Jan 23 23:53:46.729107 containerd[1834]: time="2026-01-23T23:53:46.729053771Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:46.731402 containerd[1834]: time="2026-01-23T23:53:46.731243014Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 23 23:53:46.733872 containerd[1834]: time="2026-01-23T23:53:46.733839218Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:46.735505 containerd[1834]: time="2026-01-23T23:53:46.735393500Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.928458504s" Jan 23 23:53:46.735505 containerd[1834]: time="2026-01-23T23:53:46.735426900Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 23 23:53:46.738786 containerd[1834]: time="2026-01-23T23:53:46.737779663Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 23:53:46.739055 containerd[1834]: time="2026-01-23T23:53:46.739027465Z" level=info msg="CreateContainer within sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" for 
container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 23:53:46.766859 containerd[1834]: time="2026-01-23T23:53:46.766814782Z" level=info msg="CreateContainer within sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\"" Jan 23 23:53:46.768200 containerd[1834]: time="2026-01-23T23:53:46.768012384Z" level=info msg="StartContainer for \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\"" Jan 23 23:53:46.816218 containerd[1834]: time="2026-01-23T23:53:46.815718848Z" level=info msg="StartContainer for \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\" returns successfully" Jan 23 23:53:47.759504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02-rootfs.mount: Deactivated successfully. Jan 23 23:53:48.671034 containerd[1834]: time="2026-01-23T23:53:48.670031583Z" level=info msg="shim disconnected" id=f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02 namespace=k8s.io Jan 23 23:53:48.671034 containerd[1834]: time="2026-01-23T23:53:48.670603063Z" level=warning msg="cleaning up after shim disconnected" id=f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02 namespace=k8s.io Jan 23 23:53:48.671034 containerd[1834]: time="2026-01-23T23:53:48.670619023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:53:49.228613 containerd[1834]: time="2026-01-23T23:53:49.228562654Z" level=info msg="CreateContainer within sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 23:53:49.263899 containerd[1834]: time="2026-01-23T23:53:49.263857421Z" level=info msg="CreateContainer within sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" 
for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\"" Jan 23 23:53:49.264545 containerd[1834]: time="2026-01-23T23:53:49.264515822Z" level=info msg="StartContainer for \"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\"" Jan 23 23:53:49.314452 containerd[1834]: time="2026-01-23T23:53:49.314416889Z" level=info msg="StartContainer for \"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\" returns successfully" Jan 23 23:53:49.322042 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:53:49.322750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:53:49.322838 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:53:49.331099 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:53:49.349131 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:53:49.354760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106-rootfs.mount: Deactivated successfully. 
Jan 23 23:53:49.362714 containerd[1834]: time="2026-01-23T23:53:49.362658234Z" level=info msg="shim disconnected" id=e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106 namespace=k8s.io Jan 23 23:53:49.362714 containerd[1834]: time="2026-01-23T23:53:49.362712754Z" level=warning msg="cleaning up after shim disconnected" id=e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106 namespace=k8s.io Jan 23 23:53:49.362865 containerd[1834]: time="2026-01-23T23:53:49.362722554Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:53:50.151372 containerd[1834]: time="2026-01-23T23:53:50.150649215Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:50.152916 containerd[1834]: time="2026-01-23T23:53:50.152892018Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 23 23:53:50.155343 containerd[1834]: time="2026-01-23T23:53:50.155318821Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:53:50.156678 containerd[1834]: time="2026-01-23T23:53:50.156649023Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.41882052s" Jan 23 23:53:50.156784 containerd[1834]: time="2026-01-23T23:53:50.156768343Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 23 23:53:50.159172 containerd[1834]: time="2026-01-23T23:53:50.159143226Z" level=info msg="CreateContainer within sandbox \"f67f04483ae5a79bc4e1d3da7246fbeccbeda35462c2a98ab76b7468917c73f0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 23:53:50.182016 containerd[1834]: time="2026-01-23T23:53:50.181944577Z" level=info msg="CreateContainer within sandbox \"f67f04483ae5a79bc4e1d3da7246fbeccbeda35462c2a98ab76b7468917c73f0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\"" Jan 23 23:53:50.182988 containerd[1834]: time="2026-01-23T23:53:50.182879258Z" level=info msg="StartContainer for \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\"" Jan 23 23:53:50.231375 containerd[1834]: time="2026-01-23T23:53:50.229544361Z" level=info msg="StartContainer for \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\" returns successfully" Jan 23 23:53:50.250832 containerd[1834]: time="2026-01-23T23:53:50.247614065Z" level=info msg="CreateContainer within sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 23:53:50.288529 containerd[1834]: time="2026-01-23T23:53:50.288455640Z" level=info msg="CreateContainer within sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\"" Jan 23 23:53:50.291107 containerd[1834]: time="2026-01-23T23:53:50.291083483Z" level=info msg="StartContainer for \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\"" Jan 23 
23:53:50.295321 kubelet[3337]: I0123 23:53:50.295044 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hbldg" podStartSLOduration=1.009123436 podStartE2EDuration="11.295024529s" podCreationTimestamp="2026-01-23 23:53:39 +0000 UTC" firstStartedPulling="2026-01-23 23:53:39.871676931 +0000 UTC m=+5.846555500" lastFinishedPulling="2026-01-23 23:53:50.157578024 +0000 UTC m=+16.132456593" observedRunningTime="2026-01-23 23:53:50.262820405 +0000 UTC m=+16.237698974" watchObservedRunningTime="2026-01-23 23:53:50.295024529 +0000 UTC m=+16.269903098" Jan 23 23:53:50.414889 containerd[1834]: time="2026-01-23T23:53:50.413595168Z" level=info msg="StartContainer for \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\" returns successfully" Jan 23 23:53:50.762165 containerd[1834]: time="2026-01-23T23:53:50.761880877Z" level=info msg="shim disconnected" id=2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160 namespace=k8s.io Jan 23 23:53:50.762165 containerd[1834]: time="2026-01-23T23:53:50.762089677Z" level=warning msg="cleaning up after shim disconnected" id=2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160 namespace=k8s.io Jan 23 23:53:50.762165 containerd[1834]: time="2026-01-23T23:53:50.762100157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:53:51.244585 containerd[1834]: time="2026-01-23T23:53:51.244458926Z" level=info msg="CreateContainer within sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 23:53:51.250738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160-rootfs.mount: Deactivated successfully. 
Jan 23 23:53:51.280896 containerd[1834]: time="2026-01-23T23:53:51.280701575Z" level=info msg="CreateContainer within sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\"" Jan 23 23:53:51.281448 containerd[1834]: time="2026-01-23T23:53:51.281367136Z" level=info msg="StartContainer for \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\"" Jan 23 23:53:51.326149 containerd[1834]: time="2026-01-23T23:53:51.326109396Z" level=info msg="StartContainer for \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\" returns successfully" Jan 23 23:53:51.343352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945-rootfs.mount: Deactivated successfully. Jan 23 23:53:51.352980 containerd[1834]: time="2026-01-23T23:53:51.352908192Z" level=info msg="shim disconnected" id=e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945 namespace=k8s.io Jan 23 23:53:51.352980 containerd[1834]: time="2026-01-23T23:53:51.352976312Z" level=warning msg="cleaning up after shim disconnected" id=e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945 namespace=k8s.io Jan 23 23:53:51.352980 containerd[1834]: time="2026-01-23T23:53:51.352985272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:53:52.248232 containerd[1834]: time="2026-01-23T23:53:52.248188996Z" level=info msg="CreateContainer within sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 23:53:52.284970 containerd[1834]: time="2026-01-23T23:53:52.284855006Z" level=info msg="CreateContainer within sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns 
container id \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\"" Jan 23 23:53:52.287884 containerd[1834]: time="2026-01-23T23:53:52.286876328Z" level=info msg="StartContainer for \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\"" Jan 23 23:53:52.340152 containerd[1834]: time="2026-01-23T23:53:52.340112120Z" level=info msg="StartContainer for \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\" returns successfully" Jan 23 23:53:52.406026 kubelet[3337]: I0123 23:53:52.405996 3337 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:53:52.558080 kubelet[3337]: I0123 23:53:52.557734 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4564667a-1eed-4869-aaf9-9bcf7c416aa0-config-volume\") pod \"coredns-668d6bf9bc-qrspj\" (UID: \"4564667a-1eed-4869-aaf9-9bcf7c416aa0\") " pod="kube-system/coredns-668d6bf9bc-qrspj" Jan 23 23:53:52.559824 kubelet[3337]: I0123 23:53:52.558442 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj9rw\" (UniqueName: \"kubernetes.io/projected/4564667a-1eed-4869-aaf9-9bcf7c416aa0-kube-api-access-mj9rw\") pod \"coredns-668d6bf9bc-qrspj\" (UID: \"4564667a-1eed-4869-aaf9-9bcf7c416aa0\") " pod="kube-system/coredns-668d6bf9bc-qrspj" Jan 23 23:53:52.559824 kubelet[3337]: I0123 23:53:52.558671 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6a64a02-3796-42ac-9179-98a38495226a-config-volume\") pod \"coredns-668d6bf9bc-l4tzh\" (UID: \"b6a64a02-3796-42ac-9179-98a38495226a\") " pod="kube-system/coredns-668d6bf9bc-l4tzh" Jan 23 23:53:52.559824 kubelet[3337]: I0123 23:53:52.558695 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c8p2c\" (UniqueName: \"kubernetes.io/projected/b6a64a02-3796-42ac-9179-98a38495226a-kube-api-access-c8p2c\") pod \"coredns-668d6bf9bc-l4tzh\" (UID: \"b6a64a02-3796-42ac-9179-98a38495226a\") " pod="kube-system/coredns-668d6bf9bc-l4tzh" Jan 23 23:53:52.765894 containerd[1834]: time="2026-01-23T23:53:52.765848293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrspj,Uid:4564667a-1eed-4869-aaf9-9bcf7c416aa0,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:52.768512 containerd[1834]: time="2026-01-23T23:53:52.768298656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l4tzh,Uid:b6a64a02-3796-42ac-9179-98a38495226a,Namespace:kube-system,Attempt:0,}" Jan 23 23:53:54.648131 systemd-networkd[1399]: cilium_host: Link UP Jan 23 23:53:54.649185 systemd-networkd[1399]: cilium_net: Link UP Jan 23 23:53:54.650630 systemd-networkd[1399]: cilium_net: Gained carrier Jan 23 23:53:54.652066 systemd-networkd[1399]: cilium_host: Gained carrier Jan 23 23:53:54.652210 systemd-networkd[1399]: cilium_net: Gained IPv6LL Jan 23 23:53:54.652331 systemd-networkd[1399]: cilium_host: Gained IPv6LL Jan 23 23:53:54.786462 systemd-networkd[1399]: cilium_vxlan: Link UP Jan 23 23:53:54.786468 systemd-networkd[1399]: cilium_vxlan: Gained carrier Jan 23 23:53:55.085066 kernel: NET: Registered PF_ALG protocol family Jan 23 23:53:55.803146 systemd-networkd[1399]: lxc_health: Link UP Jan 23 23:53:55.813249 systemd-networkd[1399]: lxc_health: Gained carrier Jan 23 23:53:56.329482 systemd-networkd[1399]: lxc285a86fc8e9e: Link UP Jan 23 23:53:56.337296 systemd-networkd[1399]: lxcc7542c697484: Link UP Jan 23 23:53:56.345875 kernel: eth0: renamed from tmpda37a Jan 23 23:53:56.354837 kernel: eth0: renamed from tmpc5c66 Jan 23 23:53:56.363560 systemd-networkd[1399]: lxc285a86fc8e9e: Gained carrier Jan 23 23:53:56.368561 systemd-networkd[1399]: lxcc7542c697484: Gained carrier Jan 23 23:53:56.397973 systemd-networkd[1399]: cilium_vxlan: 
Gained IPv6LL Jan 23 23:53:57.614888 systemd-networkd[1399]: lxc_health: Gained IPv6LL Jan 23 23:53:57.712620 kubelet[3337]: I0123 23:53:57.712554 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nmvdh" podStartSLOduration=11.781188777 podStartE2EDuration="18.712537525s" podCreationTimestamp="2026-01-23 23:53:39 +0000 UTC" firstStartedPulling="2026-01-23 23:53:39.805057633 +0000 UTC m=+5.779936202" lastFinishedPulling="2026-01-23 23:53:46.736406421 +0000 UTC m=+12.711284950" observedRunningTime="2026-01-23 23:53:53.270234244 +0000 UTC m=+19.245112773" watchObservedRunningTime="2026-01-23 23:53:57.712537525 +0000 UTC m=+23.687416094" Jan 23 23:53:58.126957 systemd-networkd[1399]: lxcc7542c697484: Gained IPv6LL Jan 23 23:53:58.253972 systemd-networkd[1399]: lxc285a86fc8e9e: Gained IPv6LL Jan 23 23:53:59.923824 containerd[1834]: time="2026-01-23T23:53:59.923393692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:59.925218 containerd[1834]: time="2026-01-23T23:53:59.924075413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:59.925218 containerd[1834]: time="2026-01-23T23:53:59.924094933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:59.925218 containerd[1834]: time="2026-01-23T23:53:59.924168453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:59.959655 containerd[1834]: time="2026-01-23T23:53:59.958263577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:53:59.959655 containerd[1834]: time="2026-01-23T23:53:59.958479298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:53:59.959655 containerd[1834]: time="2026-01-23T23:53:59.958493978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:53:59.959655 containerd[1834]: time="2026-01-23T23:53:59.958578858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:00.010823 containerd[1834]: time="2026-01-23T23:54:00.010090005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l4tzh,Uid:b6a64a02-3796-42ac-9179-98a38495226a,Namespace:kube-system,Attempt:0,} returns sandbox id \"da37a610818e5922831671c4d976a73090bd7d6a28ec36677549f5e7cf8f883e\"" Jan 23 23:54:00.035262 containerd[1834]: time="2026-01-23T23:54:00.033792276Z" level=info msg="CreateContainer within sandbox \"da37a610818e5922831671c4d976a73090bd7d6a28ec36677549f5e7cf8f883e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:54:00.049421 containerd[1834]: time="2026-01-23T23:54:00.049368296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrspj,Uid:4564667a-1eed-4869-aaf9-9bcf7c416aa0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5c66b6a44d2376f26a516952ffc7ef67d7dbc01fa2b38384a6933353a0b052e\"" Jan 23 23:54:00.056466 containerd[1834]: time="2026-01-23T23:54:00.056408745Z" level=info msg="CreateContainer within sandbox \"c5c66b6a44d2376f26a516952ffc7ef67d7dbc01fa2b38384a6933353a0b052e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:54:00.089871 containerd[1834]: time="2026-01-23T23:54:00.089822669Z" level=info msg="CreateContainer within sandbox 
\"da37a610818e5922831671c4d976a73090bd7d6a28ec36677549f5e7cf8f883e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a4150c9856922e37cab5376f5ae0f9ac7de70480c5b65902d1fc8d725a71e8a\"" Jan 23 23:54:00.092990 containerd[1834]: time="2026-01-23T23:54:00.091886472Z" level=info msg="StartContainer for \"0a4150c9856922e37cab5376f5ae0f9ac7de70480c5b65902d1fc8d725a71e8a\"" Jan 23 23:54:00.100218 containerd[1834]: time="2026-01-23T23:54:00.100186843Z" level=info msg="CreateContainer within sandbox \"c5c66b6a44d2376f26a516952ffc7ef67d7dbc01fa2b38384a6933353a0b052e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"330c29206233c5870a82d0bf2ad2d6a51cea916fc1fa8462e2e2de7affa1a917\"" Jan 23 23:54:00.100781 containerd[1834]: time="2026-01-23T23:54:00.100759323Z" level=info msg="StartContainer for \"330c29206233c5870a82d0bf2ad2d6a51cea916fc1fa8462e2e2de7affa1a917\"" Jan 23 23:54:00.157063 containerd[1834]: time="2026-01-23T23:54:00.157013437Z" level=info msg="StartContainer for \"0a4150c9856922e37cab5376f5ae0f9ac7de70480c5b65902d1fc8d725a71e8a\" returns successfully" Jan 23 23:54:00.172021 containerd[1834]: time="2026-01-23T23:54:00.171899256Z" level=info msg="StartContainer for \"330c29206233c5870a82d0bf2ad2d6a51cea916fc1fa8462e2e2de7affa1a917\" returns successfully" Jan 23 23:54:00.312221 kubelet[3337]: I0123 23:54:00.312086 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qrspj" podStartSLOduration=21.312069679 podStartE2EDuration="21.312069679s" podCreationTimestamp="2026-01-23 23:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:54:00.310123317 +0000 UTC m=+26.285001886" watchObservedRunningTime="2026-01-23 23:54:00.312069679 +0000 UTC m=+26.286948248" Jan 23 23:54:00.929826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044859944.mount: Deactivated 
successfully. Jan 23 23:55:07.087172 systemd[1]: Started sshd@7-10.200.20.22:22-10.200.16.10:60092.service - OpenSSH per-connection server daemon (10.200.16.10:60092). Jan 23 23:55:07.572259 sshd[4714]: Accepted publickey for core from 10.200.16.10 port 60092 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:07.574057 sshd[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:07.577464 systemd-logind[1803]: New session 10 of user core. Jan 23 23:55:07.589185 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:55:07.992035 sshd[4714]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:07.994821 systemd-logind[1803]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:55:07.994968 systemd[1]: sshd@7-10.200.20.22:22-10.200.16.10:60092.service: Deactivated successfully. Jan 23 23:55:07.998533 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:55:08.001992 systemd-logind[1803]: Removed session 10. Jan 23 23:55:13.070044 systemd[1]: Started sshd@8-10.200.20.22:22-10.200.16.10:40106.service - OpenSSH per-connection server daemon (10.200.16.10:40106). Jan 23 23:55:13.520318 sshd[4731]: Accepted publickey for core from 10.200.16.10 port 40106 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:13.521642 sshd[4731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:13.525577 systemd-logind[1803]: New session 11 of user core. Jan 23 23:55:13.529640 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:55:13.911028 sshd[4731]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:13.914248 systemd[1]: sshd@8-10.200.20.22:22-10.200.16.10:40106.service: Deactivated successfully. Jan 23 23:55:13.914537 systemd-logind[1803]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:55:13.918639 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 23 23:55:13.920181 systemd-logind[1803]: Removed session 11. Jan 23 23:55:18.986035 systemd[1]: Started sshd@9-10.200.20.22:22-10.200.16.10:40108.service - OpenSSH per-connection server daemon (10.200.16.10:40108). Jan 23 23:55:19.434592 sshd[4746]: Accepted publickey for core from 10.200.16.10 port 40108 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:19.436059 sshd[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:19.439664 systemd-logind[1803]: New session 12 of user core. Jan 23 23:55:19.448018 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:55:19.820368 sshd[4746]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:19.823391 systemd-logind[1803]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:55:19.825549 systemd[1]: sshd@9-10.200.20.22:22-10.200.16.10:40108.service: Deactivated successfully. Jan 23 23:55:19.828399 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:55:19.830306 systemd-logind[1803]: Removed session 12. Jan 23 23:55:24.912055 systemd[1]: Started sshd@10-10.200.20.22:22-10.200.16.10:43354.service - OpenSSH per-connection server daemon (10.200.16.10:43354). Jan 23 23:55:25.400176 sshd[4760]: Accepted publickey for core from 10.200.16.10 port 43354 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:25.401510 sshd[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:25.407037 systemd-logind[1803]: New session 13 of user core. Jan 23 23:55:25.411032 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:55:25.809594 sshd[4760]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:25.813498 systemd[1]: sshd@10-10.200.20.22:22-10.200.16.10:43354.service: Deactivated successfully. Jan 23 23:55:25.816260 systemd-logind[1803]: Session 13 logged out. Waiting for processes to exit. 
Jan 23 23:55:25.817002 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:55:25.818081 systemd-logind[1803]: Removed session 13. Jan 23 23:55:30.882039 systemd[1]: Started sshd@11-10.200.20.22:22-10.200.16.10:37094.service - OpenSSH per-connection server daemon (10.200.16.10:37094). Jan 23 23:55:31.293470 sshd[4774]: Accepted publickey for core from 10.200.16.10 port 37094 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:31.295162 sshd[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:31.299159 systemd-logind[1803]: New session 14 of user core. Jan 23 23:55:31.305094 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 23:55:31.661012 sshd[4774]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:31.664509 systemd[1]: sshd@11-10.200.20.22:22-10.200.16.10:37094.service: Deactivated successfully. Jan 23 23:55:31.667615 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:55:31.669600 systemd-logind[1803]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:55:31.670593 systemd-logind[1803]: Removed session 14. Jan 23 23:55:31.735079 systemd[1]: Started sshd@12-10.200.20.22:22-10.200.16.10:37104.service - OpenSSH per-connection server daemon (10.200.16.10:37104). Jan 23 23:55:32.144730 sshd[4789]: Accepted publickey for core from 10.200.16.10 port 37104 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:32.146180 sshd[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:32.150008 systemd-logind[1803]: New session 15 of user core. Jan 23 23:55:32.153016 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:55:32.541686 sshd[4789]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:32.546709 systemd-logind[1803]: Session 15 logged out. Waiting for processes to exit. 
Jan 23 23:55:32.547284 systemd[1]: sshd@12-10.200.20.22:22-10.200.16.10:37104.service: Deactivated successfully. Jan 23 23:55:32.549336 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:55:32.551386 systemd-logind[1803]: Removed session 15. Jan 23 23:55:32.627132 systemd[1]: Started sshd@13-10.200.20.22:22-10.200.16.10:37112.service - OpenSSH per-connection server daemon (10.200.16.10:37112). Jan 23 23:55:33.082568 sshd[4801]: Accepted publickey for core from 10.200.16.10 port 37112 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:33.085087 sshd[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:33.089593 systemd-logind[1803]: New session 16 of user core. Jan 23 23:55:33.097145 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:55:33.471024 sshd[4801]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:33.474154 systemd[1]: sshd@13-10.200.20.22:22-10.200.16.10:37112.service: Deactivated successfully. Jan 23 23:55:33.478333 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:55:33.479347 systemd-logind[1803]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:55:33.480129 systemd-logind[1803]: Removed session 16. Jan 23 23:55:38.556184 systemd[1]: Started sshd@14-10.200.20.22:22-10.200.16.10:37118.service - OpenSSH per-connection server daemon (10.200.16.10:37118). Jan 23 23:55:39.042826 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 37118 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:39.044224 sshd[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:39.048109 systemd-logind[1803]: New session 17 of user core. Jan 23 23:55:39.056121 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 23:55:39.456017 sshd[4816]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:39.458614 systemd-logind[1803]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:55:39.459591 systemd[1]: sshd@14-10.200.20.22:22-10.200.16.10:37118.service: Deactivated successfully. Jan 23 23:55:39.463501 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:55:39.466079 systemd-logind[1803]: Removed session 17. Jan 23 23:55:44.534204 systemd[1]: Started sshd@15-10.200.20.22:22-10.200.16.10:52132.service - OpenSSH per-connection server daemon (10.200.16.10:52132). Jan 23 23:55:44.980292 sshd[4832]: Accepted publickey for core from 10.200.16.10 port 52132 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:44.981671 sshd[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:44.985766 systemd-logind[1803]: New session 18 of user core. Jan 23 23:55:44.995068 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:55:45.371020 sshd[4832]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:45.374196 systemd[1]: sshd@15-10.200.20.22:22-10.200.16.10:52132.service: Deactivated successfully. Jan 23 23:55:45.378267 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:55:45.379128 systemd-logind[1803]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:55:45.380180 systemd-logind[1803]: Removed session 18. Jan 23 23:55:45.449079 systemd[1]: Started sshd@16-10.200.20.22:22-10.200.16.10:52146.service - OpenSSH per-connection server daemon (10.200.16.10:52146). Jan 23 23:55:45.897908 sshd[4846]: Accepted publickey for core from 10.200.16.10 port 52146 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:45.899247 sshd[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:45.902902 systemd-logind[1803]: New session 19 of user core. 
Jan 23 23:55:45.907016 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 23:55:46.321151 sshd[4846]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:46.324064 systemd-logind[1803]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:55:46.324290 systemd[1]: sshd@16-10.200.20.22:22-10.200.16.10:52146.service: Deactivated successfully. Jan 23 23:55:46.327721 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:55:46.328480 systemd-logind[1803]: Removed session 19. Jan 23 23:55:46.397043 systemd[1]: Started sshd@17-10.200.20.22:22-10.200.16.10:52150.service - OpenSSH per-connection server daemon (10.200.16.10:52150). Jan 23 23:55:46.808011 sshd[4858]: Accepted publickey for core from 10.200.16.10 port 52150 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:46.809381 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:46.814638 systemd-logind[1803]: New session 20 of user core. Jan 23 23:55:46.821055 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:55:47.716316 sshd[4858]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:47.719010 systemd[1]: sshd@17-10.200.20.22:22-10.200.16.10:52150.service: Deactivated successfully. Jan 23 23:55:47.722891 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:55:47.724312 systemd-logind[1803]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:55:47.725604 systemd-logind[1803]: Removed session 20. Jan 23 23:55:47.795540 systemd[1]: Started sshd@18-10.200.20.22:22-10.200.16.10:52154.service - OpenSSH per-connection server daemon (10.200.16.10:52154). 
Jan 23 23:55:48.244746 sshd[4877]: Accepted publickey for core from 10.200.16.10 port 52154 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:48.246181 sshd[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:48.249870 systemd-logind[1803]: New session 21 of user core. Jan 23 23:55:48.256032 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 23:55:48.742671 sshd[4877]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:48.745512 systemd-logind[1803]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:55:48.745662 systemd[1]: sshd@18-10.200.20.22:22-10.200.16.10:52154.service: Deactivated successfully. Jan 23 23:55:48.750569 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:55:48.751879 systemd-logind[1803]: Removed session 21. Jan 23 23:55:48.819040 systemd[1]: Started sshd@19-10.200.20.22:22-10.200.16.10:52166.service - OpenSSH per-connection server daemon (10.200.16.10:52166). Jan 23 23:55:49.263998 sshd[4889]: Accepted publickey for core from 10.200.16.10 port 52166 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:49.264786 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:49.269674 systemd-logind[1803]: New session 22 of user core. Jan 23 23:55:49.275087 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 23:55:49.640015 sshd[4889]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:49.643948 systemd-logind[1803]: Session 22 logged out. Waiting for processes to exit. Jan 23 23:55:49.644689 systemd[1]: sshd@19-10.200.20.22:22-10.200.16.10:52166.service: Deactivated successfully. Jan 23 23:55:49.647494 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 23:55:49.649557 systemd-logind[1803]: Removed session 22. 
Jan 23 23:55:54.732299 systemd[1]: Started sshd@20-10.200.20.22:22-10.200.16.10:60970.service - OpenSSH per-connection server daemon (10.200.16.10:60970). Jan 23 23:55:55.219026 sshd[4904]: Accepted publickey for core from 10.200.16.10 port 60970 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:55.220423 sshd[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:55.224128 systemd-logind[1803]: New session 23 of user core. Jan 23 23:55:55.228007 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 23:55:55.625009 sshd[4904]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:55.628450 systemd-logind[1803]: Session 23 logged out. Waiting for processes to exit. Jan 23 23:55:55.629147 systemd[1]: sshd@20-10.200.20.22:22-10.200.16.10:60970.service: Deactivated successfully. Jan 23 23:55:55.630687 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 23:55:55.634724 systemd-logind[1803]: Removed session 23. Jan 23 23:56:00.710120 systemd[1]: Started sshd@21-10.200.20.22:22-10.200.16.10:56688.service - OpenSSH per-connection server daemon (10.200.16.10:56688). Jan 23 23:56:01.197306 sshd[4918]: Accepted publickey for core from 10.200.16.10 port 56688 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:01.198745 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:01.205029 systemd-logind[1803]: New session 24 of user core. Jan 23 23:56:01.209102 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 23:56:01.604823 sshd[4918]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:01.608213 systemd[1]: sshd@21-10.200.20.22:22-10.200.16.10:56688.service: Deactivated successfully. Jan 23 23:56:01.610944 systemd-logind[1803]: Session 24 logged out. Waiting for processes to exit. Jan 23 23:56:01.611441 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 23 23:56:01.613066 systemd-logind[1803]: Removed session 24. Jan 23 23:56:06.694090 systemd[1]: Started sshd@22-10.200.20.22:22-10.200.16.10:56698.service - OpenSSH per-connection server daemon (10.200.16.10:56698). Jan 23 23:56:07.179550 sshd[4931]: Accepted publickey for core from 10.200.16.10 port 56698 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:07.181252 sshd[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:07.185641 systemd-logind[1803]: New session 25 of user core. Jan 23 23:56:07.187053 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 23:56:07.584104 sshd[4931]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:07.587938 systemd-logind[1803]: Session 25 logged out. Waiting for processes to exit. Jan 23 23:56:07.588104 systemd[1]: sshd@22-10.200.20.22:22-10.200.16.10:56698.service: Deactivated successfully. Jan 23 23:56:07.593138 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 23:56:07.594033 systemd-logind[1803]: Removed session 25. Jan 23 23:56:07.667018 systemd[1]: Started sshd@23-10.200.20.22:22-10.200.16.10:56706.service - OpenSSH per-connection server daemon (10.200.16.10:56706). Jan 23 23:56:08.152298 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 56706 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:08.153111 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:08.156725 systemd-logind[1803]: New session 26 of user core. Jan 23 23:56:08.166052 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 23 23:56:10.035521 kubelet[3337]: I0123 23:56:10.035442 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l4tzh" podStartSLOduration=151.034127274 podStartE2EDuration="2m31.034127274s" podCreationTimestamp="2026-01-23 23:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:54:00.371716837 +0000 UTC m=+26.346595446" watchObservedRunningTime="2026-01-23 23:56:10.034127274 +0000 UTC m=+156.009005843" Jan 23 23:56:10.048657 containerd[1834]: time="2026-01-23T23:56:10.048035574Z" level=info msg="StopContainer for \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\" with timeout 30 (s)" Jan 23 23:56:10.048657 containerd[1834]: time="2026-01-23T23:56:10.048603094Z" level=info msg="Stop container \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\" with signal terminated" Jan 23 23:56:10.080082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb-rootfs.mount: Deactivated successfully. 
Jan 23 23:56:10.082693 containerd[1834]: time="2026-01-23T23:56:10.082241780Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:56:10.088272 containerd[1834]: time="2026-01-23T23:56:10.088096109Z" level=info msg="StopContainer for \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\" with timeout 2 (s)" Jan 23 23:56:10.089047 containerd[1834]: time="2026-01-23T23:56:10.088749909Z" level=info msg="Stop container \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\" with signal terminated" Jan 23 23:56:10.095286 systemd-networkd[1399]: lxc_health: Link DOWN Jan 23 23:56:10.095291 systemd-networkd[1399]: lxc_health: Lost carrier Jan 23 23:56:10.101908 containerd[1834]: time="2026-01-23T23:56:10.101698447Z" level=info msg="shim disconnected" id=e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb namespace=k8s.io Jan 23 23:56:10.101908 containerd[1834]: time="2026-01-23T23:56:10.101750287Z" level=warning msg="cleaning up after shim disconnected" id=e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb namespace=k8s.io Jan 23 23:56:10.101908 containerd[1834]: time="2026-01-23T23:56:10.101760047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:10.119136 containerd[1834]: time="2026-01-23T23:56:10.119094271Z" level=info msg="StopContainer for \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\" returns successfully" Jan 23 23:56:10.121633 containerd[1834]: time="2026-01-23T23:56:10.121603914Z" level=info msg="StopPodSandbox for \"f67f04483ae5a79bc4e1d3da7246fbeccbeda35462c2a98ab76b7468917c73f0\"" Jan 23 23:56:10.123836 containerd[1834]: time="2026-01-23T23:56:10.121643395Z" level=info msg="Container to stop 
\"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:56:10.123442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f67f04483ae5a79bc4e1d3da7246fbeccbeda35462c2a98ab76b7468917c73f0-shm.mount: Deactivated successfully. Jan 23 23:56:10.140269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12-rootfs.mount: Deactivated successfully. Jan 23 23:56:10.156599 containerd[1834]: time="2026-01-23T23:56:10.156510842Z" level=info msg="shim disconnected" id=f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12 namespace=k8s.io Jan 23 23:56:10.156599 containerd[1834]: time="2026-01-23T23:56:10.156597282Z" level=warning msg="cleaning up after shim disconnected" id=f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12 namespace=k8s.io Jan 23 23:56:10.156846 containerd[1834]: time="2026-01-23T23:56:10.156636762Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:10.163980 containerd[1834]: time="2026-01-23T23:56:10.163795252Z" level=info msg="shim disconnected" id=f67f04483ae5a79bc4e1d3da7246fbeccbeda35462c2a98ab76b7468917c73f0 namespace=k8s.io Jan 23 23:56:10.163980 containerd[1834]: time="2026-01-23T23:56:10.163854972Z" level=warning msg="cleaning up after shim disconnected" id=f67f04483ae5a79bc4e1d3da7246fbeccbeda35462c2a98ab76b7468917c73f0 namespace=k8s.io Jan 23 23:56:10.163980 containerd[1834]: time="2026-01-23T23:56:10.163867372Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:10.171243 containerd[1834]: time="2026-01-23T23:56:10.171197582Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:56:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 23 23:56:10.179111 containerd[1834]: 
time="2026-01-23T23:56:10.178723393Z" level=info msg="StopContainer for \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\" returns successfully" Jan 23 23:56:10.179946 containerd[1834]: time="2026-01-23T23:56:10.179914034Z" level=info msg="StopPodSandbox for \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\"" Jan 23 23:56:10.180916 containerd[1834]: time="2026-01-23T23:56:10.180039395Z" level=info msg="Container to stop \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:56:10.180916 containerd[1834]: time="2026-01-23T23:56:10.180055435Z" level=info msg="Container to stop \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:56:10.180916 containerd[1834]: time="2026-01-23T23:56:10.180065275Z" level=info msg="Container to stop \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:56:10.180916 containerd[1834]: time="2026-01-23T23:56:10.180074635Z" level=info msg="Container to stop \"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:56:10.180916 containerd[1834]: time="2026-01-23T23:56:10.180083555Z" level=info msg="Container to stop \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:56:10.183129 containerd[1834]: time="2026-01-23T23:56:10.183102479Z" level=info msg="TearDown network for sandbox \"f67f04483ae5a79bc4e1d3da7246fbeccbeda35462c2a98ab76b7468917c73f0\" successfully" Jan 23 23:56:10.183243 containerd[1834]: time="2026-01-23T23:56:10.183229479Z" level=info msg="StopPodSandbox for 
\"f67f04483ae5a79bc4e1d3da7246fbeccbeda35462c2a98ab76b7468917c73f0\" returns successfully" Jan 23 23:56:10.221313 containerd[1834]: time="2026-01-23T23:56:10.221250091Z" level=info msg="shim disconnected" id=54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0 namespace=k8s.io Jan 23 23:56:10.221313 containerd[1834]: time="2026-01-23T23:56:10.221305531Z" level=warning msg="cleaning up after shim disconnected" id=54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0 namespace=k8s.io Jan 23 23:56:10.221313 containerd[1834]: time="2026-01-23T23:56:10.221313691Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:10.231632 containerd[1834]: time="2026-01-23T23:56:10.231584265Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:56:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 23 23:56:10.232621 containerd[1834]: time="2026-01-23T23:56:10.232597427Z" level=info msg="TearDown network for sandbox \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" successfully" Jan 23 23:56:10.232666 containerd[1834]: time="2026-01-23T23:56:10.232621227Z" level=info msg="StopPodSandbox for \"54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0\" returns successfully" Jan 23 23:56:10.312912 kubelet[3337]: I0123 23:56:10.311795 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cni-path\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.312912 kubelet[3337]: I0123 23:56:10.311852 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a44ac17c-d79d-43b9-9a86-473e3ca90d65-cilium-config-path\") pod 
\"a44ac17c-d79d-43b9-9a86-473e3ca90d65\" (UID: \"a44ac17c-d79d-43b9-9a86-473e3ca90d65\") " Jan 23 23:56:10.312912 kubelet[3337]: I0123 23:56:10.311869 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-lib-modules\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.312912 kubelet[3337]: I0123 23:56:10.311882 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-run\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.312912 kubelet[3337]: I0123 23:56:10.311888 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cni-path" (OuterVolumeSpecName: "cni-path") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:56:10.312912 kubelet[3337]: I0123 23:56:10.311898 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-cgroup\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313147 kubelet[3337]: I0123 23:56:10.311919 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:56:10.313147 kubelet[3337]: I0123 23:56:10.311952 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwmn7\" (UniqueName: \"kubernetes.io/projected/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-kube-api-access-cwmn7\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313147 kubelet[3337]: I0123 23:56:10.311973 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-clustermesh-secrets\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313147 kubelet[3337]: I0123 23:56:10.311992 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-hubble-tls\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313147 kubelet[3337]: I0123 23:56:10.312006 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-hostproc\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313147 kubelet[3337]: I0123 23:56:10.312021 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-bpf-maps\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313272 kubelet[3337]: I0123 23:56:10.312038 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6qp5\" (UniqueName: 
\"kubernetes.io/projected/a44ac17c-d79d-43b9-9a86-473e3ca90d65-kube-api-access-n6qp5\") pod \"a44ac17c-d79d-43b9-9a86-473e3ca90d65\" (UID: \"a44ac17c-d79d-43b9-9a86-473e3ca90d65\") " Jan 23 23:56:10.313272 kubelet[3337]: I0123 23:56:10.312053 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-host-proc-sys-kernel\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313272 kubelet[3337]: I0123 23:56:10.312071 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-etc-cni-netd\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313272 kubelet[3337]: I0123 23:56:10.312088 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-config-path\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313272 kubelet[3337]: I0123 23:56:10.312103 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-host-proc-sys-net\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313272 kubelet[3337]: I0123 23:56:10.312119 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-xtables-lock\") pod \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\" (UID: \"d2f573b0-3811-4279-bc7d-3b16e6d8f5f6\") " Jan 23 23:56:10.313395 kubelet[3337]: I0123 
23:56:10.312156 3337 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cni-path\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.313395 kubelet[3337]: I0123 23:56:10.312165 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-cgroup\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.313395 kubelet[3337]: I0123 23:56:10.312184 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:56:10.316027 kubelet[3337]: I0123 23:56:10.315606 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a44ac17c-d79d-43b9-9a86-473e3ca90d65-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a44ac17c-d79d-43b9-9a86-473e3ca90d65" (UID: "a44ac17c-d79d-43b9-9a86-473e3ca90d65"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:56:10.316027 kubelet[3337]: I0123 23:56:10.315669 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:56:10.316027 kubelet[3337]: I0123 23:56:10.315693 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:56:10.316027 kubelet[3337]: I0123 23:56:10.315749 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-kube-api-access-cwmn7" (OuterVolumeSpecName: "kube-api-access-cwmn7") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "kube-api-access-cwmn7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:56:10.316027 kubelet[3337]: I0123 23:56:10.315787 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:56:10.316197 kubelet[3337]: I0123 23:56:10.315816 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-hostproc" (OuterVolumeSpecName: "hostproc") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:56:10.316197 kubelet[3337]: I0123 23:56:10.315832 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:56:10.318463 kubelet[3337]: I0123 23:56:10.318351 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:56:10.318463 kubelet[3337]: I0123 23:56:10.318435 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:56:10.318590 kubelet[3337]: I0123 23:56:10.318565 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:56:10.318653 kubelet[3337]: I0123 23:56:10.318626 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a44ac17c-d79d-43b9-9a86-473e3ca90d65-kube-api-access-n6qp5" (OuterVolumeSpecName: "kube-api-access-n6qp5") pod "a44ac17c-d79d-43b9-9a86-473e3ca90d65" (UID: "a44ac17c-d79d-43b9-9a86-473e3ca90d65"). InnerVolumeSpecName "kube-api-access-n6qp5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:56:10.319366 kubelet[3337]: I0123 23:56:10.319341 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:56:10.319619 kubelet[3337]: I0123 23:56:10.319600 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" (UID: "d2f573b0-3811-4279-bc7d-3b16e6d8f5f6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:56:10.413173 kubelet[3337]: I0123 23:56:10.413012 3337 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-hubble-tls\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413173 kubelet[3337]: I0123 23:56:10.413046 3337 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-hostproc\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413173 kubelet[3337]: I0123 23:56:10.413054 3337 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-bpf-maps\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413173 kubelet[3337]: I0123 23:56:10.413063 3337 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n6qp5\" (UniqueName: \"kubernetes.io/projected/a44ac17c-d79d-43b9-9a86-473e3ca90d65-kube-api-access-n6qp5\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413173 kubelet[3337]: I0123 23:56:10.413073 3337 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-host-proc-sys-kernel\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413173 kubelet[3337]: I0123 23:56:10.413082 3337 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-xtables-lock\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413173 kubelet[3337]: I0123 23:56:10.413090 3337 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-etc-cni-netd\") on node 
\"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413173 kubelet[3337]: I0123 23:56:10.413100 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-config-path\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413456 kubelet[3337]: I0123 23:56:10.413109 3337 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-host-proc-sys-net\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413456 kubelet[3337]: I0123 23:56:10.413117 3337 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-lib-modules\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413456 kubelet[3337]: I0123 23:56:10.413125 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a44ac17c-d79d-43b9-9a86-473e3ca90d65-cilium-config-path\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413456 kubelet[3337]: I0123 23:56:10.413135 3337 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-clustermesh-secrets\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413456 kubelet[3337]: I0123 23:56:10.413145 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-cilium-run\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.413456 kubelet[3337]: I0123 23:56:10.413153 3337 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cwmn7\" (UniqueName: 
\"kubernetes.io/projected/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6-kube-api-access-cwmn7\") on node \"ci-4081.3.6-n-2167bbe937\" DevicePath \"\"" Jan 23 23:56:10.492214 kubelet[3337]: I0123 23:56:10.489753 3337 scope.go:117] "RemoveContainer" containerID="e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb" Jan 23 23:56:10.494997 containerd[1834]: time="2026-01-23T23:56:10.494961106Z" level=info msg="RemoveContainer for \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\"" Jan 23 23:56:10.507634 containerd[1834]: time="2026-01-23T23:56:10.507515284Z" level=info msg="RemoveContainer for \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\" returns successfully" Jan 23 23:56:10.507772 kubelet[3337]: I0123 23:56:10.507753 3337 scope.go:117] "RemoveContainer" containerID="e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb" Jan 23 23:56:10.508397 containerd[1834]: time="2026-01-23T23:56:10.508271965Z" level=error msg="ContainerStatus for \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\": not found" Jan 23 23:56:10.508469 kubelet[3337]: E0123 23:56:10.508434 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\": not found" containerID="e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb" Jan 23 23:56:10.508555 kubelet[3337]: I0123 23:56:10.508462 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb"} err="failed to get container status \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"e6c2c1e6e6e515c4bdfbde5301e3617ad5d6a58ebf32c705a43997695dd1edbb\": not found" Jan 23 23:56:10.508555 kubelet[3337]: I0123 23:56:10.508548 3337 scope.go:117] "RemoveContainer" containerID="f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12" Jan 23 23:56:10.509841 containerd[1834]: time="2026-01-23T23:56:10.509598807Z" level=info msg="RemoveContainer for \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\"" Jan 23 23:56:10.515960 containerd[1834]: time="2026-01-23T23:56:10.515912095Z" level=info msg="RemoveContainer for \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\" returns successfully" Jan 23 23:56:10.516242 kubelet[3337]: I0123 23:56:10.516222 3337 scope.go:117] "RemoveContainer" containerID="e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945" Jan 23 23:56:10.519664 containerd[1834]: time="2026-01-23T23:56:10.518957059Z" level=info msg="RemoveContainer for \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\"" Jan 23 23:56:10.524681 containerd[1834]: time="2026-01-23T23:56:10.524657867Z" level=info msg="RemoveContainer for \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\" returns successfully" Jan 23 23:56:10.524905 kubelet[3337]: I0123 23:56:10.524886 3337 scope.go:117] "RemoveContainer" containerID="2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160" Jan 23 23:56:10.526162 containerd[1834]: time="2026-01-23T23:56:10.525740469Z" level=info msg="RemoveContainer for \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\"" Jan 23 23:56:10.536758 containerd[1834]: time="2026-01-23T23:56:10.536716364Z" level=info msg="RemoveContainer for \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\" returns successfully" Jan 23 23:56:10.537097 kubelet[3337]: I0123 23:56:10.537075 3337 scope.go:117] "RemoveContainer" containerID="e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106" Jan 
23 23:56:10.541622 containerd[1834]: time="2026-01-23T23:56:10.541559770Z" level=info msg="RemoveContainer for \"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\"" Jan 23 23:56:10.549251 containerd[1834]: time="2026-01-23T23:56:10.549215541Z" level=info msg="RemoveContainer for \"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\" returns successfully" Jan 23 23:56:10.549733 kubelet[3337]: I0123 23:56:10.549631 3337 scope.go:117] "RemoveContainer" containerID="f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02" Jan 23 23:56:10.551432 containerd[1834]: time="2026-01-23T23:56:10.551182664Z" level=info msg="RemoveContainer for \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\"" Jan 23 23:56:10.559324 containerd[1834]: time="2026-01-23T23:56:10.559225675Z" level=info msg="RemoveContainer for \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\" returns successfully" Jan 23 23:56:10.559448 kubelet[3337]: I0123 23:56:10.559431 3337 scope.go:117] "RemoveContainer" containerID="f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12" Jan 23 23:56:10.559702 containerd[1834]: time="2026-01-23T23:56:10.559633995Z" level=error msg="ContainerStatus for \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\": not found" Jan 23 23:56:10.559755 kubelet[3337]: E0123 23:56:10.559730 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\": not found" containerID="f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12" Jan 23 23:56:10.559785 kubelet[3337]: I0123 23:56:10.559755 3337 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12"} err="failed to get container status \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\": rpc error: code = NotFound desc = an error occurred when try to find container \"f74caa77db9f1cd0fd1bdf7991510e7a975243f72c2e1c2f566adfa56aab0b12\": not found" Jan 23 23:56:10.559785 kubelet[3337]: I0123 23:56:10.559773 3337 scope.go:117] "RemoveContainer" containerID="e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945" Jan 23 23:56:10.560061 kubelet[3337]: E0123 23:56:10.560024 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\": not found" containerID="e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945" Jan 23 23:56:10.560061 kubelet[3337]: I0123 23:56:10.560042 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945"} err="failed to get container status \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\": not found" Jan 23 23:56:10.560061 kubelet[3337]: I0123 23:56:10.560066 3337 scope.go:117] "RemoveContainer" containerID="2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160" Jan 23 23:56:10.560245 containerd[1834]: time="2026-01-23T23:56:10.559919516Z" level=error msg="ContainerStatus for \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3964ab0a2fbf5ccf6b208dbd760b9213da17bfc85de65fe7b17190011947945\": not found" Jan 23 23:56:10.560329 containerd[1834]: 
time="2026-01-23T23:56:10.560305876Z" level=error msg="ContainerStatus for \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\": not found" Jan 23 23:56:10.560482 kubelet[3337]: E0123 23:56:10.560462 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\": not found" containerID="2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160" Jan 23 23:56:10.560533 kubelet[3337]: I0123 23:56:10.560485 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160"} err="failed to get container status \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a20e960ccdb438346fa268d2e2a5c080b4e2362726c32827771728ae4661160\": not found" Jan 23 23:56:10.560533 kubelet[3337]: I0123 23:56:10.560519 3337 scope.go:117] "RemoveContainer" containerID="e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106" Jan 23 23:56:10.560769 kubelet[3337]: E0123 23:56:10.560719 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\": not found" containerID="e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106" Jan 23 23:56:10.560769 kubelet[3337]: I0123 23:56:10.560733 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106"} err="failed to get container status 
\"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\": rpc error: code = NotFound desc = an error occurred when try to find container \"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\": not found" Jan 23 23:56:10.560769 kubelet[3337]: I0123 23:56:10.560767 3337 scope.go:117] "RemoveContainer" containerID="f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02" Jan 23 23:56:10.560872 containerd[1834]: time="2026-01-23T23:56:10.560642677Z" level=error msg="ContainerStatus for \"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e62e9b80a47eb6b776fd265c3aaa3fe674c697ead6654093f07bc03482ccf106\": not found" Jan 23 23:56:10.561074 containerd[1834]: time="2026-01-23T23:56:10.561013157Z" level=error msg="ContainerStatus for \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\": not found" Jan 23 23:56:10.561258 kubelet[3337]: E0123 23:56:10.561233 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\": not found" containerID="f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02" Jan 23 23:56:10.561310 kubelet[3337]: I0123 23:56:10.561255 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02"} err="failed to get container status \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\": rpc error: code = NotFound desc = an error occurred when try to find container \"f19dd630637955f60c2a8d7bfc5eea00688d6b90e36a6001062a78e2b4e2fb02\": not found" Jan 23 
23:56:11.052928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f67f04483ae5a79bc4e1d3da7246fbeccbeda35462c2a98ab76b7468917c73f0-rootfs.mount: Deactivated successfully. Jan 23 23:56:11.053068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0-rootfs.mount: Deactivated successfully. Jan 23 23:56:11.053146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54bdd3d7413c8dd2619d78ff284fcc3eef5315c0deefa2f04174bc34a3859ef0-shm.mount: Deactivated successfully. Jan 23 23:56:11.053238 systemd[1]: var-lib-kubelet-pods-d2f573b0\x2d3811\x2d4279\x2dbc7d\x2d3b16e6d8f5f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcwmn7.mount: Deactivated successfully. Jan 23 23:56:11.053317 systemd[1]: var-lib-kubelet-pods-a44ac17c\x2dd79d\x2d43b9\x2d9a86\x2d473e3ca90d65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn6qp5.mount: Deactivated successfully. Jan 23 23:56:11.053401 systemd[1]: var-lib-kubelet-pods-d2f573b0\x2d3811\x2d4279\x2dbc7d\x2d3b16e6d8f5f6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 23:56:11.053477 systemd[1]: var-lib-kubelet-pods-d2f573b0\x2d3811\x2d4279\x2dbc7d\x2d3b16e6d8f5f6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 23:56:12.056033 sshd[4944]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:12.058779 systemd[1]: sshd@23-10.200.20.22:22-10.200.16.10:56706.service: Deactivated successfully. Jan 23 23:56:12.062346 systemd-logind[1803]: Session 26 logged out. Waiting for processes to exit. Jan 23 23:56:12.063436 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 23:56:12.064887 systemd-logind[1803]: Removed session 26. Jan 23 23:56:12.141094 systemd[1]: Started sshd@24-10.200.20.22:22-10.200.16.10:35870.service - OpenSSH per-connection server daemon (10.200.16.10:35870). 
Jan 23 23:56:12.145757 kubelet[3337]: I0123 23:56:12.145718 3337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a44ac17c-d79d-43b9-9a86-473e3ca90d65" path="/var/lib/kubelet/pods/a44ac17c-d79d-43b9-9a86-473e3ca90d65/volumes" Jan 23 23:56:12.146858 kubelet[3337]: I0123 23:56:12.146500 3337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" path="/var/lib/kubelet/pods/d2f573b0-3811-4279-bc7d-3b16e6d8f5f6/volumes" Jan 23 23:56:12.629434 sshd[5112]: Accepted publickey for core from 10.200.16.10 port 35870 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:12.630777 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:12.635172 systemd-logind[1803]: New session 27 of user core. Jan 23 23:56:12.638057 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 23:56:13.633075 kubelet[3337]: I0123 23:56:13.629111 3337 memory_manager.go:355] "RemoveStaleState removing state" podUID="a44ac17c-d79d-43b9-9a86-473e3ca90d65" containerName="cilium-operator" Jan 23 23:56:13.633075 kubelet[3337]: I0123 23:56:13.629141 3337 memory_manager.go:355] "RemoveStaleState removing state" podUID="d2f573b0-3811-4279-bc7d-3b16e6d8f5f6" containerName="cilium-agent" Jan 23 23:56:13.657926 sshd[5112]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:13.660540 systemd-logind[1803]: Session 27 logged out. Waiting for processes to exit. Jan 23 23:56:13.662202 systemd[1]: sshd@24-10.200.20.22:22-10.200.16.10:35870.service: Deactivated successfully. Jan 23 23:56:13.664952 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 23:56:13.666767 systemd-logind[1803]: Removed session 27. 
Jan 23 23:56:13.731121 kubelet[3337]: I0123 23:56:13.731032 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8746e2f-b717-4b4d-abd2-e016674963ad-lib-modules\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731121 kubelet[3337]: I0123 23:56:13.731093 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8746e2f-b717-4b4d-abd2-e016674963ad-xtables-lock\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731121 kubelet[3337]: I0123 23:56:13.731114 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8746e2f-b717-4b4d-abd2-e016674963ad-cilium-cgroup\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731463 kubelet[3337]: I0123 23:56:13.731144 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8746e2f-b717-4b4d-abd2-e016674963ad-clustermesh-secrets\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731463 kubelet[3337]: I0123 23:56:13.731161 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8746e2f-b717-4b4d-abd2-e016674963ad-cilium-ipsec-secrets\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731463 kubelet[3337]: I0123 23:56:13.731177 3337 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8746e2f-b717-4b4d-abd2-e016674963ad-cni-path\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731463 kubelet[3337]: I0123 23:56:13.731192 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8746e2f-b717-4b4d-abd2-e016674963ad-hubble-tls\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731463 kubelet[3337]: I0123 23:56:13.731214 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wqk9\" (UniqueName: \"kubernetes.io/projected/a8746e2f-b717-4b4d-abd2-e016674963ad-kube-api-access-7wqk9\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731463 kubelet[3337]: I0123 23:56:13.731232 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8746e2f-b717-4b4d-abd2-e016674963ad-hostproc\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731598 kubelet[3337]: I0123 23:56:13.731246 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8746e2f-b717-4b4d-abd2-e016674963ad-host-proc-sys-net\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731598 kubelet[3337]: I0123 23:56:13.731262 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a8746e2f-b717-4b4d-abd2-e016674963ad-cilium-config-path\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731598 kubelet[3337]: I0123 23:56:13.731283 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8746e2f-b717-4b4d-abd2-e016674963ad-host-proc-sys-kernel\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731598 kubelet[3337]: I0123 23:56:13.731299 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8746e2f-b717-4b4d-abd2-e016674963ad-cilium-run\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731598 kubelet[3337]: I0123 23:56:13.731313 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8746e2f-b717-4b4d-abd2-e016674963ad-bpf-maps\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.731598 kubelet[3337]: I0123 23:56:13.731345 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8746e2f-b717-4b4d-abd2-e016674963ad-etc-cni-netd\") pod \"cilium-27dvk\" (UID: \"a8746e2f-b717-4b4d-abd2-e016674963ad\") " pod="kube-system/cilium-27dvk" Jan 23 23:56:13.762027 systemd[1]: Started sshd@25-10.200.20.22:22-10.200.16.10:35874.service - OpenSSH per-connection server daemon (10.200.16.10:35874). 
Jan 23 23:56:13.935682 containerd[1834]: time="2026-01-23T23:56:13.935277105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-27dvk,Uid:a8746e2f-b717-4b4d-abd2-e016674963ad,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:13.966394 containerd[1834]: time="2026-01-23T23:56:13.966286787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:13.966394 containerd[1834]: time="2026-01-23T23:56:13.966346987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:13.966394 containerd[1834]: time="2026-01-23T23:56:13.966357587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:13.966645 containerd[1834]: time="2026-01-23T23:56:13.966461067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:13.995138 containerd[1834]: time="2026-01-23T23:56:13.995027426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-27dvk,Uid:a8746e2f-b717-4b4d-abd2-e016674963ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\"" Jan 23 23:56:13.999977 containerd[1834]: time="2026-01-23T23:56:13.999904233Z" level=info msg="CreateContainer within sandbox \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 23:56:14.026302 containerd[1834]: time="2026-01-23T23:56:14.026257149Z" level=info msg="CreateContainer within sandbox \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"15b8870cf45c7c83677693130e086e96c2e36a42cc3c21ef805319dcde57c434\"" Jan 23 23:56:14.028122 
containerd[1834]: time="2026-01-23T23:56:14.027384511Z" level=info msg="StartContainer for \"15b8870cf45c7c83677693130e086e96c2e36a42cc3c21ef805319dcde57c434\"" Jan 23 23:56:14.070625 containerd[1834]: time="2026-01-23T23:56:14.070508170Z" level=info msg="StartContainer for \"15b8870cf45c7c83677693130e086e96c2e36a42cc3c21ef805319dcde57c434\" returns successfully" Jan 23 23:56:14.155996 containerd[1834]: time="2026-01-23T23:56:14.155940127Z" level=info msg="shim disconnected" id=15b8870cf45c7c83677693130e086e96c2e36a42cc3c21ef805319dcde57c434 namespace=k8s.io Jan 23 23:56:14.156216 containerd[1834]: time="2026-01-23T23:56:14.156200767Z" level=warning msg="cleaning up after shim disconnected" id=15b8870cf45c7c83677693130e086e96c2e36a42cc3c21ef805319dcde57c434 namespace=k8s.io Jan 23 23:56:14.156277 containerd[1834]: time="2026-01-23T23:56:14.156266008Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:14.246182 sshd[5125]: Accepted publickey for core from 10.200.16.10 port 35874 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:14.248071 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:14.251964 kubelet[3337]: E0123 23:56:14.251739 3337 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 23:56:14.252647 systemd-logind[1803]: New session 28 of user core. Jan 23 23:56:14.261231 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 23 23:56:14.514184 containerd[1834]: time="2026-01-23T23:56:14.513976978Z" level=info msg="CreateContainer within sandbox \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 23:56:14.551029 containerd[1834]: time="2026-01-23T23:56:14.550978629Z" level=info msg="CreateContainer within sandbox \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"756afbcdea9010e15b771ca8cbb40bd890269b502edf79c7d502c6cd07350cc4\"" Jan 23 23:56:14.551493 containerd[1834]: time="2026-01-23T23:56:14.551430350Z" level=info msg="StartContainer for \"756afbcdea9010e15b771ca8cbb40bd890269b502edf79c7d502c6cd07350cc4\"" Jan 23 23:56:14.596008 sshd[5125]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:14.603221 systemd[1]: sshd@25-10.200.20.22:22-10.200.16.10:35874.service: Deactivated successfully. Jan 23 23:56:14.609289 systemd-logind[1803]: Session 28 logged out. Waiting for processes to exit. Jan 23 23:56:14.609826 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 23:56:14.614937 systemd-logind[1803]: Removed session 28. 
Jan 23 23:56:14.635893 containerd[1834]: time="2026-01-23T23:56:14.635856305Z" level=info msg="StartContainer for \"756afbcdea9010e15b771ca8cbb40bd890269b502edf79c7d502c6cd07350cc4\" returns successfully" Jan 23 23:56:14.673743 containerd[1834]: time="2026-01-23T23:56:14.673685357Z" level=info msg="shim disconnected" id=756afbcdea9010e15b771ca8cbb40bd890269b502edf79c7d502c6cd07350cc4 namespace=k8s.io Jan 23 23:56:14.673743 containerd[1834]: time="2026-01-23T23:56:14.673739237Z" level=warning msg="cleaning up after shim disconnected" id=756afbcdea9010e15b771ca8cbb40bd890269b502edf79c7d502c6cd07350cc4 namespace=k8s.io Jan 23 23:56:14.673963 containerd[1834]: time="2026-01-23T23:56:14.673748437Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:14.691054 systemd[1]: Started sshd@26-10.200.20.22:22-10.200.16.10:35888.service - OpenSSH per-connection server daemon (10.200.16.10:35888). Jan 23 23:56:15.176853 sshd[5298]: Accepted publickey for core from 10.200.16.10 port 35888 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:56:15.178156 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:15.182745 systemd-logind[1803]: New session 29 of user core. Jan 23 23:56:15.187007 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 23 23:56:15.522871 containerd[1834]: time="2026-01-23T23:56:15.522699161Z" level=info msg="CreateContainer within sandbox \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 23:56:15.558814 containerd[1834]: time="2026-01-23T23:56:15.556958968Z" level=info msg="CreateContainer within sandbox \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"36fdbc6f5a59be12208cc4b73e90ab7872cc6234af79c2dc4d6734509e02b099\"" Jan 23 23:56:15.558814 containerd[1834]: time="2026-01-23T23:56:15.557621569Z" level=info msg="StartContainer for \"36fdbc6f5a59be12208cc4b73e90ab7872cc6234af79c2dc4d6734509e02b099\"" Jan 23 23:56:15.610144 containerd[1834]: time="2026-01-23T23:56:15.610096881Z" level=info msg="StartContainer for \"36fdbc6f5a59be12208cc4b73e90ab7872cc6234af79c2dc4d6734509e02b099\" returns successfully" Jan 23 23:56:15.639776 containerd[1834]: time="2026-01-23T23:56:15.639623522Z" level=info msg="shim disconnected" id=36fdbc6f5a59be12208cc4b73e90ab7872cc6234af79c2dc4d6734509e02b099 namespace=k8s.io Jan 23 23:56:15.640646 containerd[1834]: time="2026-01-23T23:56:15.640503083Z" level=warning msg="cleaning up after shim disconnected" id=36fdbc6f5a59be12208cc4b73e90ab7872cc6234af79c2dc4d6734509e02b099 namespace=k8s.io Jan 23 23:56:15.640764 containerd[1834]: time="2026-01-23T23:56:15.640748523Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:15.842988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36fdbc6f5a59be12208cc4b73e90ab7872cc6234af79c2dc4d6734509e02b099-rootfs.mount: Deactivated successfully. 
Jan 23 23:56:16.521837 containerd[1834]: time="2026-01-23T23:56:16.521663931Z" level=info msg="CreateContainer within sandbox \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 23:56:16.554424 containerd[1834]: time="2026-01-23T23:56:16.553963096Z" level=info msg="CreateContainer within sandbox \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"51adcac61a5759a028478a8e2d66277de459b8f8969c80358d3439a27119ebc5\""
Jan 23 23:56:16.555061 containerd[1834]: time="2026-01-23T23:56:16.555023417Z" level=info msg="StartContainer for \"51adcac61a5759a028478a8e2d66277de459b8f8969c80358d3439a27119ebc5\""
Jan 23 23:56:16.616108 containerd[1834]: time="2026-01-23T23:56:16.616013781Z" level=info msg="StartContainer for \"51adcac61a5759a028478a8e2d66277de459b8f8969c80358d3439a27119ebc5\" returns successfully"
Jan 23 23:56:16.636667 containerd[1834]: time="2026-01-23T23:56:16.636594489Z" level=info msg="shim disconnected" id=51adcac61a5759a028478a8e2d66277de459b8f8969c80358d3439a27119ebc5 namespace=k8s.io
Jan 23 23:56:16.636667 containerd[1834]: time="2026-01-23T23:56:16.636659729Z" level=warning msg="cleaning up after shim disconnected" id=51adcac61a5759a028478a8e2d66277de459b8f8969c80358d3439a27119ebc5 namespace=k8s.io
Jan 23 23:56:16.636667 containerd[1834]: time="2026-01-23T23:56:16.636668409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:56:16.843057 systemd[1]: run-containerd-runc-k8s.io-51adcac61a5759a028478a8e2d66277de459b8f8969c80358d3439a27119ebc5-runc.l3yzgi.mount: Deactivated successfully.
Jan 23 23:56:16.843189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51adcac61a5759a028478a8e2d66277de459b8f8969c80358d3439a27119ebc5-rootfs.mount: Deactivated successfully.
Jan 23 23:56:17.527646 containerd[1834]: time="2026-01-23T23:56:17.527231299Z" level=info msg="CreateContainer within sandbox \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 23:56:17.565132 containerd[1834]: time="2026-01-23T23:56:17.565073869Z" level=info msg="CreateContainer within sandbox \"dc85d4c95a9b0f94623855e11d2f46b68682621d759648714843293cbf1fb67c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c18b8ff76d3b4bd5d58d42ca6e093672c8d1a1fd0c4cd21d027108279914ac25\""
Jan 23 23:56:17.569365 containerd[1834]: time="2026-01-23T23:56:17.567619153Z" level=info msg="StartContainer for \"c18b8ff76d3b4bd5d58d42ca6e093672c8d1a1fd0c4cd21d027108279914ac25\""
Jan 23 23:56:17.631144 containerd[1834]: time="2026-01-23T23:56:17.630991517Z" level=info msg="StartContainer for \"c18b8ff76d3b4bd5d58d42ca6e093672c8d1a1fd0c4cd21d027108279914ac25\" returns successfully"
Jan 23 23:56:17.985850 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 23 23:56:18.027777 kubelet[3337]: I0123 23:56:18.026237 3337 setters.go:602] "Node became not ready" node="ci-4081.3.6-n-2167bbe937" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T23:56:18Z","lastTransitionTime":"2026-01-23T23:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236124 1808 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236166 1808 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236303 1808 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236634 1808 omaha_request_params.cc:62] Current group set to lts
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236716 1808 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236722 1808 update_attempter.cc:643] Scheduling an action processor start.
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236737 1808 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236760 1808 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236814 1808 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236824 1808 omaha_request_action.cc:272] Request:
Jan 23 23:56:20.237652 update_engine[1808]:
Jan 23 23:56:20.237652 update_engine[1808]:
Jan 23 23:56:20.237652 update_engine[1808]:
Jan 23 23:56:20.237652 update_engine[1808]:
Jan 23 23:56:20.237652 update_engine[1808]:
Jan 23 23:56:20.237652 update_engine[1808]:
Jan 23 23:56:20.237652 update_engine[1808]:
Jan 23 23:56:20.237652 update_engine[1808]:
Jan 23 23:56:20.237652 update_engine[1808]: I20260123 23:56:20.236829 1808 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 23 23:56:20.238661 locksmithd[1860]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 23 23:56:20.239357 update_engine[1808]: I20260123 23:56:20.238965 1808 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 23 23:56:20.239357 update_engine[1808]: I20260123 23:56:20.239243 1808 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 23:56:20.394217 update_engine[1808]: E20260123 23:56:20.394110 1808 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 23 23:56:20.394378 update_engine[1808]: I20260123 23:56:20.394345 1808 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 23 23:56:20.712226 systemd-networkd[1399]: lxc_health: Link UP
Jan 23 23:56:20.720917 systemd-networkd[1399]: lxc_health: Gained carrier
Jan 23 23:56:21.870946 systemd-networkd[1399]: lxc_health: Gained IPv6LL
Jan 23 23:56:21.960819 kubelet[3337]: I0123 23:56:21.960736 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-27dvk" podStartSLOduration=8.960716948 podStartE2EDuration="8.960716948s" podCreationTimestamp="2026-01-23 23:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:18.548945816 +0000 UTC m=+164.523824465" watchObservedRunningTime="2026-01-23 23:56:21.960716948 +0000 UTC m=+167.935595517"
Jan 23 23:56:23.942936 systemd[1]: run-containerd-runc-k8s.io-c18b8ff76d3b4bd5d58d42ca6e093672c8d1a1fd0c4cd21d027108279914ac25-runc.YI4JIV.mount: Deactivated successfully.
Jan 23 23:56:23.987357 kubelet[3337]: E0123 23:56:23.987309 3337 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:38640->127.0.0.1:46427: read tcp 127.0.0.1:38640->127.0.0.1:46427: read: connection reset by peer
Jan 23 23:56:26.209386 sshd[5298]: pam_unix(sshd:session): session closed for user core
Jan 23 23:56:26.212897 systemd[1]: sshd@26-10.200.20.22:22-10.200.16.10:35888.service: Deactivated successfully.
Jan 23 23:56:26.217227 systemd[1]: session-29.scope: Deactivated successfully.
Jan 23 23:56:26.219022 systemd-logind[1803]: Session 29 logged out. Waiting for processes to exit.
Jan 23 23:56:26.220236 systemd-logind[1803]: Removed session 29.