Mar 13 00:44:29.972476 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:44:29.972500 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:44:29.972510 kernel: BIOS-provided physical RAM map:
Mar 13 00:44:29.972517 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:44:29.972523 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Mar 13 00:44:29.972529 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Mar 13 00:44:29.972536 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Mar 13 00:44:29.972543 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Mar 13 00:44:29.972549 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Mar 13 00:44:29.972557 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Mar 13 00:44:29.972563 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Mar 13 00:44:29.972569 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Mar 13 00:44:29.972575 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Mar 13 00:44:29.972581 kernel: printk: legacy bootconsole [earlyser0] enabled
Mar 13 00:44:29.972589 kernel: NX (Execute Disable) protection: active
Mar 13 00:44:29.972597 kernel: APIC: Static calls initialized
Mar 13 00:44:29.972603 kernel: efi: EFI v2.7 by Microsoft
Mar 13 00:44:29.972610 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eac2298 RNG=0x3ffd2018
Mar 13 00:44:29.972617 kernel: random: crng init done
Mar 13 00:44:29.972624 kernel: secureboot: Secure boot disabled
Mar 13 00:44:29.972630 kernel: SMBIOS 3.1.0 present.
Mar 13 00:44:29.972637 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Mar 13 00:44:29.972643 kernel: DMI: Memory slots populated: 2/2
Mar 13 00:44:29.972650 kernel: Hypervisor detected: Microsoft Hyper-V
Mar 13 00:44:29.972656 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Mar 13 00:44:29.972663 kernel: Hyper-V: Nested features: 0x3e0101
Mar 13 00:44:29.972671 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Mar 13 00:44:29.972677 kernel: Hyper-V: Using hypercall for remote TLB flush
Mar 13 00:44:29.972684 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 13 00:44:29.972691 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 13 00:44:29.972697 kernel: tsc: Detected 2299.999 MHz processor
Mar 13 00:44:29.972704 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:44:29.972711 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:44:29.972719 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Mar 13 00:44:29.972726 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 13 00:44:29.972733 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:44:29.972741 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Mar 13 00:44:29.972748 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Mar 13 00:44:29.972754 kernel: Using GB pages for direct mapping
Mar 13 00:44:29.972761 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:44:29.972771 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Mar 13 00:44:29.972778 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:44:29.972787 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:44:29.972794 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 13 00:44:29.972800 kernel: ACPI: FACS 0x000000003FFFE000 000040
Mar 13 00:44:29.972808 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:44:29.972815 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:44:29.972822 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:44:29.972829 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Mar 13 00:44:29.972837 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Mar 13 00:44:29.972844 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 00:44:29.972851 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Mar 13 00:44:29.972858 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Mar 13 00:44:29.972894 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Mar 13 00:44:29.972901 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Mar 13 00:44:29.972907 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Mar 13 00:44:29.972914 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Mar 13 00:44:29.972921 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Mar 13 00:44:29.972929 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Mar 13 00:44:29.972936 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Mar 13 00:44:29.972942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 13 00:44:29.972949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Mar 13 00:44:29.972957 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Mar 13 00:44:29.972965 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Mar 13 00:44:29.972974 kernel: Zone ranges:
Mar 13 00:44:29.972983 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:44:29.972991 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 13 00:44:29.973001 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Mar 13 00:44:29.973009 kernel: Device empty
Mar 13 00:44:29.973017 kernel: Movable zone start for each node
Mar 13 00:44:29.973026 kernel: Early memory node ranges
Mar 13 00:44:29.973034 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 13 00:44:29.973043 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Mar 13 00:44:29.973051 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Mar 13 00:44:29.973059 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Mar 13 00:44:29.973068 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Mar 13 00:44:29.973078 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Mar 13 00:44:29.973086 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:44:29.973095 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 13 00:44:29.973103 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Mar 13 00:44:29.973111 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Mar 13 00:44:29.973119 kernel: ACPI: PM-Timer IO Port: 0x408
Mar 13 00:44:29.973128 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Mar 13 00:44:29.973136 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:44:29.973145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:44:29.973154 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:44:29.973162 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Mar 13 00:44:29.973170 kernel: TSC deadline timer available
Mar 13 00:44:29.973179 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:44:29.973187 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:44:29.973195 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:44:29.973202 kernel: CPU topo: Max. threads per core: 2
Mar 13 00:44:29.973209 kernel: CPU topo: Num. cores per package: 1
Mar 13 00:44:29.973215 kernel: CPU topo: Num. threads per package: 2
Mar 13 00:44:29.973222 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 13 00:44:29.973231 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Mar 13 00:44:29.973237 kernel: Booting paravirtualized kernel on Hyper-V
Mar 13 00:44:29.973244 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:44:29.973251 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 13 00:44:29.973258 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 13 00:44:29.973265 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 13 00:44:29.973272 kernel: pcpu-alloc: [0] 0 1
Mar 13 00:44:29.973278 kernel: Hyper-V: PV spinlocks enabled
Mar 13 00:44:29.973287 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:44:29.973295 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:44:29.973303 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 13 00:44:29.973309 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:44:29.973316 kernel: Fallback order for Node 0: 0
Mar 13 00:44:29.973323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Mar 13 00:44:29.973329 kernel: Policy zone: Normal
Mar 13 00:44:29.973336 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:44:29.973343 kernel: software IO TLB: area num 2.
Mar 13 00:44:29.973352 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 00:44:29.973359 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:44:29.973365 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:44:29.973372 kernel: Dynamic Preempt: voluntary
Mar 13 00:44:29.973379 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:44:29.973387 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:44:29.973399 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 00:44:29.973408 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:44:29.973415 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:44:29.973423 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:44:29.973430 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:44:29.973439 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 00:44:29.973447 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:44:29.973454 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:44:29.973462 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:44:29.973469 kernel: Using NULL legacy PIC
Mar 13 00:44:29.973477 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Mar 13 00:44:29.973485 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:44:29.973492 kernel: Console: colour dummy device 80x25
Mar 13 00:44:29.973499 kernel: printk: legacy console [tty1] enabled
Mar 13 00:44:29.973507 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:44:29.973514 kernel: printk: legacy bootconsole [earlyser0] disabled
Mar 13 00:44:29.973521 kernel: ACPI: Core revision 20240827
Mar 13 00:44:29.973528 kernel: Failed to register legacy timer interrupt
Mar 13 00:44:29.973536 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:44:29.973544 kernel: x2apic enabled
Mar 13 00:44:29.973552 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:44:29.973559 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 13 00:44:29.973567 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 13 00:44:29.973574 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Mar 13 00:44:29.973582 kernel: Hyper-V: Using IPI hypercalls
Mar 13 00:44:29.973589 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Mar 13 00:44:29.973596 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Mar 13 00:44:29.973604 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Mar 13 00:44:29.973613 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Mar 13 00:44:29.973620 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Mar 13 00:44:29.973627 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Mar 13 00:44:29.973635 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Mar 13 00:44:29.973641 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Mar 13 00:44:29.973647 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:44:29.973654 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 13 00:44:29.973660 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 13 00:44:29.973666 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:44:29.973675 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:44:29.973682 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:44:29.973689 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 13 00:44:29.973696 kernel: RETBleed: Vulnerable
Mar 13 00:44:29.973702 kernel: Speculative Store Bypass: Vulnerable
Mar 13 00:44:29.973710 kernel: active return thunk: its_return_thunk
Mar 13 00:44:29.973720 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 13 00:44:29.973731 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:44:29.973738 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:44:29.973744 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:44:29.973751 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 13 00:44:29.973760 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 13 00:44:29.973768 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 13 00:44:29.973774 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Mar 13 00:44:29.973784 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Mar 13 00:44:29.973792 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Mar 13 00:44:29.973799 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:44:29.973806 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 13 00:44:29.973814 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 13 00:44:29.973821 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 13 00:44:29.973829 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Mar 13 00:44:29.973837 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Mar 13 00:44:29.973846 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Mar 13 00:44:29.973853 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Mar 13 00:44:29.973881 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:44:29.973889 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:44:29.973897 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:44:29.973904 kernel: landlock: Up and running.
Mar 13 00:44:29.973912 kernel: SELinux: Initializing.
Mar 13 00:44:29.973919 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 13 00:44:29.973927 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 13 00:44:29.973934 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Mar 13 00:44:29.973942 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Mar 13 00:44:29.973949 kernel: signal: max sigframe size: 11952
Mar 13 00:44:29.973958 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:44:29.973966 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:44:29.973974 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:44:29.973981 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 13 00:44:29.973988 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:44:29.973996 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:44:29.974003 kernel: .... node #0, CPUs: #1
Mar 13 00:44:29.974010 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 00:44:29.974018 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 13 00:44:29.974027 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 308180K reserved, 0K cma-reserved)
Mar 13 00:44:29.974035 kernel: devtmpfs: initialized
Mar 13 00:44:29.974042 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:44:29.974049 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Mar 13 00:44:29.974057 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:44:29.974064 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 00:44:29.974072 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:44:29.974079 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:44:29.974086 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:44:29.974095 kernel: audit: type=2000 audit(1773362667.111:1): state=initialized audit_enabled=0 res=1
Mar 13 00:44:29.974103 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:44:29.974110 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:44:29.974118 kernel: cpuidle: using governor menu
Mar 13 00:44:29.974125 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:44:29.974132 kernel: dca service started, version 1.12.1
Mar 13 00:44:29.974140 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Mar 13 00:44:29.974147 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Mar 13 00:44:29.974156 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:44:29.974164 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:44:29.974171 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:44:29.974178 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:44:29.974186 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:44:29.974193 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:44:29.974201 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:44:29.974208 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:44:29.974216 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:44:29.974224 kernel: ACPI: Interpreter enabled
Mar 13 00:44:29.974232 kernel: ACPI: PM: (supports S0 S5)
Mar 13 00:44:29.974239 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:44:29.974247 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:44:29.974254 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 13 00:44:29.974262 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Mar 13 00:44:29.974269 kernel: iommu: Default domain type: Translated
Mar 13 00:44:29.974276 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:44:29.974283 kernel: efivars: Registered efivars operations
Mar 13 00:44:29.974292 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:44:29.974299 kernel: PCI: System does not support PCI
Mar 13 00:44:29.974306 kernel: vgaarb: loaded
Mar 13 00:44:29.974314 kernel: clocksource: Switched to clocksource tsc-early
Mar 13 00:44:29.974321 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:44:29.974328 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:44:29.974336 kernel: pnp: PnP ACPI init
Mar 13 00:44:29.974343 kernel: pnp: PnP ACPI: found 3 devices
Mar 13 00:44:29.974350 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:44:29.974358 kernel: NET: Registered PF_INET protocol family
Mar 13 00:44:29.974367 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 13 00:44:29.974374 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 13 00:44:29.974381 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:44:29.974389 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:44:29.974396 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 13 00:44:29.974404 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 13 00:44:29.974411 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 13 00:44:29.974419 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 13 00:44:29.974427 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:44:29.974434 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:44:29.974442 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:44:29.974449 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 13 00:44:29.974457 kernel: software IO TLB: mapped [mem 0x000000003a9a9000-0x000000003e9a9000] (64MB)
Mar 13 00:44:29.974464 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Mar 13 00:44:29.974472 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Mar 13 00:44:29.974480 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Mar 13 00:44:29.974486 kernel: clocksource: Switched to clocksource tsc
Mar 13 00:44:29.974495 kernel: Initialise system trusted keyrings
Mar 13 00:44:29.974504 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 13 00:44:29.974514 kernel: Key type asymmetric registered
Mar 13 00:44:29.974522 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:44:29.974530 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:44:29.974538 kernel: io scheduler mq-deadline registered
Mar 13 00:44:29.974548 kernel: io scheduler kyber registered
Mar 13 00:44:29.974555 kernel: io scheduler bfq registered
Mar 13 00:44:29.974563 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:44:29.974572 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:44:29.974580 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:44:29.974588 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 13 00:44:29.974595 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:44:29.974603 kernel: i8042: PNP: No PS/2 controller found.
Mar 13 00:44:29.974749 kernel: rtc_cmos 00:02: registered as rtc0
Mar 13 00:44:29.974817 kernel: rtc_cmos 00:02: setting system clock to 2026-03-13T00:44:29 UTC (1773362669)
Mar 13 00:44:29.974894 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Mar 13 00:44:29.974906 kernel: intel_pstate: Intel P-state driver initializing
Mar 13 00:44:29.974914 kernel: efifb: probing for efifb
Mar 13 00:44:29.974921 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 13 00:44:29.974929 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 13 00:44:29.974937 kernel: efifb: scrolling: redraw
Mar 13 00:44:29.974945 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 13 00:44:29.974952 kernel: Console: switching to colour frame buffer device 128x48
Mar 13 00:44:29.974960 kernel: fb0: EFI VGA frame buffer device
Mar 13 00:44:29.974968 kernel: pstore: Using crash dump compression: deflate
Mar 13 00:44:29.974976 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 13 00:44:29.974984 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:44:29.974992 kernel: Segment Routing with IPv6
Mar 13 00:44:29.975000 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:44:29.975008 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:44:29.975016 kernel: Key type dns_resolver registered
Mar 13 00:44:29.975023 kernel: IPI shorthand broadcast: enabled
Mar 13 00:44:29.975031 kernel: sched_clock: Marking stable (2964004192, 87275419)->(3329393827, -278114216)
Mar 13 00:44:29.975038 kernel: registered taskstats version 1
Mar 13 00:44:29.975047 kernel: Loading compiled-in X.509 certificates
Mar 13 00:44:29.975055 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:44:29.975063 kernel: Demotion targets for Node 0: null
Mar 13 00:44:29.975070 kernel: Key type .fscrypt registered
Mar 13 00:44:29.975077 kernel: Key type fscrypt-provisioning registered
Mar 13 00:44:29.975085 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 00:44:29.975092 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:44:29.975100 kernel: ima: No architecture policies found
Mar 13 00:44:29.975107 kernel: clk: Disabling unused clocks
Mar 13 00:44:29.975116 kernel: Warning: unable to open an initial console.
Mar 13 00:44:29.975124 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:44:29.975131 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:44:29.975139 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:44:29.975147 kernel: Run /init as init process
Mar 13 00:44:29.975154 kernel: with arguments:
Mar 13 00:44:29.975162 kernel: /init
Mar 13 00:44:29.975169 kernel: with environment:
Mar 13 00:44:29.975176 kernel: HOME=/
Mar 13 00:44:29.975185 kernel: TERM=linux
Mar 13 00:44:29.975194 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:44:29.975205 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:44:29.975214 systemd[1]: Detected virtualization microsoft.
Mar 13 00:44:29.975221 systemd[1]: Detected architecture x86-64.
Mar 13 00:44:29.975229 systemd[1]: Running in initrd.
Mar 13 00:44:29.975237 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:44:29.975247 systemd[1]: Hostname set to .
Mar 13 00:44:29.975255 systemd[1]: Initializing machine ID from random generator.
Mar 13 00:44:29.975263 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:44:29.975271 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:44:29.975279 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:44:29.975288 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:44:29.975296 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:44:29.975304 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:44:29.975314 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:44:29.975323 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:44:29.975332 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:44:29.975340 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:44:29.975348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:44:29.975356 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:44:29.975364 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:44:29.975373 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:44:29.975381 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:44:29.975389 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:44:29.975397 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:44:29.975406 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:44:29.975414 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:44:29.975422 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:44:29.975430 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:44:29.975438 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:44:29.975447 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:44:29.975455 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:44:29.975463 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:44:29.975471 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:44:29.975480 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:44:29.975488 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:44:29.975496 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:44:29.975505 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:44:29.975521 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:44:29.975544 systemd-journald[186]: Collecting audit messages is disabled.
Mar 13 00:44:29.975565 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:44:29.975575 systemd-journald[186]: Journal started
Mar 13 00:44:29.975598 systemd-journald[186]: Runtime Journal (/run/log/journal/cfd8a52a04c44ac4933b1365ed5ca6c8) is 8M, max 158.6M, 150.6M free.
Mar 13 00:44:29.972442 systemd-modules-load[187]: Inserted module 'overlay'
Mar 13 00:44:29.984389 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:44:29.986973 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:44:29.993332 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:44:29.997504 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:44:30.006959 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:44:30.012482 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:44:30.012503 kernel: Bridge firewalling registered
Mar 13 00:44:30.012604 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:44:30.013377 systemd-modules-load[187]: Inserted module 'br_netfilter'
Mar 13 00:44:30.016579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:44:30.024333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:44:30.031573 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:44:30.037725 systemd-tmpfiles[202]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 13 00:44:30.041505 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:44:30.046617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:44:30.048959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 00:44:30.057791 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 13 00:44:30.059500 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 00:44:30.073063 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:44:30.078324 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:44:30.084763 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:44:30.096132 systemd-resolved[211]: Positive Trust Anchors: Mar 13 00:44:30.097837 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:44:30.097887 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:44:30.114514 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:44:30.127363 systemd-resolved[211]: Defaulting to hostname 'linux'. Mar 13 00:44:30.129739 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:44:30.131921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:44:30.172879 kernel: SCSI subsystem initialized Mar 13 00:44:30.179875 kernel: Loading iSCSI transport class v2.0-870. Mar 13 00:44:30.188877 kernel: iscsi: registered transport (tcp) Mar 13 00:44:30.205083 kernel: iscsi: registered transport (qla4xxx) Mar 13 00:44:30.205128 kernel: QLogic iSCSI HBA Driver Mar 13 00:44:30.218034 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:44:30.226814 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:44:30.231200 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 00:44:30.263614 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 13 00:44:30.267020 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 13 00:44:30.310879 kernel: raid6: avx512x4 gen() 44926 MB/s Mar 13 00:44:30.328877 kernel: raid6: avx512x2 gen() 44028 MB/s Mar 13 00:44:30.345872 kernel: raid6: avx512x1 gen() 26748 MB/s Mar 13 00:44:30.363872 kernel: raid6: avx2x4 gen() 39325 MB/s Mar 13 00:44:30.380873 kernel: raid6: avx2x2 gen() 41330 MB/s Mar 13 00:44:30.398792 kernel: raid6: avx2x1 gen() 31669 MB/s Mar 13 00:44:30.398811 kernel: raid6: using algorithm avx512x4 gen() 44926 MB/s Mar 13 00:44:30.417445 kernel: raid6: .... xor() 7668 MB/s, rmw enabled Mar 13 00:44:30.417471 kernel: raid6: using avx512x2 recovery algorithm Mar 13 00:44:30.434879 kernel: xor: automatically using best checksumming function avx Mar 13 00:44:30.549878 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 13 00:44:30.554609 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:44:30.558601 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:44:30.573745 systemd-udevd[435]: Using default interface naming scheme 'v255'. Mar 13 00:44:30.577619 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:44:30.584965 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 13 00:44:30.601355 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Mar 13 00:44:30.619028 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:44:30.621976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Mar 13 00:44:30.652902 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:44:30.660011 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 13 00:44:30.698881 kernel: cryptd: max_cpu_qlen set to 1000 Mar 13 00:44:30.725355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:44:30.728933 kernel: hv_vmbus: Vmbus version:5.3 Mar 13 00:44:30.725485 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:44:30.732465 kernel: AES CTR mode by8 optimization enabled Mar 13 00:44:30.732276 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:44:30.741952 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 13 00:44:30.741984 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 13 00:44:30.742537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:44:30.753504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:44:30.755732 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:44:30.762889 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 13 00:44:30.764011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 13 00:44:30.769878 kernel: PTP clock support registered Mar 13 00:44:30.776223 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 13 00:44:30.788903 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 13 00:44:30.797905 kernel: hv_vmbus: registering driver hv_pci Mar 13 00:44:30.802907 kernel: hv_vmbus: registering driver hv_storvsc Mar 13 00:44:30.802941 kernel: hv_vmbus: registering driver hv_netvsc Mar 13 00:44:30.807059 kernel: scsi host0: storvsc_host_t Mar 13 00:44:30.809714 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 13 00:44:30.809772 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Mar 13 00:44:30.812950 kernel: hv_utils: Registering HyperV Utility Driver Mar 13 00:44:30.814193 kernel: hv_vmbus: registering driver hv_utils Mar 13 00:44:30.815893 kernel: hv_utils: Shutdown IC version 3.2 Mar 13 00:44:30.821666 kernel: hv_utils: Heartbeat IC version 3.0 Mar 13 00:44:30.821704 kernel: hv_utils: TimeSync IC version 4.0 Mar 13 00:44:31.260714 systemd-resolved[211]: Clock change detected. Flushing caches. Mar 13 00:44:31.264255 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 13 00:44:31.273329 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Mar 13 00:44:31.273461 kernel: hv_vmbus: registering driver hid_hyperv Mar 13 00:44:31.273473 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 13 00:44:31.286166 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 13 00:44:31.286338 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Mar 13 00:44:31.286468 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5236ef5e (unnamed net_device) (uninitialized): VF slot 1 added Mar 13 00:44:31.297789 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Mar 13 00:44:31.306136 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Mar 13 00:44:31.306185 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Mar 13 00:44:31.309315 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 13 00:44:31.309454 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 13 00:44:31.312126 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 13 00:44:31.325869 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Mar 13 00:44:31.326018 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Mar 13 00:44:31.332131 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 00:44:31.349124 kernel: nvme nvme0: pci function c05b:00:00.0 Mar 13 00:44:31.349269 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Mar 13 00:44:31.354533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#125 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 00:44:31.505137 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 13 00:44:31.510147 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 13 00:44:31.747136 kernel: nvme nvme0: using unchecked data buffer
Mar 13 00:44:31.903475 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Mar 13 00:44:31.947524 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Mar 13 00:44:31.984997 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Mar 13 00:44:31.986940 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Mar 13 00:44:31.997097 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Mar 13 00:44:32.002403 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 13 00:44:32.003318 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:44:32.009300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:44:32.009352 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 00:44:32.013668 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 13 00:44:32.024055 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 13 00:44:32.033134 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 13 00:44:32.040132 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 13 00:44:32.046802 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:44:32.311390 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Mar 13 00:44:32.311572 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Mar 13 00:44:32.314127 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Mar 13 00:44:32.315402 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Mar 13 00:44:32.321208 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Mar 13 00:44:32.324191 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Mar 13 00:44:32.329620 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Mar 13 00:44:32.329644 kernel: pci 7870:00:00.0: enabling Extended Tags Mar 13 00:44:32.344476 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Mar 13 00:44:32.344648 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Mar 13 00:44:32.346132 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Mar 13 00:44:32.360420 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Mar 13 00:44:32.370127 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Mar 13 00:44:32.373167 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5236ef5e eth0: VF registering: eth1 Mar 13 00:44:32.373328 kernel: mana 7870:00:00.0 eth1: joined to eth0 Mar 13 00:44:32.376137 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Mar 13 00:44:33.045908 disk-uuid[652]: The operation has completed successfully. Mar 13 00:44:33.048827 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 13 00:44:33.122656 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 13 00:44:33.122747 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 13 00:44:33.146138 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Mar 13 00:44:33.161045 sh[695]: Success Mar 13 00:44:33.189635 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 13 00:44:33.189679 kernel: device-mapper: uevent: version 1.0.3 Mar 13 00:44:33.190880 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 13 00:44:33.199156 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 13 00:44:33.406597 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 13 00:44:33.413942 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 13 00:44:33.426392 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 13 00:44:33.440157 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (708) Mar 13 00:44:33.440298 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3 Mar 13 00:44:33.442571 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:44:33.696608 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Mar 13 00:44:33.696665 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 13 00:44:33.696677 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 13 00:44:33.727346 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 13 00:44:33.727755 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 13 00:44:33.727990 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 13 00:44:33.730225 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 13 00:44:33.735743 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 13 00:44:33.768000 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (743) Mar 13 00:44:33.768040 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:44:33.769523 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:44:33.789803 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 13 00:44:33.789836 kernel: BTRFS info (device nvme0n1p6): turning on async discard Mar 13 00:44:33.789847 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 13 00:44:33.795132 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:44:33.795746 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 13 00:44:33.800412 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 13 00:44:33.821434 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 00:44:33.824227 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 00:44:33.849993 systemd-networkd[877]: lo: Link UP Mar 13 00:44:33.850000 systemd-networkd[877]: lo: Gained carrier Mar 13 00:44:33.850946 systemd-networkd[877]: Enumeration completed Mar 13 00:44:33.856367 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Mar 13 00:44:33.851332 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:44:33.851335 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:44:33.851858 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:44:33.867199 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Mar 13 00:44:33.858235 systemd[1]: Reached target network.target - Network. 
Mar 13 00:44:33.872291 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5236ef5e eth0: Data path switched to VF: enP30832s1 Mar 13 00:44:33.872040 systemd-networkd[877]: enP30832s1: Link UP Mar 13 00:44:33.872109 systemd-networkd[877]: eth0: Link UP Mar 13 00:44:33.872227 systemd-networkd[877]: eth0: Gained carrier Mar 13 00:44:33.872238 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:44:33.884260 systemd-networkd[877]: enP30832s1: Gained carrier Mar 13 00:44:33.895143 systemd-networkd[877]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 13 00:44:34.701588 ignition[839]: Ignition 2.22.0 Mar 13 00:44:34.701600 ignition[839]: Stage: fetch-offline Mar 13 00:44:34.704336 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:44:34.701697 ignition[839]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:44:34.709894 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 13 00:44:34.701703 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 00:44:34.702133 ignition[839]: parsed url from cmdline: "" Mar 13 00:44:34.702138 ignition[839]: no config URL provided Mar 13 00:44:34.702144 ignition[839]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 00:44:34.702153 ignition[839]: no config at "/usr/lib/ignition/user.ign" Mar 13 00:44:34.702158 ignition[839]: failed to fetch config: resource requires networking Mar 13 00:44:34.702411 ignition[839]: Ignition finished successfully
Mar 13 00:44:34.733514 ignition[887]: Ignition 2.22.0 Mar 13 00:44:34.733523 ignition[887]: Stage: fetch Mar 13 00:44:34.733729 ignition[887]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:44:34.733737 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 00:44:34.733814 ignition[887]: parsed url from cmdline: "" Mar 13 00:44:34.733817 ignition[887]: no config URL provided Mar 13 00:44:34.733823 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 00:44:34.733829 ignition[887]: no config at "/usr/lib/ignition/user.ign" Mar 13 00:44:34.733849 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 13 00:44:34.814568 ignition[887]: GET result: OK Mar 13 00:44:34.814632 ignition[887]: config has been read from IMDS userdata Mar 13 00:44:34.814656 ignition[887]: parsing config with SHA512: fd7560f1cd58f974fec087c7e89d2c91b1610b39553152abc419f31de28391e3ecca91c2feb26195c1c9a621a453200244915527418ed24ebd074db0cf84e5a6 Mar 13 00:44:34.819472 unknown[887]: fetched base config from "system" Mar 13 00:44:34.820356 ignition[887]: fetch: fetch complete Mar 13 00:44:34.819483 unknown[887]: fetched base config from "system" Mar 13 00:44:34.820368 ignition[887]: fetch: fetch passed Mar 13 00:44:34.819489 unknown[887]: fetched user config from "azure" Mar 13 00:44:34.820416 ignition[887]: Ignition finished successfully
Mar 13 00:44:34.822023 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 13 00:44:34.824352 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 13 00:44:34.851630 ignition[893]: Ignition 2.22.0 Mar 13 00:44:34.851640 ignition[893]: Stage: kargs Mar 13 00:44:34.851810 ignition[893]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:44:34.854598 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 13 00:44:34.851822 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 00:44:34.858428 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 13 00:44:34.852734 ignition[893]: kargs: kargs passed Mar 13 00:44:34.852767 ignition[893]: Ignition finished successfully Mar 13 00:44:34.880691 ignition[900]: Ignition 2.22.0 Mar 13 00:44:34.880701 ignition[900]: Stage: disks Mar 13 00:44:34.882853 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 13 00:44:34.880888 ignition[900]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:44:34.885082 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 13 00:44:34.880895 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 00:44:34.881610 ignition[900]: disks: disks passed Mar 13 00:44:34.881640 ignition[900]: Ignition finished successfully Mar 13 00:44:34.895170 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 13 00:44:34.895480 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:44:34.908166 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:44:34.908655 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:44:34.909379 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:44:34.972013 systemd-fsck[909]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Mar 13 00:44:34.976547 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 13 00:44:34.981188 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 13 00:44:35.225771 systemd-networkd[877]: eth0: Gained IPv6LL Mar 13 00:44:35.227239 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none. Mar 13 00:44:35.227156 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 13 00:44:35.229406 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 13 00:44:35.250724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 00:44:35.259188 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 13 00:44:35.262578 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 13 00:44:35.267848 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 13 00:44:35.267890 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:44:35.270406 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (918) Mar 13 00:44:35.270441 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:44:35.272844 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:44:35.280411 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 13 00:44:35.285952 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 13 00:44:35.285986 kernel: BTRFS info (device nvme0n1p6): turning on async discard Mar 13 00:44:35.285998 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 13 00:44:35.286872 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 00:44:35.291270 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 13 00:44:35.721447 coreos-metadata[920]: Mar 13 00:44:35.721 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 13 00:44:35.725209 coreos-metadata[920]: Mar 13 00:44:35.724 INFO Fetch successful Mar 13 00:44:35.725209 coreos-metadata[920]: Mar 13 00:44:35.724 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 13 00:44:35.732996 coreos-metadata[920]: Mar 13 00:44:35.732 INFO Fetch successful Mar 13 00:44:35.747163 coreos-metadata[920]: Mar 13 00:44:35.747 INFO wrote hostname ci-4459.2.4-n-0cdf9482ba to /sysroot/etc/hostname Mar 13 00:44:35.749964 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 00:44:35.871245 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Mar 13 00:44:35.908924 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Mar 13 00:44:35.913343 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Mar 13 00:44:35.917604 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Mar 13 00:44:36.780475 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 13 00:44:36.785384 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 13 00:44:36.802231 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 13 00:44:36.810446 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 13 00:44:36.813064 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:44:36.837377 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 13 00:44:36.842301 ignition[1037]: INFO : Ignition 2.22.0 Mar 13 00:44:36.842301 ignition[1037]: INFO : Stage: mount Mar 13 00:44:36.847868 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:44:36.847868 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 00:44:36.847868 ignition[1037]: INFO : mount: mount passed Mar 13 00:44:36.847868 ignition[1037]: INFO : Ignition finished successfully Mar 13 00:44:36.845064 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 13 00:44:36.847970 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 13 00:44:36.863259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 00:44:36.880141 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1050) Mar 13 00:44:36.882397 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:44:36.882485 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:44:36.887879 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 13 00:44:36.887914 kernel: BTRFS info (device nvme0n1p6): turning on async discard Mar 13 00:44:36.889296 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 13 00:44:36.891354 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 13 00:44:36.921002 ignition[1066]: INFO : Ignition 2.22.0 Mar 13 00:44:36.921002 ignition[1066]: INFO : Stage: files Mar 13 00:44:36.925194 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:44:36.925194 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 00:44:36.925194 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Mar 13 00:44:36.935446 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 13 00:44:36.935446 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 13 00:44:36.955502 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 13 00:44:36.957791 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 13 00:44:36.957791 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 13 00:44:36.955967 unknown[1066]: wrote ssh authorized keys file for user: core
Mar 13 00:44:36.987445 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 13 00:44:36.990050 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 13 00:44:37.028719 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 13 00:44:37.085902 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 13 00:44:37.085902 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 13 00:44:37.092407 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 13 00:44:37.399943 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 13 00:44:37.695716 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 00:44:37.695716 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 13 00:44:37.702159 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 13 00:44:37.702159 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:44:37.702159 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:44:37.702159 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:44:37.702159 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:44:37.702159 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:44:37.702159 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:44:37.719325 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:44:37.719325 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:44:37.719325 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:44:37.719325 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:44:37.719325 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:44:37.719325 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 13 00:44:37.800288 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 13 00:44:38.530172 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:44:38.530172 ignition[1066]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 13 00:44:38.557437 ignition[1066]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:44:38.566756 ignition[1066]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:44:38.566756 ignition[1066]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 13 00:44:38.574193 ignition[1066]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Mar 13 00:44:38.574193 ignition[1066]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Mar 13 00:44:38.574193 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:44:38.574193 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:44:38.574193 ignition[1066]: INFO : files: files passed Mar 13 00:44:38.574193 ignition[1066]: INFO : Ignition finished successfully
Mar 13 00:44:38.572231 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 13 00:44:38.579963 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 13 00:44:38.589223 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 13 00:44:38.601191 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 00:44:38.601262 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 00:44:38.620234 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:44:38.620234 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:44:38.629173 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:44:38.623208 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:44:38.625613 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 00:44:38.631986 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 00:44:38.675451 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 00:44:38.675516 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 00:44:38.678356 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 00:44:38.682182 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:44:38.684923 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 00:44:38.687231 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 00:44:38.702260 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:44:38.706517 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 13 00:44:38.720898 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:44:38.727270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:44:38.730261 systemd[1]: Stopped target timers.target - Timer Units. Mar 13 00:44:38.734287 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 13 00:44:38.734409 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:44:38.734880 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 13 00:44:38.735464 systemd[1]: Stopped target basic.target - Basic System. Mar 13 00:44:38.735770 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 13 00:44:38.736277 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:44:38.736532 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 13 00:44:38.736819 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 13 00:44:38.745790 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 13 00:44:38.748912 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:44:38.754266 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 13 00:44:38.758265 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 13 00:44:38.762263 systemd[1]: Stopped target swap.target - Swaps. 
Mar 13 00:44:38.765029 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 13 00:44:38.765165 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 13 00:44:38.773676 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:44:38.778238 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:44:38.785895 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 13 00:44:38.786230 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:44:38.789579 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 13 00:44:38.791907 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 13 00:44:38.799063 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 13 00:44:38.799184 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:44:38.803138 systemd[1]: ignition-files.service: Deactivated successfully. Mar 13 00:44:38.803224 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 13 00:44:38.805470 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 13 00:44:38.805575 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 00:44:38.828294 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 13 00:44:38.832180 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 13 00:44:38.832339 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:44:38.839297 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 13 00:44:38.844188 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 13 00:44:38.844758 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 13 00:44:38.849657 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 13 00:44:38.849749 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:44:38.856246 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 13 00:44:38.856346 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 13 00:44:38.879127 ignition[1121]: INFO : Ignition 2.22.0 Mar 13 00:44:38.879127 ignition[1121]: INFO : Stage: umount Mar 13 00:44:38.879127 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:44:38.879127 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 00:44:38.879127 ignition[1121]: INFO : umount: umount passed Mar 13 00:44:38.879127 ignition[1121]: INFO : Ignition finished successfully Mar 13 00:44:38.880650 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 13 00:44:38.880711 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 13 00:44:38.889532 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 13 00:44:38.890299 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 13 00:44:38.890344 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 13 00:44:38.894202 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 13 00:44:38.894242 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 13 00:44:38.897122 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 13 00:44:38.897154 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 13 00:44:38.899286 systemd[1]: Stopped target network.target - Network. Mar 13 00:44:38.913224 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 13 00:44:38.913278 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:44:38.919232 systemd[1]: Stopped target paths.target - Path Units. 
Mar 13 00:44:38.919720 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 13 00:44:38.924155 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:44:38.927503 systemd[1]: Stopped target slices.target - Slice Units. Mar 13 00:44:38.927755 systemd[1]: Stopped target sockets.target - Socket Units. Mar 13 00:44:38.934183 systemd[1]: iscsid.socket: Deactivated successfully. Mar 13 00:44:38.934224 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:44:38.936583 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 13 00:44:38.936614 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:44:38.940167 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 13 00:44:38.940214 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 13 00:44:38.942294 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 13 00:44:38.942326 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 13 00:44:38.946332 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 13 00:44:38.950209 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 13 00:44:38.954374 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 13 00:44:38.954441 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 13 00:44:38.956839 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 13 00:44:38.956924 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 13 00:44:38.961462 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 13 00:44:38.961554 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 13 00:44:38.965645 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 13 00:44:38.965801 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Mar 13 00:44:38.965892 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 13 00:44:38.970958 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 13 00:44:38.971909 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 13 00:44:38.975480 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 13 00:44:38.975503 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:44:38.982808 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 13 00:44:38.986124 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 13 00:44:39.026603 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5236ef5e eth0: Data path switched from VF: enP30832s1 Mar 13 00:44:39.026768 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Mar 13 00:44:38.986172 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 00:44:38.986420 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:44:38.986451 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:44:38.987638 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 13 00:44:38.987668 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 13 00:44:38.997675 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 13 00:44:38.997752 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:44:39.000592 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:44:39.006972 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:44:39.007012 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Mar 13 00:44:39.016390 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 13 00:44:39.017225 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:44:39.022575 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 13 00:44:39.022775 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 13 00:44:39.028226 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 13 00:44:39.028258 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:44:39.033182 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 13 00:44:39.033229 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:44:39.040151 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 13 00:44:39.040191 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 13 00:44:39.042813 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 00:44:39.042850 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:44:39.046772 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 13 00:44:39.055029 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 13 00:44:39.055084 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:44:39.057233 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 13 00:44:39.057266 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:44:39.062809 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:44:39.062857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:44:39.069749 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Mar 13 00:44:39.069788 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 13 00:44:39.069813 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:44:39.070043 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 13 00:44:39.070133 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 13 00:44:39.073361 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 13 00:44:39.073434 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 13 00:44:39.074034 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 13 00:44:39.081225 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 13 00:44:39.104676 systemd[1]: Switching root. Mar 13 00:44:39.200809 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). Mar 13 00:44:39.200848 systemd-journald[186]: Journal stopped Mar 13 00:44:43.369144 kernel: SELinux: policy capability network_peer_controls=1 Mar 13 00:44:43.369175 kernel: SELinux: policy capability open_perms=1 Mar 13 00:44:43.369188 kernel: SELinux: policy capability extended_socket_class=1 Mar 13 00:44:43.369196 kernel: SELinux: policy capability always_check_network=0 Mar 13 00:44:43.369204 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 13 00:44:43.369212 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 13 00:44:43.369221 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 13 00:44:43.369229 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 13 00:44:43.369237 kernel: SELinux: policy capability userspace_initial_context=0 Mar 13 00:44:43.369246 kernel: audit: type=1403 audit(1773362680.605:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 13 00:44:43.369256 systemd[1]: Successfully loaded SELinux policy in 124.077ms. 
Mar 13 00:44:43.369266 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.065ms. Mar 13 00:44:43.369276 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:44:43.369286 systemd[1]: Detected virtualization microsoft. Mar 13 00:44:43.369297 systemd[1]: Detected architecture x86-64. Mar 13 00:44:43.369306 systemd[1]: Detected first boot. Mar 13 00:44:43.369317 systemd[1]: Hostname set to . Mar 13 00:44:43.369325 systemd[1]: Initializing machine ID from random generator. Mar 13 00:44:43.369336 zram_generator::config[1165]: No configuration found. Mar 13 00:44:43.369347 kernel: Guest personality initialized and is inactive Mar 13 00:44:43.369358 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Mar 13 00:44:43.369369 kernel: Initialized host personality Mar 13 00:44:43.369378 kernel: NET: Registered PF_VSOCK protocol family Mar 13 00:44:43.369388 systemd[1]: Populated /etc with preset unit settings. Mar 13 00:44:43.369399 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 13 00:44:43.369408 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 13 00:44:43.369418 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 13 00:44:43.369427 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 13 00:44:43.369439 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 13 00:44:43.369450 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 13 00:44:43.369461 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Mar 13 00:44:43.369471 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 13 00:44:43.369481 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 13 00:44:43.369491 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 13 00:44:43.369502 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 13 00:44:43.369513 systemd[1]: Created slice user.slice - User and Session Slice. Mar 13 00:44:43.369523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:44:43.369534 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:44:43.369543 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 13 00:44:43.369553 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 13 00:44:43.369564 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 13 00:44:43.369572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:44:43.369580 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 13 00:44:43.369590 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:44:43.369598 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:44:43.369606 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 13 00:44:43.369614 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 13 00:44:43.369623 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 13 00:44:43.369631 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Mar 13 00:44:43.369640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:44:43.369648 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 00:44:43.369659 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:44:43.369667 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:44:43.369676 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 13 00:44:43.369685 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 13 00:44:43.369695 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 13 00:44:43.369706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:44:43.369715 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:44:43.369724 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:44:43.369733 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 13 00:44:43.369742 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 13 00:44:43.369751 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 13 00:44:43.369760 systemd[1]: Mounting media.mount - External Media Directory... Mar 13 00:44:43.369769 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:43.369780 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 13 00:44:43.369788 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 13 00:44:43.369796 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 13 00:44:43.369805 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Mar 13 00:44:43.369815 systemd[1]: Reached target machines.target - Containers. Mar 13 00:44:43.369824 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 13 00:44:43.369833 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:44:43.369842 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:44:43.369852 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 13 00:44:43.369861 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:44:43.369869 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:44:43.369878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:44:43.369888 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 13 00:44:43.369897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:44:43.369906 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 13 00:44:43.369916 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 13 00:44:43.369927 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 13 00:44:43.369935 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 13 00:44:43.369944 systemd[1]: Stopped systemd-fsck-usr.service. Mar 13 00:44:43.369954 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:44:43.369964 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 13 00:44:43.369978 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 13 00:44:43.369988 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 13 00:44:43.369996 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 13 00:44:43.370005 kernel: loop: module loaded Mar 13 00:44:43.370015 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 13 00:44:43.370024 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 00:44:43.370034 systemd[1]: verity-setup.service: Deactivated successfully. Mar 13 00:44:43.370062 systemd-journald[1248]: Collecting audit messages is disabled. Mar 13 00:44:43.370085 systemd[1]: Stopped verity-setup.service. Mar 13 00:44:43.370095 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:43.370104 systemd-journald[1248]: Journal started Mar 13 00:44:43.374250 systemd-journald[1248]: Runtime Journal (/run/log/journal/556d0db306824c4f9ed380d8de462580) is 8M, max 158.6M, 150.6M free. Mar 13 00:44:42.968269 systemd[1]: Queued start job for default target multi-user.target. Mar 13 00:44:42.979526 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 13 00:44:42.979864 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 13 00:44:43.378137 kernel: fuse: init (API version 7.41) Mar 13 00:44:43.384381 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 00:44:43.386688 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 13 00:44:43.388759 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 13 00:44:43.392047 systemd[1]: Mounted media.mount - External Media Directory. 
Mar 13 00:44:43.394369 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 13 00:44:43.400262 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 13 00:44:43.401803 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 13 00:44:43.403666 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:44:43.406502 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 13 00:44:43.406645 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 13 00:44:43.409398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:44:43.409540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:44:43.412472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:44:43.412621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:44:43.416366 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 13 00:44:43.416502 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 13 00:44:43.417903 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:44:43.418142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:44:43.422404 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 13 00:44:43.426603 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 00:44:43.430771 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:44:43.432367 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 13 00:44:43.442716 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 00:44:43.448190 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Mar 13 00:44:43.453508 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 13 00:44:43.455793 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 13 00:44:43.455893 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:44:43.460966 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 13 00:44:43.466391 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 13 00:44:43.468689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:44:43.480131 kernel: ACPI: bus type drm_connector registered Mar 13 00:44:43.484310 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 13 00:44:43.491312 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 13 00:44:43.493864 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 00:44:43.495359 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 13 00:44:43.499230 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 00:44:43.500385 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:44:43.505239 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 13 00:44:43.508789 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 13 00:44:43.514353 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 00:44:43.515375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 13 00:44:43.520027 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 13 00:44:43.522633 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 13 00:44:43.525427 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 13 00:44:43.528068 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 13 00:44:43.534097 systemd-journald[1248]: Time spent on flushing to /var/log/journal/556d0db306824c4f9ed380d8de462580 is 33.705ms for 995 entries. Mar 13 00:44:43.534097 systemd-journald[1248]: System Journal (/var/log/journal/556d0db306824c4f9ed380d8de462580) is 8M, max 2.6G, 2.6G free. Mar 13 00:44:43.657860 systemd-journald[1248]: Received client request to flush runtime journal. Mar 13 00:44:43.657907 kernel: loop0: detected capacity change from 0 to 110984 Mar 13 00:44:43.535484 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 13 00:44:43.543853 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 13 00:44:43.566633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:44:43.580365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:44:43.660144 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 13 00:44:43.717420 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 13 00:44:43.727490 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 13 00:44:43.732307 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 00:44:43.779235 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Mar 13 00:44:43.779254 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. 
Mar 13 00:44:43.781495 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:44:43.981313 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 13 00:44:44.079138 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 13 00:44:44.120137 kernel: loop1: detected capacity change from 0 to 219192 Mar 13 00:44:44.210060 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 13 00:44:44.214776 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:44:44.220132 kernel: loop2: detected capacity change from 0 to 128560 Mar 13 00:44:44.240166 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Mar 13 00:44:44.469449 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:44:44.476998 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 00:44:44.510107 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 13 00:44:44.566197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#143 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 00:44:44.631945 kernel: hv_vmbus: registering driver hyperv_fb Mar 13 00:44:44.631996 kernel: mousedev: PS/2 mouse device common for all mice Mar 13 00:44:44.643135 kernel: hv_vmbus: registering driver hv_balloon Mar 13 00:44:44.652297 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Mar 13 00:44:44.656210 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 13 00:44:44.660538 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 13 00:44:44.660585 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 13 00:44:44.665399 kernel: Console: switching to colour dummy device 80x25 Mar 13 00:44:44.670143 kernel: Console: switching to colour frame buffer device 128x48 Mar 13 00:44:44.764015 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 13 00:44:44.829365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:44:44.844540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:44:44.844717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:44:44.849226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:44:44.858138 kernel: loop3: detected capacity change from 0 to 27936 Mar 13 00:44:44.860968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:44:44.861252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:44:44.868214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:44:44.883218 systemd-networkd[1343]: lo: Link UP Mar 13 00:44:44.883224 systemd-networkd[1343]: lo: Gained carrier Mar 13 00:44:44.885404 systemd-networkd[1343]: Enumeration completed Mar 13 00:44:44.885471 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:44:44.890297 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 13 00:44:44.891315 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 13 00:44:44.891322 systemd-networkd[1343]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:44:44.902430 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Mar 13 00:44:44.899010 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 13 00:44:44.911131 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Mar 13 00:44:44.916170 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5236ef5e eth0: Data path switched to VF: enP30832s1 Mar 13 00:44:44.919431 systemd-networkd[1343]: enP30832s1: Link UP Mar 13 00:44:44.919500 systemd-networkd[1343]: eth0: Link UP Mar 13 00:44:44.919502 systemd-networkd[1343]: eth0: Gained carrier Mar 13 00:44:44.919518 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:44:44.928399 systemd-networkd[1343]: enP30832s1: Gained carrier Mar 13 00:44:44.932580 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 13 00:44:44.939191 systemd-networkd[1343]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 13 00:44:45.016145 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Mar 13 00:44:45.019545 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Mar 13 00:44:45.022233 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 13 00:44:45.094997 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Mar 13 00:44:45.176160 kernel: loop4: detected capacity change from 0 to 110984 Mar 13 00:44:45.188138 kernel: loop5: detected capacity change from 0 to 219192 Mar 13 00:44:45.203160 kernel: loop6: detected capacity change from 0 to 128560 Mar 13 00:44:45.215682 kernel: loop7: detected capacity change from 0 to 27936 Mar 13 00:44:45.213222 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:44:45.224795 (sd-merge)[1430]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 13 00:44:45.225182 (sd-merge)[1430]: Merged extensions into '/usr'. Mar 13 00:44:45.228042 systemd[1]: Reload requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Mar 13 00:44:45.228054 systemd[1]: Reloading... Mar 13 00:44:45.287147 zram_generator::config[1463]: No configuration found. Mar 13 00:44:45.460134 systemd[1]: Reloading finished in 231 ms. Mar 13 00:44:45.486803 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 13 00:44:45.499929 systemd[1]: Starting ensure-sysext.service... Mar 13 00:44:45.504220 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 00:44:45.513566 systemd[1]: Reload requested from client PID 1519 ('systemctl') (unit ensure-sysext.service)... Mar 13 00:44:45.513583 systemd[1]: Reloading... Mar 13 00:44:45.519548 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 13 00:44:45.519573 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 13 00:44:45.519738 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 13 00:44:45.519900 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Mar 13 00:44:45.520420 systemd-tmpfiles[1520]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 13 00:44:45.520613 systemd-tmpfiles[1520]: ACLs are not supported, ignoring. Mar 13 00:44:45.520654 systemd-tmpfiles[1520]: ACLs are not supported, ignoring. Mar 13 00:44:45.524231 systemd-tmpfiles[1520]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:44:45.524242 systemd-tmpfiles[1520]: Skipping /boot Mar 13 00:44:45.529842 systemd-tmpfiles[1520]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:44:45.529857 systemd-tmpfiles[1520]: Skipping /boot Mar 13 00:44:45.579135 zram_generator::config[1551]: No configuration found. Mar 13 00:44:45.745054 systemd[1]: Reloading finished in 231 ms. Mar 13 00:44:45.770733 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:44:45.781932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:45.782838 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:44:45.793852 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 13 00:44:45.796407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:44:45.799329 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:44:45.803990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:44:45.808293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:44:45.810248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 13 00:44:45.810818 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:44:45.818062 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 13 00:44:45.823301 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 00:44:45.827698 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 13 00:44:45.829243 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:45.831844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:44:45.832061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:44:45.835462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:44:45.835907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:44:45.837816 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:44:45.838534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:44:45.847188 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:45.847345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:44:45.851982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:44:45.856398 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:44:45.861415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 13 00:44:45.864315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:44:45.864489 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:44:45.865555 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:45.872374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:44:45.872757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:44:45.875383 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 13 00:44:45.878417 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:44:45.878667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:44:45.881603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:44:45.881796 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:44:45.891613 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:45.891906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:44:45.894561 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:44:45.898567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:44:45.907434 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:44:45.913388 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 13 00:44:45.915339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:44:45.915448 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:44:45.915586 systemd[1]: Reached target time-set.target - System Time Set. Mar 13 00:44:45.918212 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:44:45.926526 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 13 00:44:45.934827 systemd[1]: Finished ensure-sysext.service. Mar 13 00:44:45.936936 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:44:45.937088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:44:45.939791 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 00:44:45.939938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 00:44:45.942575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:44:45.942711 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:44:45.945800 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:44:45.945976 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:44:45.950791 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 00:44:45.950867 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Mar 13 00:44:45.985277 systemd-resolved[1617]: Positive Trust Anchors: Mar 13 00:44:45.985287 systemd-resolved[1617]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:44:45.985321 systemd-resolved[1617]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 00:44:45.998213 augenrules[1659]: No rules Mar 13 00:44:45.999264 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:44:45.999455 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:44:46.002779 systemd-resolved[1617]: Using system hostname 'ci-4459.2.4-n-0cdf9482ba'. Mar 13 00:44:46.004156 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:44:46.005691 systemd[1]: Reached target network.target - Network. Mar 13 00:44:46.008245 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:44:46.541585 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 13 00:44:46.546356 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 13 00:44:46.873246 systemd-networkd[1343]: eth0: Gained IPv6LL Mar 13 00:44:46.875397 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:44:46.878398 systemd[1]: Reached target network-online.target - Network is Online. 
Mar 13 00:44:48.967089 ldconfig[1299]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 13 00:44:48.976660 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 13 00:44:48.980249 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 13 00:44:49.013551 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 13 00:44:49.016434 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:44:49.020238 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 13 00:44:49.021865 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 13 00:44:49.025166 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 13 00:44:49.026626 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 13 00:44:49.027848 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 13 00:44:49.031156 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 13 00:44:49.034186 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 13 00:44:49.034216 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:44:49.035275 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:44:49.050884 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 13 00:44:49.053540 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 13 00:44:49.057699 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 13 00:44:49.059586 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Mar 13 00:44:49.061148 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 13 00:44:49.065499 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 13 00:44:49.067000 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 13 00:44:49.070637 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 13 00:44:49.073766 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:44:49.076183 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:44:49.077434 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:44:49.077458 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:44:49.091099 systemd[1]: Starting chronyd.service - NTP client/server... Mar 13 00:44:49.092927 systemd[1]: Starting containerd.service - containerd container runtime... Mar 13 00:44:49.098240 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 13 00:44:49.104272 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 13 00:44:49.107802 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 13 00:44:49.112239 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 13 00:44:49.118091 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 13 00:44:49.120618 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 13 00:44:49.121560 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 13 00:44:49.124252 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). 
Mar 13 00:44:49.129231 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 13 00:44:49.132227 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 13 00:44:49.135505 jq[1680]: false Mar 13 00:44:49.137187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:44:49.140888 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 13 00:44:49.146329 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:44:49.151248 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 13 00:44:49.158267 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 13 00:44:49.166504 KVP[1683]: KVP starting; pid is:1683 Mar 13 00:44:49.168246 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 13 00:44:49.174044 kernel: hv_utils: KVP IC version 4.0 Mar 13 00:44:49.174093 extend-filesystems[1681]: Found /dev/nvme0n1p6 Mar 13 00:44:49.172263 KVP[1683]: KVP LIC Version: 3.1 Mar 13 00:44:49.176553 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 13 00:44:49.179498 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 13 00:44:49.179975 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 13 00:44:49.181471 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Refreshing passwd entry cache Mar 13 00:44:49.181480 oslogin_cache_refresh[1682]: Refreshing passwd entry cache Mar 13 00:44:49.182422 systemd[1]: Starting update-engine.service - Update Engine... Mar 13 00:44:49.191517 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 13 00:44:49.199165 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 13 00:44:49.202142 extend-filesystems[1681]: Found /dev/nvme0n1p9 Mar 13 00:44:49.201829 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 00:44:49.202004 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 00:44:49.206506 extend-filesystems[1681]: Checking size of /dev/nvme0n1p9 Mar 13 00:44:49.212009 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 00:44:49.212960 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 13 00:44:49.227149 jq[1698]: true Mar 13 00:44:49.231556 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 00:44:49.232167 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 13 00:44:49.233202 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Failure getting users, quitting Mar 13 00:44:49.233202 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:44:49.233202 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Refreshing group entry cache Mar 13 00:44:49.232802 oslogin_cache_refresh[1682]: Failure getting users, quitting Mar 13 00:44:49.232818 oslogin_cache_refresh[1682]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Mar 13 00:44:49.232859 oslogin_cache_refresh[1682]: Refreshing group entry cache Mar 13 00:44:49.248010 chronyd[1672]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Mar 13 00:44:49.248433 oslogin_cache_refresh[1682]: Failure getting groups, quitting Mar 13 00:44:49.249600 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Failure getting groups, quitting Mar 13 00:44:49.249600 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:44:49.248441 oslogin_cache_refresh[1682]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:44:49.250402 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 13 00:44:49.250969 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 13 00:44:49.255024 extend-filesystems[1681]: Old size kept for /dev/nvme0n1p9 Mar 13 00:44:49.258793 jq[1718]: true Mar 13 00:44:49.266430 (ntainerd)[1719]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 00:44:49.267559 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 00:44:49.267740 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 00:44:49.282550 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:44:49.296757 tar[1710]: linux-amd64/LICENSE Mar 13 00:44:49.296757 tar[1710]: linux-amd64/helm Mar 13 00:44:49.320219 update_engine[1697]: I20260313 00:44:49.320155 1697 main.cc:92] Flatcar Update Engine starting Mar 13 00:44:49.353024 chronyd[1672]: Timezone right/UTC failed leap second check, ignoring Mar 13 00:44:49.353234 systemd[1]: Started chronyd.service - NTP client/server. Mar 13 00:44:49.353164 chronyd[1672]: Loaded seccomp filter (level 2) Mar 13 00:44:49.405891 systemd-logind[1696]: New seat seat0. 
Mar 13 00:44:49.413179 bash[1753]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:44:49.413467 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 13 00:44:49.416276 systemd-logind[1696]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 13 00:44:49.416685 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 13 00:44:49.416779 systemd[1]: Started systemd-logind.service - User Login Management. Mar 13 00:44:49.484758 dbus-daemon[1675]: [system] SELinux support is enabled Mar 13 00:44:49.485080 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 13 00:44:49.492018 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 00:44:49.492041 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 13 00:44:49.494272 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 13 00:44:49.494365 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 13 00:44:49.495057 update_engine[1697]: I20260313 00:44:49.494954 1697 update_check_scheduler.cc:74] Next update check in 3m7s Mar 13 00:44:49.497350 systemd[1]: Started update-engine.service - Update Engine. Mar 13 00:44:49.498045 dbus-daemon[1675]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 13 00:44:49.503476 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 13 00:44:49.576133 coreos-metadata[1674]: Mar 13 00:44:49.572 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 13 00:44:49.578815 coreos-metadata[1674]: Mar 13 00:44:49.578 INFO Fetch successful Mar 13 00:44:49.578815 coreos-metadata[1674]: Mar 13 00:44:49.578 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 13 00:44:49.582050 coreos-metadata[1674]: Mar 13 00:44:49.581 INFO Fetch successful Mar 13 00:44:49.582050 coreos-metadata[1674]: Mar 13 00:44:49.581 INFO Fetching http://168.63.129.16/machine/afc5d63e-05f5-4b0f-832a-acb8304077f3/955b2981%2Df87f%2D4893%2Dbdfb%2D4f60c22af0bb.%5Fci%2D4459.2.4%2Dn%2D0cdf9482ba?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 13 00:44:49.588587 coreos-metadata[1674]: Mar 13 00:44:49.588 INFO Fetch successful Mar 13 00:44:49.588587 coreos-metadata[1674]: Mar 13 00:44:49.588 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 13 00:44:49.601763 coreos-metadata[1674]: Mar 13 00:44:49.599 INFO Fetch successful Mar 13 00:44:49.638404 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 13 00:44:49.640570 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 00:44:49.697521 sshd_keygen[1713]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:44:49.750150 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:44:49.756438 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:44:49.760253 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 13 00:44:49.800545 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:44:49.800741 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:44:49.807326 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 13 00:44:49.829235 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 13 00:44:49.848913 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:44:49.854495 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:44:49.860188 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 13 00:44:49.863303 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:44:49.873047 locksmithd[1776]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 00:44:49.898109 tar[1710]: linux-amd64/README.md Mar 13 00:44:49.910884 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:44:50.380969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:44:50.384404 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:44:50.420682 containerd[1719]: time="2026-03-13T00:44:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 00:44:50.421669 containerd[1719]: time="2026-03-13T00:44:50.421465110Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 00:44:50.430958 containerd[1719]: time="2026-03-13T00:44:50.430927263Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.236µs" Mar 13 00:44:50.431047 containerd[1719]: time="2026-03-13T00:44:50.431034402Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 00:44:50.431090 containerd[1719]: time="2026-03-13T00:44:50.431082179Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 
00:44:50.431257 containerd[1719]: time="2026-03-13T00:44:50.431246550Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 00:44:50.431299 containerd[1719]: time="2026-03-13T00:44:50.431291571Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 00:44:50.431350 containerd[1719]: time="2026-03-13T00:44:50.431342299Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:44:50.431430 containerd[1719]: time="2026-03-13T00:44:50.431405546Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:44:50.431430 containerd[1719]: time="2026-03-13T00:44:50.431418570Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:44:50.431919 containerd[1719]: time="2026-03-13T00:44:50.431625464Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:44:50.431919 containerd[1719]: time="2026-03-13T00:44:50.431642572Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:44:50.431919 containerd[1719]: time="2026-03-13T00:44:50.431653482Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:44:50.431919 containerd[1719]: time="2026-03-13T00:44:50.431661536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 00:44:50.431919 containerd[1719]: time="2026-03-13T00:44:50.431720152Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 00:44:50.431919 containerd[1719]: time="2026-03-13T00:44:50.431873322Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:44:50.431919 containerd[1719]: time="2026-03-13T00:44:50.431893213Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:44:50.431919 containerd[1719]: time="2026-03-13T00:44:50.431901774Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 00:44:50.432089 containerd[1719]: time="2026-03-13T00:44:50.431926016Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 00:44:50.432219 containerd[1719]: time="2026-03-13T00:44:50.432162659Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 00:44:50.432219 containerd[1719]: time="2026-03-13T00:44:50.432214333Z" level=info msg="metadata content store policy set" policy=shared Mar 13 00:44:50.699688 containerd[1719]: time="2026-03-13T00:44:50.699603092Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 00:44:50.699840 containerd[1719]: time="2026-03-13T00:44:50.699797931Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 00:44:50.699906 containerd[1719]: time="2026-03-13T00:44:50.699876986Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 00:44:50.699906 containerd[1719]: time="2026-03-13T00:44:50.699892809Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 00:44:50.700023 
containerd[1719]: time="2026-03-13T00:44:50.699957443Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 00:44:50.700023 containerd[1719]: time="2026-03-13T00:44:50.699971704Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 00:44:50.700023 containerd[1719]: time="2026-03-13T00:44:50.699984394Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 00:44:50.700023 containerd[1719]: time="2026-03-13T00:44:50.699995711Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 00:44:50.700023 containerd[1719]: time="2026-03-13T00:44:50.700005830Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 00:44:50.700224 containerd[1719]: time="2026-03-13T00:44:50.700015044Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 00:44:50.700224 containerd[1719]: time="2026-03-13T00:44:50.700162302Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 00:44:50.700224 containerd[1719]: time="2026-03-13T00:44:50.700176290Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 00:44:50.700439 containerd[1719]: time="2026-03-13T00:44:50.700373413Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 00:44:50.700439 containerd[1719]: time="2026-03-13T00:44:50.700396126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 00:44:50.700439 containerd[1719]: time="2026-03-13T00:44:50.700409883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 00:44:50.700439 
containerd[1719]: time="2026-03-13T00:44:50.700420436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 00:44:50.700600 containerd[1719]: time="2026-03-13T00:44:50.700430453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 00:44:50.700600 containerd[1719]: time="2026-03-13T00:44:50.700535480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 00:44:50.700600 containerd[1719]: time="2026-03-13T00:44:50.700545266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 00:44:50.700600 containerd[1719]: time="2026-03-13T00:44:50.700554148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 00:44:50.700600 containerd[1719]: time="2026-03-13T00:44:50.700565270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 00:44:50.700600 containerd[1719]: time="2026-03-13T00:44:50.700585392Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 00:44:50.700747 containerd[1719]: time="2026-03-13T00:44:50.700725001Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 00:44:50.700786 containerd[1719]: time="2026-03-13T00:44:50.700774487Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 00:44:50.700811 containerd[1719]: time="2026-03-13T00:44:50.700802148Z" level=info msg="Start snapshots syncer" Mar 13 00:44:50.700841 containerd[1719]: time="2026-03-13T00:44:50.700831112Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 00:44:50.701894 containerd[1719]: time="2026-03-13T00:44:50.701169422Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 00:44:50.701894 containerd[1719]: time="2026-03-13T00:44:50.701224246Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701260772Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701341383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701358127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701368191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701383490Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701396897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701407025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701416334Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701440331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701451231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701460926Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701479147Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701492162Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:44:50.702228 containerd[1719]: time="2026-03-13T00:44:50.701500742Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:44:50.702467 containerd[1719]: time="2026-03-13T00:44:50.701510464Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:44:50.702467 containerd[1719]: time="2026-03-13T00:44:50.701517587Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:44:50.702467 containerd[1719]: time="2026-03-13T00:44:50.701526612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:44:50.702467 containerd[1719]: time="2026-03-13T00:44:50.701540765Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 00:44:50.702467 containerd[1719]: time="2026-03-13T00:44:50.701554768Z" level=info msg="runtime interface created" Mar 13 00:44:50.702467 containerd[1719]: time="2026-03-13T00:44:50.701559347Z" level=info msg="created NRI interface" Mar 13 00:44:50.702467 containerd[1719]: time="2026-03-13T00:44:50.701567196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:44:50.702467 containerd[1719]: time="2026-03-13T00:44:50.701576685Z" level=info msg="Connect containerd service" Mar 13 00:44:50.702467 containerd[1719]: time="2026-03-13T00:44:50.701594492Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:44:50.702467 
containerd[1719]: time="2026-03-13T00:44:50.702202016Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:44:50.860461 kubelet[1825]: E0313 00:44:50.860423 1825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:44:50.862208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:44:50.862354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:44:50.862685 systemd[1]: kubelet.service: Consumed 811ms CPU time, 258.1M memory peak. Mar 13 00:44:51.238674 containerd[1719]: time="2026-03-13T00:44:51.238609769Z" level=info msg="Start subscribing containerd event" Mar 13 00:44:51.238853 containerd[1719]: time="2026-03-13T00:44:51.238784758Z" level=info msg="Start recovering state" Mar 13 00:44:51.238853 containerd[1719]: time="2026-03-13T00:44:51.238843040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:44:51.238917 containerd[1719]: time="2026-03-13T00:44:51.238877207Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 13 00:44:51.239096 containerd[1719]: time="2026-03-13T00:44:51.238987670Z" level=info msg="Start event monitor" Mar 13 00:44:51.239096 containerd[1719]: time="2026-03-13T00:44:51.239001470Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:44:51.239096 containerd[1719]: time="2026-03-13T00:44:51.239022051Z" level=info msg="Start streaming server" Mar 13 00:44:51.239096 containerd[1719]: time="2026-03-13T00:44:51.239032267Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:44:51.239096 containerd[1719]: time="2026-03-13T00:44:51.239039804Z" level=info msg="runtime interface starting up..." Mar 13 00:44:51.239096 containerd[1719]: time="2026-03-13T00:44:51.239045889Z" level=info msg="starting plugins..." Mar 13 00:44:51.239096 containerd[1719]: time="2026-03-13T00:44:51.239058169Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:44:51.239451 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:44:51.244464 containerd[1719]: time="2026-03-13T00:44:51.239541443Z" level=info msg="containerd successfully booted in 0.819200s" Mar 13 00:44:51.242351 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:44:51.246791 systemd[1]: Startup finished in 3.079s (kernel) + 10.350s (initrd) + 10.763s (userspace) = 24.193s. Mar 13 00:44:51.490065 login[1810]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 13 00:44:51.491547 login[1812]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 13 00:44:51.496820 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:44:51.497773 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:44:51.504090 systemd-logind[1696]: New session 2 of user core. 
Mar 13 00:44:51.538740 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:44:51.542497 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:44:51.566041 (systemd)[1854]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:44:51.568139 systemd-logind[1696]: New session c1 of user core. Mar 13 00:44:51.614559 waagent[1805]: 2026-03-13T00:44:51.614505Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 13 00:44:51.615977 waagent[1805]: 2026-03-13T00:44:51.615898Z INFO Daemon Daemon OS: flatcar 4459.2.4 Mar 13 00:44:51.617005 waagent[1805]: 2026-03-13T00:44:51.616974Z INFO Daemon Daemon Python: 3.11.13 Mar 13 00:44:51.618161 waagent[1805]: 2026-03-13T00:44:51.618086Z INFO Daemon Daemon Run daemon Mar 13 00:44:51.619220 waagent[1805]: 2026-03-13T00:44:51.619191Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.4' Mar 13 00:44:51.621158 waagent[1805]: 2026-03-13T00:44:51.621022Z INFO Daemon Daemon Using waagent for provisioning Mar 13 00:44:51.622583 waagent[1805]: 2026-03-13T00:44:51.622543Z INFO Daemon Daemon Activate resource disk Mar 13 00:44:51.623711 waagent[1805]: 2026-03-13T00:44:51.623675Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 13 00:44:51.628144 waagent[1805]: 2026-03-13T00:44:51.627316Z INFO Daemon Daemon Found device: None Mar 13 00:44:51.628144 waagent[1805]: 2026-03-13T00:44:51.627437Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 13 00:44:51.628144 waagent[1805]: 2026-03-13T00:44:51.627500Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 13 00:44:51.628144 waagent[1805]: 2026-03-13T00:44:51.627990Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 13 
00:44:51.628144 waagent[1805]: 2026-03-13T00:44:51.628087Z INFO Daemon Daemon Running default provisioning handler Mar 13 00:44:51.646705 waagent[1805]: 2026-03-13T00:44:51.646660Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 13 00:44:51.651738 waagent[1805]: 2026-03-13T00:44:51.651701Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 13 00:44:51.655001 waagent[1805]: 2026-03-13T00:44:51.654967Z INFO Daemon Daemon cloud-init is enabled: False Mar 13 00:44:51.657139 waagent[1805]: 2026-03-13T00:44:51.656647Z INFO Daemon Daemon Copying ovf-env.xml Mar 13 00:44:51.723334 systemd[1854]: Queued start job for default target default.target. Mar 13 00:44:51.732015 systemd[1854]: Created slice app.slice - User Application Slice. Mar 13 00:44:51.732044 systemd[1854]: Reached target paths.target - Paths. Mar 13 00:44:51.732662 systemd[1854]: Reached target timers.target - Timers. Mar 13 00:44:51.733729 systemd[1854]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:44:51.736288 waagent[1805]: 2026-03-13T00:44:51.736244Z INFO Daemon Daemon Successfully mounted dvd Mar 13 00:44:51.749436 systemd[1854]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:44:51.749525 systemd[1854]: Reached target sockets.target - Sockets. Mar 13 00:44:51.749631 systemd[1854]: Reached target basic.target - Basic System. Mar 13 00:44:51.749687 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:44:51.750519 systemd[1854]: Reached target default.target - Main User Target. Mar 13 00:44:51.750550 systemd[1854]: Startup finished in 176ms. Mar 13 00:44:51.755240 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:44:51.762696 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Mar 13 00:44:51.765485 waagent[1805]: 2026-03-13T00:44:51.765443Z INFO Daemon Daemon Detect protocol endpoint Mar 13 00:44:51.766769 waagent[1805]: 2026-03-13T00:44:51.766695Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 13 00:44:51.768605 waagent[1805]: 2026-03-13T00:44:51.768531Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 13 00:44:51.770083 waagent[1805]: 2026-03-13T00:44:51.770045Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 13 00:44:51.774786 waagent[1805]: 2026-03-13T00:44:51.770255Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 13 00:44:51.774786 waagent[1805]: 2026-03-13T00:44:51.770414Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 13 00:44:51.780023 waagent[1805]: 2026-03-13T00:44:51.779991Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 13 00:44:51.782136 waagent[1805]: 2026-03-13T00:44:51.780504Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 13 00:44:51.782136 waagent[1805]: 2026-03-13T00:44:51.780725Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 13 00:44:51.942275 waagent[1805]: 2026-03-13T00:44:51.942229Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 13 00:44:51.942738 waagent[1805]: 2026-03-13T00:44:51.942705Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 13 00:44:51.952060 waagent[1805]: 2026-03-13T00:44:51.952029Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 13 00:44:51.965329 waagent[1805]: 2026-03-13T00:44:51.965300Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 13 00:44:51.966867 waagent[1805]: 2026-03-13T00:44:51.966829Z INFO Daemon Mar 13 00:44:51.967656 waagent[1805]: 2026-03-13T00:44:51.967589Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ab07c12d-cccc-4d68-ab6c-52184d1aae19 eTag: 4857876015281094093 source: Fabric] Mar 13 00:44:51.970028 waagent[1805]: 2026-03-13T00:44:51.970001Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 13 00:44:51.971459 waagent[1805]: 2026-03-13T00:44:51.971433Z INFO Daemon Mar 13 00:44:51.971873 waagent[1805]: 2026-03-13T00:44:51.971666Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 13 00:44:51.976037 waagent[1805]: 2026-03-13T00:44:51.976010Z INFO Daemon Daemon Downloading artifacts profile blob Mar 13 00:44:52.059054 waagent[1805]: 2026-03-13T00:44:52.058985Z INFO Daemon Downloaded certificate {'thumbprint': 'CE63CBF6E59C34C68E4FC882841CE40A3A3E00C4', 'hasPrivateKey': True} Mar 13 00:44:52.060899 waagent[1805]: 2026-03-13T00:44:52.059406Z INFO Daemon Fetch goal state completed Mar 13 00:44:52.066265 waagent[1805]: 2026-03-13T00:44:52.066226Z INFO Daemon Daemon Starting provisioning Mar 13 00:44:52.068734 waagent[1805]: 2026-03-13T00:44:52.066391Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 13 00:44:52.068734 waagent[1805]: 2026-03-13T00:44:52.066835Z INFO Daemon Daemon Set hostname [ci-4459.2.4-n-0cdf9482ba] Mar 13 00:44:52.084459 waagent[1805]: 2026-03-13T00:44:52.084412Z INFO Daemon Daemon Publish hostname [ci-4459.2.4-n-0cdf9482ba] Mar 13 00:44:52.085891 waagent[1805]: 2026-03-13T00:44:52.085600Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 13 00:44:52.086834 waagent[1805]: 2026-03-13T00:44:52.086028Z INFO Daemon Daemon Primary interface is [eth0] Mar 13 00:44:52.092589 systemd-networkd[1343]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:44:52.092597 systemd-networkd[1343]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:44:52.092618 systemd-networkd[1343]: eth0: DHCP lease lost Mar 13 00:44:52.093503 waagent[1805]: 2026-03-13T00:44:52.093459Z INFO Daemon Daemon Create user account if not exists Mar 13 00:44:52.095153 waagent[1805]: 2026-03-13T00:44:52.093665Z INFO Daemon Daemon User core already exists, skip useradd Mar 13 00:44:52.095153 waagent[1805]: 2026-03-13T00:44:52.093824Z INFO Daemon Daemon Configure sudoer Mar 13 00:44:52.101035 waagent[1805]: 2026-03-13T00:44:52.100988Z INFO Daemon Daemon Configure sshd Mar 13 00:44:52.105585 waagent[1805]: 2026-03-13T00:44:52.105548Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 13 00:44:52.108418 waagent[1805]: 2026-03-13T00:44:52.108355Z INFO Daemon Daemon Deploy ssh public key. Mar 13 00:44:52.109165 systemd-networkd[1343]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 13 00:44:52.490712 login[1810]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 13 00:44:52.494845 systemd-logind[1696]: New session 1 of user core. 
Mar 13 00:44:52.504224 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:44:53.182651 waagent[1805]: 2026-03-13T00:44:53.182601Z INFO Daemon Daemon Provisioning complete Mar 13 00:44:53.199120 waagent[1805]: 2026-03-13T00:44:53.199090Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 13 00:44:53.200500 waagent[1805]: 2026-03-13T00:44:53.200470Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 13 00:44:53.202382 waagent[1805]: 2026-03-13T00:44:53.202067Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 13 00:44:53.300297 waagent[1902]: 2026-03-13T00:44:53.300230Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 13 00:44:53.300547 waagent[1902]: 2026-03-13T00:44:53.300323Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.4 Mar 13 00:44:53.300547 waagent[1902]: 2026-03-13T00:44:53.300363Z INFO ExtHandler ExtHandler Python: 3.11.13 Mar 13 00:44:53.300547 waagent[1902]: 2026-03-13T00:44:53.300401Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Mar 13 00:44:53.337857 waagent[1902]: 2026-03-13T00:44:53.337805Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Mar 13 00:44:53.337997 waagent[1902]: 2026-03-13T00:44:53.337971Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 00:44:53.338055 waagent[1902]: 2026-03-13T00:44:53.338027Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 00:44:53.342549 waagent[1902]: 2026-03-13T00:44:53.342499Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 13 00:44:53.347786 waagent[1902]: 2026-03-13T00:44:53.347755Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 13 00:44:53.348101 waagent[1902]: 
2026-03-13T00:44:53.348071Z INFO ExtHandler Mar 13 00:44:53.348162 waagent[1902]: 2026-03-13T00:44:53.348139Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9abbe4e9-285e-4cdc-86db-68a00794ccb5 eTag: 4857876015281094093 source: Fabric] Mar 13 00:44:53.348365 waagent[1902]: 2026-03-13T00:44:53.348337Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 13 00:44:53.348699 waagent[1902]: 2026-03-13T00:44:53.348671Z INFO ExtHandler Mar 13 00:44:53.348737 waagent[1902]: 2026-03-13T00:44:53.348711Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 13 00:44:53.355806 waagent[1902]: 2026-03-13T00:44:53.355777Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 13 00:44:53.424673 waagent[1902]: 2026-03-13T00:44:53.424627Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CE63CBF6E59C34C68E4FC882841CE40A3A3E00C4', 'hasPrivateKey': True} Mar 13 00:44:53.424977 waagent[1902]: 2026-03-13T00:44:53.424951Z INFO ExtHandler Fetch goal state completed Mar 13 00:44:53.434874 waagent[1902]: 2026-03-13T00:44:53.434798Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Mar 13 00:44:53.438702 waagent[1902]: 2026-03-13T00:44:53.438654Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1902 Mar 13 00:44:53.438806 waagent[1902]: 2026-03-13T00:44:53.438770Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 13 00:44:53.439033 waagent[1902]: 2026-03-13T00:44:53.439010Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 13 00:44:53.440002 waagent[1902]: 2026-03-13T00:44:53.439967Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] Mar 13 00:44:53.440346 waagent[1902]: 2026-03-13T00:44:53.440321Z INFO 
ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.4', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 13 00:44:53.440454 waagent[1902]: 2026-03-13T00:44:53.440434Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 13 00:44:53.440816 waagent[1902]: 2026-03-13T00:44:53.440794Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 13 00:44:53.473838 waagent[1902]: 2026-03-13T00:44:53.473817Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 13 00:44:53.473957 waagent[1902]: 2026-03-13T00:44:53.473938Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 13 00:44:53.478825 waagent[1902]: 2026-03-13T00:44:53.478796Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 13 00:44:53.483691 systemd[1]: Reload requested from client PID 1917 ('systemctl') (unit waagent.service)... Mar 13 00:44:53.483701 systemd[1]: Reloading... Mar 13 00:44:53.557135 zram_generator::config[1959]: No configuration found. Mar 13 00:44:53.715421 systemd[1]: Reloading finished in 231 ms. Mar 13 00:44:53.731744 waagent[1902]: 2026-03-13T00:44:53.731171Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 13 00:44:53.731744 waagent[1902]: 2026-03-13T00:44:53.731264Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 13 00:44:53.795137 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#129 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Mar 13 00:44:54.451564 waagent[1902]: 2026-03-13T00:44:54.451495Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Mar 13 00:44:54.451853 waagent[1902]: 2026-03-13T00:44:54.451829Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 13 00:44:54.452500 waagent[1902]: 2026-03-13T00:44:54.452467Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 13 00:44:54.452843 waagent[1902]: 2026-03-13T00:44:54.452814Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 00:44:54.452985 waagent[1902]: 2026-03-13T00:44:54.452864Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 13 00:44:54.452985 waagent[1902]: 2026-03-13T00:44:54.452970Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 00:44:54.453188 waagent[1902]: 2026-03-13T00:44:54.453164Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 13 00:44:54.453438 waagent[1902]: 2026-03-13T00:44:54.453412Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 00:44:54.453491 waagent[1902]: 2026-03-13T00:44:54.453472Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 00:44:54.453603 waagent[1902]: 2026-03-13T00:44:54.453583Z INFO EnvHandler ExtHandler Configure routes Mar 13 00:44:54.453646 waagent[1902]: 2026-03-13T00:44:54.453629Z INFO EnvHandler ExtHandler Gateway:None Mar 13 00:44:54.453685 waagent[1902]: 2026-03-13T00:44:54.453668Z INFO EnvHandler ExtHandler Routes:None Mar 13 00:44:54.453867 waagent[1902]: 2026-03-13T00:44:54.453824Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 13 00:44:54.453897 waagent[1902]: 2026-03-13T00:44:54.453872Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Mar 13 00:44:54.454103 waagent[1902]: 2026-03-13T00:44:54.454063Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 13 00:44:54.454220 waagent[1902]: 2026-03-13T00:44:54.454200Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 13 00:44:54.454661 waagent[1902]: 2026-03-13T00:44:54.454623Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 13 00:44:54.454770 waagent[1902]: 2026-03-13T00:44:54.454749Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 13 00:44:54.454770 waagent[1902]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 13 00:44:54.454770 waagent[1902]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Mar 13 00:44:54.454770 waagent[1902]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 13 00:44:54.454770 waagent[1902]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 13 00:44:54.454770 waagent[1902]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 13 00:44:54.454770 waagent[1902]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 13 00:44:54.479638 waagent[1902]: 2026-03-13T00:44:54.479516Z INFO ExtHandler ExtHandler Mar 13 00:44:54.479638 waagent[1902]: 2026-03-13T00:44:54.479575Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6a72620d-5ee7-4bd7-90b9-66e87a53f985 correlation 99df0506-2d8d-4769-b01d-7e81146dc8b3 created: 2026-03-13T00:44:00.077888Z] Mar 13 00:44:54.479863 waagent[1902]: 2026-03-13T00:44:54.479833Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 13 00:44:54.480897 waagent[1902]: 2026-03-13T00:44:54.480665Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 13 00:44:54.514616 waagent[1902]: 2026-03-13T00:44:54.514577Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Mar 13 00:44:54.514616 waagent[1902]: Try `iptables -h' or 'iptables --help' for more information.) Mar 13 00:44:54.515373 waagent[1902]: 2026-03-13T00:44:54.515331Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 50803D78-E58F-4853-BAE7-D3688A714E33;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 13 00:44:54.515880 waagent[1902]: 2026-03-13T00:44:54.515841Z INFO MonitorHandler ExtHandler Network interfaces: Mar 13 00:44:54.515880 waagent[1902]: Executing ['ip', '-a', '-o', 'link']: Mar 13 00:44:54.515880 waagent[1902]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 13 00:44:54.515880 waagent[1902]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:36:ef:5e brd ff:ff:ff:ff:ff:ff\ alias Network Device Mar 13 00:44:54.515880 waagent[1902]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:36:ef:5e brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Mar 13 00:44:54.515880 waagent[1902]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 13 00:44:54.515880 waagent[1902]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 13 00:44:54.515880 waagent[1902]: 2: eth0 inet 10.200.8.22/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 13 00:44:54.515880 waagent[1902]: Executing 
['ip', '-6', '-a', '-o', 'address']: Mar 13 00:44:54.515880 waagent[1902]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 13 00:44:54.515880 waagent[1902]: 2: eth0 inet6 fe80::7e1e:52ff:fe36:ef5e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 13 00:44:54.541598 waagent[1902]: 2026-03-13T00:44:54.541554Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 13 00:44:54.541598 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 00:44:54.541598 waagent[1902]: pkts bytes target prot opt in out source destination Mar 13 00:44:54.541598 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 13 00:44:54.541598 waagent[1902]: pkts bytes target prot opt in out source destination Mar 13 00:44:54.541598 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 00:44:54.541598 waagent[1902]: pkts bytes target prot opt in out source destination Mar 13 00:44:54.541598 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 13 00:44:54.541598 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 13 00:44:54.541598 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 13 00:44:54.544368 waagent[1902]: 2026-03-13T00:44:54.544209Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 13 00:44:54.544368 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 00:44:54.544368 waagent[1902]: pkts bytes target prot opt in out source destination Mar 13 00:44:54.544368 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 13 00:44:54.544368 waagent[1902]: pkts bytes target prot opt in out source destination Mar 13 00:44:54.544368 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 00:44:54.544368 waagent[1902]: pkts bytes target prot opt in out source destination Mar 13 00:44:54.544368 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 
168.63.129.16 tcp dpt:53 Mar 13 00:44:54.544368 waagent[1902]: 3 356 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 13 00:44:54.544368 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 13 00:45:01.113076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:45:01.114494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:45:01.525161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:45:01.535330 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:45:01.566832 kubelet[2055]: E0313 00:45:01.566784 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:45:01.569550 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:45:01.569677 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:45:01.569929 systemd[1]: kubelet.service: Consumed 125ms CPU time, 110.4M memory peak. Mar 13 00:45:11.820340 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 00:45:11.821728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:45:12.287191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:45:12.300365 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:45:12.330187 kubelet[2071]: E0313 00:45:12.330155 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:45:12.331652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:45:12.331775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:45:12.332067 systemd[1]: kubelet.service: Consumed 120ms CPU time, 110.5M memory peak. Mar 13 00:45:13.138052 chronyd[1672]: Selected source PHC0 Mar 13 00:45:16.193234 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:45:16.194410 systemd[1]: Started sshd@0-10.200.8.22:22-10.200.16.10:59622.service - OpenSSH per-connection server daemon (10.200.16.10:59622). Mar 13 00:45:16.861496 sshd[2080]: Accepted publickey for core from 10.200.16.10 port 59622 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:45:16.862564 sshd-session[2080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:45:16.866898 systemd-logind[1696]: New session 3 of user core. Mar 13 00:45:16.872254 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:45:17.280555 systemd[1]: Started sshd@1-10.200.8.22:22-10.200.16.10:59632.service - OpenSSH per-connection server daemon (10.200.16.10:59632). 
Mar 13 00:45:17.811457 sshd[2086]: Accepted publickey for core from 10.200.16.10 port 59632 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:45:17.812484 sshd-session[2086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:45:17.816350 systemd-logind[1696]: New session 4 of user core. Mar 13 00:45:17.822259 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:45:18.113234 sshd[2089]: Connection closed by 10.200.16.10 port 59632 Mar 13 00:45:18.113622 sshd-session[2086]: pam_unix(sshd:session): session closed for user core Mar 13 00:45:18.117452 systemd[1]: sshd@1-10.200.8.22:22-10.200.16.10:59632.service: Deactivated successfully. Mar 13 00:45:18.118232 systemd-logind[1696]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:45:18.119226 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:45:18.120555 systemd-logind[1696]: Removed session 4. Mar 13 00:45:18.228378 systemd[1]: Started sshd@2-10.200.8.22:22-10.200.16.10:59640.service - OpenSSH per-connection server daemon (10.200.16.10:59640). Mar 13 00:45:18.766753 sshd[2095]: Accepted publickey for core from 10.200.16.10 port 59640 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:45:18.767848 sshd-session[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:45:18.772080 systemd-logind[1696]: New session 5 of user core. Mar 13 00:45:18.779259 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:45:19.065341 sshd[2098]: Connection closed by 10.200.16.10 port 59640 Mar 13 00:45:19.065780 sshd-session[2095]: pam_unix(sshd:session): session closed for user core Mar 13 00:45:19.069163 systemd-logind[1696]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:45:19.069196 systemd[1]: sshd@2-10.200.8.22:22-10.200.16.10:59640.service: Deactivated successfully. 
Mar 13 00:45:19.070858 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:45:19.072475 systemd-logind[1696]: Removed session 5. Mar 13 00:45:19.178257 systemd[1]: Started sshd@3-10.200.8.22:22-10.200.16.10:59642.service - OpenSSH per-connection server daemon (10.200.16.10:59642). Mar 13 00:45:19.712691 sshd[2104]: Accepted publickey for core from 10.200.16.10 port 59642 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:45:19.713058 sshd-session[2104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:45:19.717012 systemd-logind[1696]: New session 6 of user core. Mar 13 00:45:19.723250 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 00:45:20.013822 sshd[2107]: Connection closed by 10.200.16.10 port 59642 Mar 13 00:45:20.015196 sshd-session[2104]: pam_unix(sshd:session): session closed for user core Mar 13 00:45:20.017259 systemd[1]: sshd@3-10.200.8.22:22-10.200.16.10:59642.service: Deactivated successfully. Mar 13 00:45:20.019416 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:45:20.020096 systemd-logind[1696]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:45:20.020877 systemd-logind[1696]: Removed session 6. Mar 13 00:45:20.126234 systemd[1]: Started sshd@4-10.200.8.22:22-10.200.16.10:45208.service - OpenSSH per-connection server daemon (10.200.16.10:45208). Mar 13 00:45:20.660165 sshd[2113]: Accepted publickey for core from 10.200.16.10 port 45208 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:45:20.660770 sshd-session[2113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:45:20.664805 systemd-logind[1696]: New session 7 of user core. Mar 13 00:45:20.667241 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 13 00:45:20.969687 sudo[2117]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:45:20.969887 sudo[2117]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:45:20.994816 sudo[2117]: pam_unix(sudo:session): session closed for user root Mar 13 00:45:21.094134 sshd[2116]: Connection closed by 10.200.16.10 port 45208 Mar 13 00:45:21.095472 sshd-session[2113]: pam_unix(sshd:session): session closed for user core Mar 13 00:45:21.097853 systemd[1]: sshd@4-10.200.8.22:22-10.200.16.10:45208.service: Deactivated successfully. Mar 13 00:45:21.099516 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:45:21.101064 systemd-logind[1696]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:45:21.101928 systemd-logind[1696]: Removed session 7. Mar 13 00:45:21.205553 systemd[1]: Started sshd@5-10.200.8.22:22-10.200.16.10:45218.service - OpenSSH per-connection server daemon (10.200.16.10:45218). Mar 13 00:45:21.736162 sshd[2123]: Accepted publickey for core from 10.200.16.10 port 45218 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:45:21.737056 sshd-session[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:45:21.741250 systemd-logind[1696]: New session 8 of user core. Mar 13 00:45:21.747242 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 13 00:45:21.938085 sudo[2128]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:45:21.938330 sudo[2128]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:45:21.945619 sudo[2128]: pam_unix(sudo:session): session closed for user root Mar 13 00:45:21.949285 sudo[2127]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:45:21.949483 sudo[2127]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:45:21.956416 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:45:21.984943 augenrules[2150]: No rules Mar 13 00:45:21.985922 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:45:21.986089 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:45:21.987061 sudo[2127]: pam_unix(sudo:session): session closed for user root Mar 13 00:45:22.086259 sshd[2126]: Connection closed by 10.200.16.10 port 45218 Mar 13 00:45:22.087254 sshd-session[2123]: pam_unix(sshd:session): session closed for user core Mar 13 00:45:22.089648 systemd[1]: sshd@5-10.200.8.22:22-10.200.16.10:45218.service: Deactivated successfully. Mar 13 00:45:22.091094 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:45:22.091784 systemd-logind[1696]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:45:22.093224 systemd-logind[1696]: Removed session 8. Mar 13 00:45:22.204081 systemd[1]: Started sshd@6-10.200.8.22:22-10.200.16.10:45226.service - OpenSSH per-connection server daemon (10.200.16.10:45226). Mar 13 00:45:22.396395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 13 00:45:22.397824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 13 00:45:22.738852 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 45226 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:45:22.739952 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:45:22.744210 systemd-logind[1696]: New session 9 of user core. Mar 13 00:45:22.754273 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:45:22.876140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:45:22.880458 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:45:22.912123 kubelet[2171]: E0313 00:45:22.912074 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:45:22.913573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:45:22.913705 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:45:22.913965 systemd[1]: kubelet.service: Consumed 119ms CPU time, 110.6M memory peak. Mar 13 00:45:22.941091 sudo[2178]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:45:22.941327 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:45:24.538738 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 13 00:45:24.550421 (dockerd)[2197]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:45:25.674315 dockerd[2197]: time="2026-03-13T00:45:25.674266615Z" level=info msg="Starting up" Mar 13 00:45:25.674906 dockerd[2197]: time="2026-03-13T00:45:25.674881076Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:45:25.685767 dockerd[2197]: time="2026-03-13T00:45:25.685734442Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:45:25.795987 dockerd[2197]: time="2026-03-13T00:45:25.795953051Z" level=info msg="Loading containers: start." Mar 13 00:45:25.808134 kernel: Initializing XFRM netlink socket Mar 13 00:45:26.091245 systemd-networkd[1343]: docker0: Link UP Mar 13 00:45:26.111467 dockerd[2197]: time="2026-03-13T00:45:26.111437184Z" level=info msg="Loading containers: done." Mar 13 00:45:26.131634 dockerd[2197]: time="2026-03-13T00:45:26.131602370Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:45:26.131729 dockerd[2197]: time="2026-03-13T00:45:26.131660145Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:45:26.131729 dockerd[2197]: time="2026-03-13T00:45:26.131721604Z" level=info msg="Initializing buildkit" Mar 13 00:45:26.178933 dockerd[2197]: time="2026-03-13T00:45:26.178908555Z" level=info msg="Completed buildkit initialization" Mar 13 00:45:26.185048 dockerd[2197]: time="2026-03-13T00:45:26.185022035Z" level=info msg="Daemon has completed initialization" Mar 13 00:45:26.185200 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 13 00:45:26.185556 dockerd[2197]: time="2026-03-13T00:45:26.185101865Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:45:26.683017 containerd[1719]: time="2026-03-13T00:45:26.682984209Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 13 00:45:27.477715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127889495.mount: Deactivated successfully. Mar 13 00:45:28.518385 containerd[1719]: time="2026-03-13T00:45:28.518336538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:28.520784 containerd[1719]: time="2026-03-13T00:45:28.520622666Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074505" Mar 13 00:45:28.524452 containerd[1719]: time="2026-03-13T00:45:28.524429532Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:28.529951 containerd[1719]: time="2026-03-13T00:45:28.529918313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:28.530690 containerd[1719]: time="2026-03-13T00:45:28.530667088Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.847648455s" Mar 13 00:45:28.530770 containerd[1719]: time="2026-03-13T00:45:28.530756712Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference 
\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 13 00:45:28.531550 containerd[1719]: time="2026-03-13T00:45:28.531524711Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 13 00:45:29.750517 containerd[1719]: time="2026-03-13T00:45:29.750472974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:29.752696 containerd[1719]: time="2026-03-13T00:45:29.752569082Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165831" Mar 13 00:45:29.755200 containerd[1719]: time="2026-03-13T00:45:29.755167009Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:29.758840 containerd[1719]: time="2026-03-13T00:45:29.758813567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:29.759573 containerd[1719]: time="2026-03-13T00:45:29.759527568Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.227963307s" Mar 13 00:45:29.759961 containerd[1719]: time="2026-03-13T00:45:29.759558153Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 13 00:45:29.763409 containerd[1719]: 
time="2026-03-13T00:45:29.763388015Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 13 00:45:30.774243 containerd[1719]: time="2026-03-13T00:45:30.774200030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:30.777277 containerd[1719]: time="2026-03-13T00:45:30.777136888Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729832" Mar 13 00:45:30.794826 containerd[1719]: time="2026-03-13T00:45:30.794791223Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:30.799157 containerd[1719]: time="2026-03-13T00:45:30.799130373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:30.799942 containerd[1719]: time="2026-03-13T00:45:30.799919197Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.036420965s" Mar 13 00:45:30.800012 containerd[1719]: time="2026-03-13T00:45:30.800002201Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 13 00:45:30.800497 containerd[1719]: time="2026-03-13T00:45:30.800468985Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 13 00:45:31.740694 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount815346148.mount: Deactivated successfully. Mar 13 00:45:32.014778 containerd[1719]: time="2026-03-13T00:45:32.014680994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:32.018243 containerd[1719]: time="2026-03-13T00:45:32.018159264Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861778" Mar 13 00:45:32.021645 containerd[1719]: time="2026-03-13T00:45:32.021621062Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:32.025181 containerd[1719]: time="2026-03-13T00:45:32.025155704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:32.025493 containerd[1719]: time="2026-03-13T00:45:32.025472369Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.224975086s" Mar 13 00:45:32.025562 containerd[1719]: time="2026-03-13T00:45:32.025549392Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 13 00:45:32.026009 containerd[1719]: time="2026-03-13T00:45:32.025983746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 13 00:45:32.651883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1568439681.mount: Deactivated successfully. 
Mar 13 00:45:32.804732 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Mar 13 00:45:33.146473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 13 00:45:33.148480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:45:33.748387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:45:33.751366 containerd[1719]: time="2026-03-13T00:45:33.751329695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:33.754903 containerd[1719]: time="2026-03-13T00:45:33.754779427Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Mar 13 00:45:33.756426 (kubelet)[2538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:45:33.759056 containerd[1719]: time="2026-03-13T00:45:33.757995763Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:33.762466 containerd[1719]: time="2026-03-13T00:45:33.762433218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:33.763655 containerd[1719]: time="2026-03-13T00:45:33.763621807Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.737609928s" Mar 13 00:45:33.763763 containerd[1719]: 
time="2026-03-13T00:45:33.763750833Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 13 00:45:33.764254 containerd[1719]: time="2026-03-13T00:45:33.764234772Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 13 00:45:33.789181 kubelet[2538]: E0313 00:45:33.789147 2538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:45:33.790546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:45:33.790668 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:45:33.790971 systemd[1]: kubelet.service: Consumed 134ms CPU time, 110M memory peak. Mar 13 00:45:34.281032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525046550.mount: Deactivated successfully. 
Mar 13 00:45:34.299786 containerd[1719]: time="2026-03-13T00:45:34.299749773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:34.302129 containerd[1719]: time="2026-03-13T00:45:34.302089040Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Mar 13 00:45:34.305225 containerd[1719]: time="2026-03-13T00:45:34.305204997Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:34.309931 containerd[1719]: time="2026-03-13T00:45:34.309905407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:34.310560 containerd[1719]: time="2026-03-13T00:45:34.310529556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 546.255788ms" Mar 13 00:45:34.310621 containerd[1719]: time="2026-03-13T00:45:34.310567571Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 13 00:45:34.311142 containerd[1719]: time="2026-03-13T00:45:34.311105353Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 13 00:45:34.840935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3129452310.mount: Deactivated successfully. Mar 13 00:45:35.062319 update_engine[1697]: I20260313 00:45:35.062214 1697 update_attempter.cc:509] Updating boot flags... 
Mar 13 00:45:35.867226 containerd[1719]: time="2026-03-13T00:45:35.867185491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:35.869354 containerd[1719]: time="2026-03-13T00:45:35.869244593Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860682" Mar 13 00:45:35.871734 containerd[1719]: time="2026-03-13T00:45:35.871713560Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:35.877059 containerd[1719]: time="2026-03-13T00:45:35.877017043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:45:35.878104 containerd[1719]: time="2026-03-13T00:45:35.877728213Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.566578173s" Mar 13 00:45:35.878104 containerd[1719]: time="2026-03-13T00:45:35.877758184Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 13 00:45:38.437669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:45:38.437822 systemd[1]: kubelet.service: Consumed 134ms CPU time, 110M memory peak. Mar 13 00:45:38.439742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:45:38.471381 systemd[1]: Reload requested from client PID 2662 ('systemctl') (unit session-9.scope)... 
Mar 13 00:45:38.471393 systemd[1]: Reloading... Mar 13 00:45:38.557142 zram_generator::config[2709]: No configuration found. Mar 13 00:45:38.738856 systemd[1]: Reloading finished in 267 ms. Mar 13 00:45:38.898305 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:45:38.898387 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:45:38.898750 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:45:38.898807 systemd[1]: kubelet.service: Consumed 82ms CPU time, 92.6M memory peak. Mar 13 00:45:38.900341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:45:39.374172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:45:39.380551 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:45:39.412835 kubelet[2776]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:45:39.412835 kubelet[2776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:45:39.413074 kubelet[2776]: I0313 00:45:39.412888 2776 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:45:39.640178 kubelet[2776]: I0313 00:45:39.639393 2776 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:45:39.640178 kubelet[2776]: I0313 00:45:39.639415 2776 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:45:39.640178 kubelet[2776]: I0313 00:45:39.639439 2776 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:45:39.640178 kubelet[2776]: I0313 00:45:39.639449 2776 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:45:39.640178 kubelet[2776]: I0313 00:45:39.639903 2776 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:45:39.648842 kubelet[2776]: E0313 00:45:39.648819 2776 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:45:39.649070 kubelet[2776]: I0313 00:45:39.649052 2776 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:45:39.654979 kubelet[2776]: I0313 00:45:39.654868 2776 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:45:39.657894 kubelet[2776]: I0313 00:45:39.657867 2776 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 00:45:39.658992 kubelet[2776]: I0313 00:45:39.658962 2776 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:45:39.659142 kubelet[2776]: I0313 00:45:39.658991 2776 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-0cdf9482ba","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:45:39.659263 kubelet[2776]: I0313 00:45:39.659142 2776 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 
00:45:39.659263 kubelet[2776]: I0313 00:45:39.659153 2776 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:45:39.659263 kubelet[2776]: I0313 00:45:39.659231 2776 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:45:39.664983 kubelet[2776]: I0313 00:45:39.664964 2776 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:45:39.665182 kubelet[2776]: I0313 00:45:39.665169 2776 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:45:39.665223 kubelet[2776]: I0313 00:45:39.665186 2776 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:45:39.665223 kubelet[2776]: I0313 00:45:39.665207 2776 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:45:39.666463 kubelet[2776]: I0313 00:45:39.666154 2776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:45:39.668734 kubelet[2776]: E0313 00:45:39.668624 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-0cdf9482ba&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:45:39.668734 kubelet[2776]: E0313 00:45:39.668731 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:45:39.669937 kubelet[2776]: I0313 00:45:39.669013 2776 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:45:39.669937 kubelet[2776]: I0313 00:45:39.669578 2776 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:45:39.669937 kubelet[2776]: I0313 00:45:39.669609 2776 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:45:39.669937 kubelet[2776]: W0313 00:45:39.669663 2776 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 13 00:45:39.673272 kubelet[2776]: I0313 00:45:39.673181 2776 server.go:1262] "Started kubelet" Mar 13 00:45:39.674396 kubelet[2776]: I0313 00:45:39.674381 2776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:45:39.678739 kubelet[2776]: E0313 00:45:39.677577 2776 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.4-n-0cdf9482ba.189c40041c21bde5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-0cdf9482ba,UID:ci-4459.2.4-n-0cdf9482ba,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-0cdf9482ba,},FirstTimestamp:2026-03-13 00:45:39.673152997 +0000 UTC m=+0.289208931,LastTimestamp:2026-03-13 00:45:39.673152997 +0000 UTC m=+0.289208931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-0cdf9482ba,}" Mar 13 00:45:39.680024 kubelet[2776]: E0313 00:45:39.680002 2776 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:45:39.681133 kubelet[2776]: I0313 00:45:39.681019 2776 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:45:39.683252 kubelet[2776]: I0313 00:45:39.683234 2776 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:45:39.684554 kubelet[2776]: I0313 00:45:39.684521 2776 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:45:39.684624 kubelet[2776]: I0313 00:45:39.684574 2776 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:45:39.684759 kubelet[2776]: I0313 00:45:39.684743 2776 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:45:39.686161 kubelet[2776]: I0313 00:45:39.685294 2776 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:45:39.690172 kubelet[2776]: I0313 00:45:39.689226 2776 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:45:39.690172 kubelet[2776]: E0313 00:45:39.689433 2776 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" Mar 13 00:45:39.690172 kubelet[2776]: I0313 00:45:39.689684 2776 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:45:39.690172 kubelet[2776]: I0313 00:45:39.689726 2776 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:45:39.690904 kubelet[2776]: I0313 00:45:39.690891 2776 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:45:39.691243 kubelet[2776]: E0313 00:45:39.691220 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-0cdf9482ba?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="200ms" Mar 13 00:45:39.691369 kubelet[2776]: E0313 00:45:39.691339 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:45:39.692581 kubelet[2776]: I0313 00:45:39.692563 2776 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:45:39.693630 kubelet[2776]: I0313 00:45:39.693611 2776 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:45:39.707820 kubelet[2776]: I0313 00:45:39.707801 2776 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:45:39.707820 kubelet[2776]: I0313 00:45:39.707818 2776 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:45:39.707904 kubelet[2776]: I0313 00:45:39.707830 2776 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:45:39.712155 kubelet[2776]: I0313 00:45:39.712104 2776 policy_none.go:49] "None policy: Start" Mar 13 00:45:39.712155 kubelet[2776]: I0313 00:45:39.712148 2776 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:45:39.712239 kubelet[2776]: I0313 00:45:39.712160 2776 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:45:39.716059 kubelet[2776]: I0313 00:45:39.716051 2776 policy_none.go:47] "Start" Mar 13 00:45:39.719239 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 13 00:45:39.724207 kubelet[2776]: I0313 00:45:39.724176 2776 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 00:45:39.725236 kubelet[2776]: I0313 00:45:39.725166 2776 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 13 00:45:39.725236 kubelet[2776]: I0313 00:45:39.725184 2776 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:45:39.725236 kubelet[2776]: I0313 00:45:39.725200 2776 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:45:39.725236 kubelet[2776]: E0313 00:45:39.725230 2776 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:45:39.727062 kubelet[2776]: E0313 00:45:39.727021 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:45:39.731933 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:45:39.734513 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 13 00:45:39.741591 kubelet[2776]: E0313 00:45:39.741579 2776 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:45:39.741748 kubelet[2776]: I0313 00:45:39.741743 2776 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:45:39.741791 kubelet[2776]: I0313 00:45:39.741777 2776 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:45:39.742302 kubelet[2776]: I0313 00:45:39.742158 2776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:45:39.743858 kubelet[2776]: E0313 00:45:39.743841 2776 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:45:39.743924 kubelet[2776]: E0313 00:45:39.743876 2776 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.4-n-0cdf9482ba\" not found" Mar 13 00:45:39.838654 systemd[1]: Created slice kubepods-burstable-pod14cf988c92d814584f4586c3b7e0735a.slice - libcontainer container kubepods-burstable-pod14cf988c92d814584f4586c3b7e0735a.slice. 
Mar 13 00:45:39.845860 kubelet[2776]: E0313 00:45:39.845830 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.847478 kubelet[2776]: I0313 00:45:39.847461 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.847999 kubelet[2776]: E0313 00:45:39.847839 2776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.850277 systemd[1]: Created slice kubepods-burstable-podc1aa8f3ddcf34a7558bb3ff0b7112daa.slice - libcontainer container kubepods-burstable-podc1aa8f3ddcf34a7558bb3ff0b7112daa.slice. Mar 13 00:45:39.868894 kubelet[2776]: E0313 00:45:39.868879 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.871085 systemd[1]: Created slice kubepods-burstable-pod44eaf50940d8898367e92c052da8ac06.slice - libcontainer container kubepods-burstable-pod44eaf50940d8898367e92c052da8ac06.slice. 
Mar 13 00:45:39.872385 kubelet[2776]: E0313 00:45:39.872368 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.892128 kubelet[2776]: E0313 00:45:39.892020 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-0cdf9482ba?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="400ms" Mar 13 00:45:39.990660 kubelet[2776]: I0313 00:45:39.990620 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14cf988c92d814584f4586c3b7e0735a-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-0cdf9482ba\" (UID: \"14cf988c92d814584f4586c3b7e0735a\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.990660 kubelet[2776]: I0313 00:45:39.990663 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14cf988c92d814584f4586c3b7e0735a-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-0cdf9482ba\" (UID: \"14cf988c92d814584f4586c3b7e0735a\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.990952 kubelet[2776]: I0313 00:45:39.990680 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14cf988c92d814584f4586c3b7e0735a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-0cdf9482ba\" (UID: \"14cf988c92d814584f4586c3b7e0735a\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.990952 kubelet[2776]: I0313 00:45:39.990699 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1aa8f3ddcf34a7558bb3ff0b7112daa-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" (UID: \"c1aa8f3ddcf34a7558bb3ff0b7112daa\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.990952 kubelet[2776]: I0313 00:45:39.990716 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1aa8f3ddcf34a7558bb3ff0b7112daa-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" (UID: \"c1aa8f3ddcf34a7558bb3ff0b7112daa\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.990952 kubelet[2776]: I0313 00:45:39.990731 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1aa8f3ddcf34a7558bb3ff0b7112daa-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" (UID: \"c1aa8f3ddcf34a7558bb3ff0b7112daa\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.990952 kubelet[2776]: I0313 00:45:39.990746 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1aa8f3ddcf34a7558bb3ff0b7112daa-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" (UID: \"c1aa8f3ddcf34a7558bb3ff0b7112daa\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.991053 kubelet[2776]: I0313 00:45:39.990763 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1aa8f3ddcf34a7558bb3ff0b7112daa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" (UID: \"c1aa8f3ddcf34a7558bb3ff0b7112daa\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:39.991053 kubelet[2776]: I0313 00:45:39.990891 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44eaf50940d8898367e92c052da8ac06-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-0cdf9482ba\" (UID: \"44eaf50940d8898367e92c052da8ac06\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:40.049889 kubelet[2776]: I0313 00:45:40.049872 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:40.050096 kubelet[2776]: E0313 00:45:40.050073 2776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:40.151907 containerd[1719]: time="2026-03-13T00:45:40.151878364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-0cdf9482ba,Uid:14cf988c92d814584f4586c3b7e0735a,Namespace:kube-system,Attempt:0,}" Mar 13 00:45:40.173763 containerd[1719]: time="2026-03-13T00:45:40.173738408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-0cdf9482ba,Uid:c1aa8f3ddcf34a7558bb3ff0b7112daa,Namespace:kube-system,Attempt:0,}" Mar 13 00:45:40.178395 containerd[1719]: time="2026-03-13T00:45:40.178197893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-0cdf9482ba,Uid:44eaf50940d8898367e92c052da8ac06,Namespace:kube-system,Attempt:0,}" Mar 13 00:45:40.292380 kubelet[2776]: E0313 00:45:40.292355 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-0cdf9482ba?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" 
interval="800ms" Mar 13 00:45:40.451986 kubelet[2776]: I0313 00:45:40.451910 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:40.452506 kubelet[2776]: E0313 00:45:40.452208 2776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:40.472630 kubelet[2776]: E0313 00:45:40.472560 2776 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.4-n-0cdf9482ba.189c40041c21bde5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.4-n-0cdf9482ba,UID:ci-4459.2.4-n-0cdf9482ba,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.4-n-0cdf9482ba,},FirstTimestamp:2026-03-13 00:45:39.673152997 +0000 UTC m=+0.289208931,LastTimestamp:2026-03-13 00:45:39.673152997 +0000 UTC m=+0.289208931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.4-n-0cdf9482ba,}" Mar 13 00:45:40.580516 kubelet[2776]: E0313 00:45:40.580489 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:45:40.664554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676023112.mount: Deactivated successfully. 
Mar 13 00:45:40.691509 containerd[1719]: time="2026-03-13T00:45:40.691474164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:45:40.701799 containerd[1719]: time="2026-03-13T00:45:40.701756299Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Mar 13 00:45:40.704419 containerd[1719]: time="2026-03-13T00:45:40.704358361Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:45:40.709819 containerd[1719]: time="2026-03-13T00:45:40.709786971Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:45:40.712137 containerd[1719]: time="2026-03-13T00:45:40.711974996Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:45:40.715182 containerd[1719]: time="2026-03-13T00:45:40.715144063Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:45:40.718802 containerd[1719]: time="2026-03-13T00:45:40.718773634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:45:40.719258 containerd[1719]: time="2026-03-13T00:45:40.719236585Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 540.969696ms" Mar 13 00:45:40.721181 containerd[1719]: time="2026-03-13T00:45:40.721155503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:45:40.724148 containerd[1719]: time="2026-03-13T00:45:40.724100770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 568.184666ms" Mar 13 00:45:40.746568 containerd[1719]: time="2026-03-13T00:45:40.746540388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 563.45202ms" Mar 13 00:45:40.760590 containerd[1719]: time="2026-03-13T00:45:40.760560255Z" level=info msg="connecting to shim cf3d42dee791a31897cfec11726b465841654d4824a0ad4552cc524c6297e51b" address="unix:///run/containerd/s/68adc987ab9078d203fb0c87f0dd96a6664c31e797e73b9caa90ff1a44890dd8" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:45:40.785272 systemd[1]: Started cri-containerd-cf3d42dee791a31897cfec11726b465841654d4824a0ad4552cc524c6297e51b.scope - libcontainer container cf3d42dee791a31897cfec11726b465841654d4824a0ad4552cc524c6297e51b. 
Mar 13 00:45:40.845072 containerd[1719]: time="2026-03-13T00:45:40.845045494Z" level=info msg="connecting to shim 343e72e92da2250cd0546ba7d9263007bdf310b97088e23011fcae0839caa838" address="unix:///run/containerd/s/fc0e16de2ff3393d5ec71cec064693264016dbd7034dda043cdeb39ea3979ebf" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:45:40.863242 systemd[1]: Started cri-containerd-343e72e92da2250cd0546ba7d9263007bdf310b97088e23011fcae0839caa838.scope - libcontainer container 343e72e92da2250cd0546ba7d9263007bdf310b97088e23011fcae0839caa838.
Mar 13 00:45:40.886251 containerd[1719]: time="2026-03-13T00:45:40.886224971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.4-n-0cdf9482ba,Uid:c1aa8f3ddcf34a7558bb3ff0b7112daa,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf3d42dee791a31897cfec11726b465841654d4824a0ad4552cc524c6297e51b\""
Mar 13 00:45:40.936090 kubelet[2776]: E0313 00:45:40.936064 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 00:45:40.996005 containerd[1719]: time="2026-03-13T00:45:40.995932060Z" level=info msg="CreateContainer within sandbox \"cf3d42dee791a31897cfec11726b465841654d4824a0ad4552cc524c6297e51b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 13 00:45:41.045940 containerd[1719]: time="2026-03-13T00:45:41.045912532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.4-n-0cdf9482ba,Uid:14cf988c92d814584f4586c3b7e0735a,Namespace:kube-system,Attempt:0,} returns sandbox id \"343e72e92da2250cd0546ba7d9263007bdf310b97088e23011fcae0839caa838\""
Mar 13 00:45:41.054773 kubelet[2776]: E0313 00:45:41.054724 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 00:45:41.093370 kubelet[2776]: E0313 00:45:41.093334 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-0cdf9482ba?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="1.6s"
Mar 13 00:45:41.094470 containerd[1719]: time="2026-03-13T00:45:41.094448600Z" level=info msg="CreateContainer within sandbox \"343e72e92da2250cd0546ba7d9263007bdf310b97088e23011fcae0839caa838\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 13 00:45:41.194665 containerd[1719]: time="2026-03-13T00:45:41.194625237Z" level=info msg="connecting to shim aa4ce74c108d6d6c58d091c55f09d2d471817f4ae7281f1ade6b86c279ef62b4" address="unix:///run/containerd/s/ef48eb740d09c8fe5452a444911773fe9f764b23d6167e85468c83fb62a2a9fc" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:45:41.210346 kubelet[2776]: E0313 00:45:41.210318 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-0cdf9482ba&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 00:45:41.215272 systemd[1]: Started cri-containerd-aa4ce74c108d6d6c58d091c55f09d2d471817f4ae7281f1ade6b86c279ef62b4.scope - libcontainer container aa4ce74c108d6d6c58d091c55f09d2d471817f4ae7281f1ade6b86c279ef62b4.
Mar 13 00:45:41.255583 kubelet[2776]: I0313 00:45:41.255508 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:41.256173 kubelet[2776]: E0313 00:45:41.256151 2776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:41.391521 containerd[1719]: time="2026-03-13T00:45:41.391489683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.4-n-0cdf9482ba,Uid:44eaf50940d8898367e92c052da8ac06,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa4ce74c108d6d6c58d091c55f09d2d471817f4ae7281f1ade6b86c279ef62b4\""
Mar 13 00:45:41.484583 containerd[1719]: time="2026-03-13T00:45:41.484554392Z" level=info msg="CreateContainer within sandbox \"aa4ce74c108d6d6c58d091c55f09d2d471817f4ae7281f1ade6b86c279ef62b4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 13 00:45:41.646276 containerd[1719]: time="2026-03-13T00:45:41.646249532Z" level=info msg="Container 89104f3d7a30411f71f55447f672e4d1ff12e66ce272867f68de18111196199d: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:45:41.745473 containerd[1719]: time="2026-03-13T00:45:41.745447544Z" level=info msg="Container dabceb49fedfa69ab02ed70756bc93938c2004ce11b786b3e888a68aacc9e416: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:45:41.843584 kubelet[2776]: E0313 00:45:41.843559 2776 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 00:45:42.694516 kubelet[2776]: E0313 00:45:42.694478 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-0cdf9482ba?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="3.2s"
Mar 13 00:45:43.340189 kubelet[2776]: I0313 00:45:42.857618 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:43.340189 kubelet[2776]: E0313 00:45:42.857853 2776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:43.340189 kubelet[2776]: E0313 00:45:42.900423 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 00:45:43.340189 kubelet[2776]: E0313 00:45:43.085821 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.4-n-0cdf9482ba&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 00:45:43.340189 kubelet[2776]: E0313 00:45:43.127582 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 00:45:43.340189 kubelet[2776]: E0313 00:45:43.269683 2776 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 00:45:45.690770 containerd[1719]: time="2026-03-13T00:45:45.689786459Z" level=info msg="Container e1f4aa9395e83412b26d349388944dccdae425c735af3dfb6900aa2ec9009be7: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:45:45.890944 kubelet[2776]: E0313 00:45:45.890912 2776 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 00:45:45.895241 kubelet[2776]: E0313 00:45:45.895203 2776 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.4-n-0cdf9482ba?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="6.4s"
Mar 13 00:45:46.060233 kubelet[2776]: I0313 00:45:46.060157 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:46.060543 kubelet[2776]: E0313 00:45:46.060515 2776 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:46.447895 containerd[1719]: time="2026-03-13T00:45:46.447857812Z" level=info msg="CreateContainer within sandbox \"343e72e92da2250cd0546ba7d9263007bdf310b97088e23011fcae0839caa838\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"89104f3d7a30411f71f55447f672e4d1ff12e66ce272867f68de18111196199d\""
Mar 13 00:45:46.448646 containerd[1719]: time="2026-03-13T00:45:46.448611950Z" level=info msg="StartContainer for \"89104f3d7a30411f71f55447f672e4d1ff12e66ce272867f68de18111196199d\""
Mar 13 00:45:46.449635 containerd[1719]: time="2026-03-13T00:45:46.449598763Z" level=info msg="connecting to shim 89104f3d7a30411f71f55447f672e4d1ff12e66ce272867f68de18111196199d" address="unix:///run/containerd/s/fc0e16de2ff3393d5ec71cec064693264016dbd7034dda043cdeb39ea3979ebf" protocol=ttrpc version=3
Mar 13 00:45:46.470263 systemd[1]: Started cri-containerd-89104f3d7a30411f71f55447f672e4d1ff12e66ce272867f68de18111196199d.scope - libcontainer container 89104f3d7a30411f71f55447f672e4d1ff12e66ce272867f68de18111196199d.
Mar 13 00:45:46.590706 containerd[1719]: time="2026-03-13T00:45:46.590625593Z" level=info msg="StartContainer for \"89104f3d7a30411f71f55447f672e4d1ff12e66ce272867f68de18111196199d\" returns successfully"
Mar 13 00:45:46.592087 containerd[1719]: time="2026-03-13T00:45:46.592055219Z" level=info msg="CreateContainer within sandbox \"cf3d42dee791a31897cfec11726b465841654d4824a0ad4552cc524c6297e51b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dabceb49fedfa69ab02ed70756bc93938c2004ce11b786b3e888a68aacc9e416\""
Mar 13 00:45:46.592492 containerd[1719]: time="2026-03-13T00:45:46.592471362Z" level=info msg="StartContainer for \"dabceb49fedfa69ab02ed70756bc93938c2004ce11b786b3e888a68aacc9e416\""
Mar 13 00:45:46.593424 containerd[1719]: time="2026-03-13T00:45:46.593401901Z" level=info msg="connecting to shim dabceb49fedfa69ab02ed70756bc93938c2004ce11b786b3e888a68aacc9e416" address="unix:///run/containerd/s/68adc987ab9078d203fb0c87f0dd96a6664c31e797e73b9caa90ff1a44890dd8" protocol=ttrpc version=3
Mar 13 00:45:46.620250 systemd[1]: Started cri-containerd-dabceb49fedfa69ab02ed70756bc93938c2004ce11b786b3e888a68aacc9e416.scope - libcontainer container dabceb49fedfa69ab02ed70756bc93938c2004ce11b786b3e888a68aacc9e416.
Mar 13 00:45:46.799921 containerd[1719]: time="2026-03-13T00:45:46.799846674Z" level=info msg="StartContainer for \"dabceb49fedfa69ab02ed70756bc93938c2004ce11b786b3e888a68aacc9e416\" returns successfully"
Mar 13 00:45:46.805907 kubelet[2776]: E0313 00:45:46.805882 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:46.845611 containerd[1719]: time="2026-03-13T00:45:46.845534699Z" level=info msg="CreateContainer within sandbox \"aa4ce74c108d6d6c58d091c55f09d2d471817f4ae7281f1ade6b86c279ef62b4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e1f4aa9395e83412b26d349388944dccdae425c735af3dfb6900aa2ec9009be7\""
Mar 13 00:45:46.847221 containerd[1719]: time="2026-03-13T00:45:46.847202009Z" level=info msg="StartContainer for \"e1f4aa9395e83412b26d349388944dccdae425c735af3dfb6900aa2ec9009be7\""
Mar 13 00:45:46.848353 containerd[1719]: time="2026-03-13T00:45:46.848331803Z" level=info msg="connecting to shim e1f4aa9395e83412b26d349388944dccdae425c735af3dfb6900aa2ec9009be7" address="unix:///run/containerd/s/ef48eb740d09c8fe5452a444911773fe9f764b23d6167e85468c83fb62a2a9fc" protocol=ttrpc version=3
Mar 13 00:45:46.876576 systemd[1]: Started cri-containerd-e1f4aa9395e83412b26d349388944dccdae425c735af3dfb6900aa2ec9009be7.scope - libcontainer container e1f4aa9395e83412b26d349388944dccdae425c735af3dfb6900aa2ec9009be7.
Mar 13 00:45:47.400994 containerd[1719]: time="2026-03-13T00:45:47.400957766Z" level=info msg="StartContainer for \"e1f4aa9395e83412b26d349388944dccdae425c735af3dfb6900aa2ec9009be7\" returns successfully"
Mar 13 00:45:47.813315 kubelet[2776]: E0313 00:45:47.813226 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:47.813611 kubelet[2776]: E0313 00:45:47.813486 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:47.815143 kubelet[2776]: E0313 00:45:47.814142 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:48.689968 kubelet[2776]: E0313 00:45:48.689923 2776 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.2.4-n-0cdf9482ba" not found
Mar 13 00:45:48.814699 kubelet[2776]: E0313 00:45:48.814672 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:48.815199 kubelet[2776]: E0313 00:45:48.815083 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:48.815199 kubelet[2776]: E0313 00:45:48.815197 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:49.058492 kubelet[2776]: E0313 00:45:49.058404 2776 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.2.4-n-0cdf9482ba" not found
Mar 13 00:45:49.644021 kubelet[2776]: E0313 00:45:49.643990 2776 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.2.4-n-0cdf9482ba" not found
Mar 13 00:45:49.675239 kubelet[2776]: I0313 00:45:49.675212 2776 apiserver.go:52] "Watching apiserver"
Mar 13 00:45:49.690410 kubelet[2776]: I0313 00:45:49.690363 2776 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 13 00:45:49.744146 kubelet[2776]: E0313 00:45:49.744059 2776 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.4-n-0cdf9482ba\" not found"
Mar 13 00:45:49.816360 kubelet[2776]: E0313 00:45:49.816328 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:52.041085 kubelet[2776]: E0313 00:45:52.041029 2776 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.2.4-n-0cdf9482ba" not found
Mar 13 00:45:52.298415 kubelet[2776]: E0313 00:45:52.298305 2776 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:52.328305 kubelet[2776]: E0313 00:45:52.328167 2776 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:52.462426 kubelet[2776]: I0313 00:45:52.462250 2776 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:52.469685 kubelet[2776]: I0313 00:45:52.469653 2776 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:52.490064 kubelet[2776]: I0313 00:45:52.490035 2776 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:52.499836 kubelet[2776]: I0313 00:45:52.499722 2776 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 13 00:45:52.500716 kubelet[2776]: I0313 00:45:52.500581 2776 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:52.507133 kubelet[2776]: I0313 00:45:52.507100 2776 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 13 00:45:52.507206 kubelet[2776]: I0313 00:45:52.507182 2776 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:52.519258 kubelet[2776]: I0313 00:45:52.519242 2776 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 13 00:45:53.010724 systemd[1]: Reload requested from client PID 3053 ('systemctl') (unit session-9.scope)...
Mar 13 00:45:53.010737 systemd[1]: Reloading...
Mar 13 00:45:53.095186 zram_generator::config[3097]: No configuration found.
Mar 13 00:45:53.312819 systemd[1]: Reloading finished in 301 ms.
Mar 13 00:45:53.342225 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:45:53.366802 systemd[1]: kubelet.service: Deactivated successfully.
Mar 13 00:45:53.367022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:45:53.367069 systemd[1]: kubelet.service: Consumed 599ms CPU time, 125.1M memory peak.
Mar 13 00:45:53.368532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:45:56.522006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:45:56.528449 (kubelet)[3167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 00:45:56.568448 kubelet[3167]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 13 00:45:56.568448 kubelet[3167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:45:56.568691 kubelet[3167]: I0313 00:45:56.568529 3167 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 00:45:56.575596 kubelet[3167]: I0313 00:45:56.575566 3167 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 13 00:45:56.575596 kubelet[3167]: I0313 00:45:56.575591 3167 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:45:56.575697 kubelet[3167]: I0313 00:45:56.575614 3167 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 13 00:45:56.575697 kubelet[3167]: I0313 00:45:56.575619 3167 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 00:45:56.576147 kubelet[3167]: I0313 00:45:56.575798 3167 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 00:45:56.577091 kubelet[3167]: I0313 00:45:56.577069 3167 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 13 00:45:56.578972 kubelet[3167]: I0313 00:45:56.578850 3167 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:45:56.586020 kubelet[3167]: I0313 00:45:56.585563 3167 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:45:56.588848 kubelet[3167]: I0313 00:45:56.588829 3167 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 13 00:45:56.589033 kubelet[3167]: I0313 00:45:56.589013 3167 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:45:56.589202 kubelet[3167]: I0313 00:45:56.589035 3167 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.4-n-0cdf9482ba","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:45:56.589202 kubelet[3167]: I0313 00:45:56.589178 3167 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 00:45:56.589202 kubelet[3167]: I0313 00:45:56.589185 3167 container_manager_linux.go:306] "Creating device plugin manager"
Mar 13 00:45:56.589202 kubelet[3167]: I0313 00:45:56.589204 3167 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 13 00:45:56.589351 kubelet[3167]: I0313 00:45:56.589342 3167 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:45:56.589645 kubelet[3167]: I0313 00:45:56.589459 3167 kubelet.go:475] "Attempting to sync node with API server"
Mar 13 00:45:56.589645 kubelet[3167]: I0313 00:45:56.589469 3167 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:45:56.589645 kubelet[3167]: I0313 00:45:56.589487 3167 kubelet.go:387] "Adding apiserver pod source"
Mar 13 00:45:56.591207 kubelet[3167]: I0313 00:45:56.589502 3167 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:45:56.594381 kubelet[3167]: I0313 00:45:56.594100 3167 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:45:56.594681 kubelet[3167]: I0313 00:45:56.594595 3167 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:45:56.594681 kubelet[3167]: I0313 00:45:56.594623 3167 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 13 00:45:56.603447 kubelet[3167]: I0313 00:45:56.603416 3167 server.go:1262] "Started kubelet"
Mar 13 00:45:56.609538 kubelet[3167]: I0313 00:45:56.609517 3167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 00:45:56.611125 kubelet[3167]: I0313 00:45:56.611002 3167 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:45:56.612229 kubelet[3167]: I0313 00:45:56.612205 3167 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 13 00:45:56.612457 kubelet[3167]: E0313 00:45:56.612287 3167 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.4-n-0cdf9482ba\" not found"
Mar 13 00:45:56.613049 kubelet[3167]: I0313 00:45:56.612695 3167 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 00:45:56.613049 kubelet[3167]: I0313 00:45:56.612788 3167 reconciler.go:29] "Reconciler: start to sync state"
Mar 13 00:45:56.614228 kubelet[3167]: I0313 00:45:56.614182 3167 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:45:56.615144 kubelet[3167]: I0313 00:45:56.615002 3167 server.go:310] "Adding debug handlers to kubelet server"
Mar 13 00:45:56.617553 kubelet[3167]: I0313 00:45:56.616742 3167 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:45:56.617553 kubelet[3167]: I0313 00:45:56.616826 3167 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:45:56.620309 kubelet[3167]: I0313 00:45:56.620274 3167 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:45:56.620373 kubelet[3167]: I0313 00:45:56.620317 3167 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 13 00:45:56.621138 kubelet[3167]: I0313 00:45:56.620439 3167 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:45:56.629174 kubelet[3167]: I0313 00:45:56.628722 3167 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:45:56.633516 kubelet[3167]: I0313 00:45:56.633488 3167 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 13 00:45:56.634544 kubelet[3167]: I0313 00:45:56.634506 3167 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 13 00:45:56.634544 kubelet[3167]: I0313 00:45:56.634530 3167 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 13 00:45:56.634544 kubelet[3167]: I0313 00:45:56.634547 3167 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 13 00:45:56.634658 kubelet[3167]: E0313 00:45:56.634577 3167 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 00:45:56.642611 kubelet[3167]: E0313 00:45:56.642582 3167 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 00:45:56.693020 kubelet[3167]: I0313 00:45:56.693002 3167 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 00:45:56.693020 kubelet[3167]: I0313 00:45:56.693019 3167 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 00:45:56.693102 kubelet[3167]: I0313 00:45:56.693032 3167 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:45:56.693233 kubelet[3167]: I0313 00:45:56.693143 3167 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 13 00:45:56.693233 kubelet[3167]: I0313 00:45:56.693151 3167 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 13 00:45:56.693233 kubelet[3167]: I0313 00:45:56.693163 3167 policy_none.go:49] "None policy: Start"
Mar 13 00:45:56.693233 kubelet[3167]: I0313 00:45:56.693170 3167 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 13 00:45:56.693233 kubelet[3167]: I0313 00:45:56.693178 3167 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 13 00:45:56.693333 kubelet[3167]: I0313 00:45:56.693248 3167 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 13 00:45:56.693333 kubelet[3167]: I0313 00:45:56.693254 3167 policy_none.go:47] "Start"
Mar 13 00:45:56.704443 kubelet[3167]: E0313 00:45:56.704401 3167 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 00:45:56.704736 kubelet[3167]: I0313 00:45:56.704723 3167 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 13 00:45:56.704786 kubelet[3167]: I0313 00:45:56.704736 3167 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 00:45:56.704993 kubelet[3167]: I0313 00:45:56.704984 3167 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 13 00:45:56.709718 kubelet[3167]: E0313 00:45:56.709700 3167 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 00:45:56.732291 sudo[3207]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 13 00:45:56.732461 sudo[3207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 13 00:45:56.736035 kubelet[3167]: I0313 00:45:56.736017 3167 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.736668 kubelet[3167]: I0313 00:45:56.736300 3167 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.736668 kubelet[3167]: I0313 00:45:56.736452 3167 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.748645 kubelet[3167]: I0313 00:45:56.748565 3167 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 13 00:45:56.748645 kubelet[3167]: E0313 00:45:56.748603 3167 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-0cdf9482ba\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.753527 kubelet[3167]: I0313 00:45:56.753179 3167 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 13 00:45:56.753527 kubelet[3167]: I0313 00:45:56.753209 3167 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 13 00:45:56.753527 kubelet[3167]: E0313 00:45:56.753234 3167 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.4-n-0cdf9482ba\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.753527 kubelet[3167]: E0313 00:45:56.753279 3167 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.813923 kubelet[3167]: I0313 00:45:56.812971 3167 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.814123 kubelet[3167]: I0313 00:45:56.813925 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44eaf50940d8898367e92c052da8ac06-kubeconfig\") pod \"kube-scheduler-ci-4459.2.4-n-0cdf9482ba\" (UID: \"44eaf50940d8898367e92c052da8ac06\") " pod="kube-system/kube-scheduler-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.814266 kubelet[3167]: I0313 00:45:56.814211 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14cf988c92d814584f4586c3b7e0735a-ca-certs\") pod \"kube-apiserver-ci-4459.2.4-n-0cdf9482ba\" (UID: \"14cf988c92d814584f4586c3b7e0735a\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.814266 kubelet[3167]: I0313 00:45:56.814233 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14cf988c92d814584f4586c3b7e0735a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.4-n-0cdf9482ba\" (UID: \"14cf988c92d814584f4586c3b7e0735a\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.814266 kubelet[3167]: I0313 00:45:56.814251 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1aa8f3ddcf34a7558bb3ff0b7112daa-ca-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" (UID: \"c1aa8f3ddcf34a7558bb3ff0b7112daa\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.814504 kubelet[3167]: I0313 00:45:56.814461 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1aa8f3ddcf34a7558bb3ff0b7112daa-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" (UID: \"c1aa8f3ddcf34a7558bb3ff0b7112daa\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.814504 kubelet[3167]: I0313 00:45:56.814488 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1aa8f3ddcf34a7558bb3ff0b7112daa-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" (UID: \"c1aa8f3ddcf34a7558bb3ff0b7112daa\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.814652 kubelet[3167]: I0313 00:45:56.814607 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1aa8f3ddcf34a7558bb3ff0b7112daa-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" (UID: \"c1aa8f3ddcf34a7558bb3ff0b7112daa\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.814652 kubelet[3167]: I0313 00:45:56.814624 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14cf988c92d814584f4586c3b7e0735a-k8s-certs\") pod \"kube-apiserver-ci-4459.2.4-n-0cdf9482ba\" (UID: \"14cf988c92d814584f4586c3b7e0735a\") " pod="kube-system/kube-apiserver-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.814738 kubelet[3167]: I0313 00:45:56.814639 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1aa8f3ddcf34a7558bb3ff0b7112daa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.4-n-0cdf9482ba\" (UID: \"c1aa8f3ddcf34a7558bb3ff0b7112daa\") " pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.827210 kubelet[3167]: I0313 00:45:56.827192 3167 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.827294 kubelet[3167]: I0313 00:45:56.827284 3167 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.4-n-0cdf9482ba"
Mar 13 00:45:56.827322 kubelet[3167]: I0313 00:45:56.827305 3167 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 13 00:45:56.827585 containerd[1719]: time="2026-03-13T00:45:56.827560894Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 13 00:45:56.828193 kubelet[3167]: I0313 00:45:56.828108 3167 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 13 00:45:57.044873 sudo[3207]: pam_unix(sudo:session): session closed for user root
Mar 13 00:45:57.592576 kubelet[3167]: I0313 00:45:57.592547 3167 apiserver.go:52] "Watching apiserver"
Mar 13 00:45:57.609506 systemd[1]: Created slice kubepods-besteffort-podd9a75b4b_9bb2_4fdd_b0c7_22d05764dd3a.slice - libcontainer container kubepods-besteffort-podd9a75b4b_9bb2_4fdd_b0c7_22d05764dd3a.slice.
Mar 13 00:45:57.618973 kubelet[3167]: I0313 00:45:57.618858 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.4-n-0cdf9482ba" podStartSLOduration=5.61884472 podStartE2EDuration="5.61884472s" podCreationTimestamp="2026-03-13 00:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:45:57.618809066 +0000 UTC m=+1.087389781" watchObservedRunningTime="2026-03-13 00:45:57.61884472 +0000 UTC m=+1.087425410"
Mar 13 00:45:57.621099 kubelet[3167]: I0313 00:45:57.620103 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3da219c9-c922-4d93-81de-b9403855675c-hubble-tls\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss"
Mar 13 00:45:57.621288 kubelet[3167]: I0313 00:45:57.621139 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3da219c9-c922-4d93-81de-b9403855675c-clustermesh-secrets\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss"
Mar 13 00:45:57.621288 kubelet[3167]: I0313 00:45:57.621163 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-host-proc-sys-kernel\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss"
Mar 13 00:45:57.621288 kubelet[3167]: I0313 00:45:57.621180 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-hostproc\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss"
Mar 13 00:45:57.621288 kubelet[3167]: I0313 00:45:57.621196 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-host-proc-sys-net\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss"
Mar 13 00:45:57.621288 kubelet[3167]: I0313 00:45:57.621210 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cilium-run\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss"
Mar 13 00:45:57.621288 kubelet[3167]: I0313 00:45:57.621236 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-lib-modules\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss"
Mar 13 00:45:57.621430 kubelet[3167]: I0313 00:45:57.621282 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName:
\"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-etc-cni-netd\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss" Mar 13 00:45:57.621430 kubelet[3167]: I0313 00:45:57.621303 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96clm\" (UniqueName: \"kubernetes.io/projected/3da219c9-c922-4d93-81de-b9403855675c-kube-api-access-96clm\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss" Mar 13 00:45:57.621430 kubelet[3167]: I0313 00:45:57.621324 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdn57\" (UniqueName: \"kubernetes.io/projected/15fba49b-bd38-41e3-971b-f31fbc2964df-kube-api-access-rdn57\") pod \"cilium-operator-6f9c7c5859-vd7tt\" (UID: \"15fba49b-bd38-41e3-971b-f31fbc2964df\") " pod="kube-system/cilium-operator-6f9c7c5859-vd7tt" Mar 13 00:45:57.621430 kubelet[3167]: I0313 00:45:57.621351 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cni-path\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss" Mar 13 00:45:57.621430 kubelet[3167]: I0313 00:45:57.621368 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-xtables-lock\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss" Mar 13 00:45:57.621538 kubelet[3167]: I0313 00:45:57.621383 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9a75b4b-9bb2-4fdd-b0c7-22d05764dd3a-lib-modules\") 
pod \"kube-proxy-gc79j\" (UID: \"d9a75b4b-9bb2-4fdd-b0c7-22d05764dd3a\") " pod="kube-system/kube-proxy-gc79j" Mar 13 00:45:57.621538 kubelet[3167]: I0313 00:45:57.621401 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15fba49b-bd38-41e3-971b-f31fbc2964df-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-vd7tt\" (UID: \"15fba49b-bd38-41e3-971b-f31fbc2964df\") " pod="kube-system/cilium-operator-6f9c7c5859-vd7tt" Mar 13 00:45:57.621538 kubelet[3167]: I0313 00:45:57.621416 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74xrp\" (UniqueName: \"kubernetes.io/projected/d9a75b4b-9bb2-4fdd-b0c7-22d05764dd3a-kube-api-access-74xrp\") pod \"kube-proxy-gc79j\" (UID: \"d9a75b4b-9bb2-4fdd-b0c7-22d05764dd3a\") " pod="kube-system/kube-proxy-gc79j" Mar 13 00:45:57.621538 kubelet[3167]: I0313 00:45:57.621433 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d9a75b4b-9bb2-4fdd-b0c7-22d05764dd3a-kube-proxy\") pod \"kube-proxy-gc79j\" (UID: \"d9a75b4b-9bb2-4fdd-b0c7-22d05764dd3a\") " pod="kube-system/kube-proxy-gc79j" Mar 13 00:45:57.621538 kubelet[3167]: I0313 00:45:57.621449 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9a75b4b-9bb2-4fdd-b0c7-22d05764dd3a-xtables-lock\") pod \"kube-proxy-gc79j\" (UID: \"d9a75b4b-9bb2-4fdd-b0c7-22d05764dd3a\") " pod="kube-system/kube-proxy-gc79j" Mar 13 00:45:57.621660 kubelet[3167]: I0313 00:45:57.621463 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-bpf-maps\") pod \"cilium-bt7ss\" (UID: 
\"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss" Mar 13 00:45:57.621660 kubelet[3167]: I0313 00:45:57.621480 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cilium-cgroup\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss" Mar 13 00:45:57.621660 kubelet[3167]: I0313 00:45:57.621493 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3da219c9-c922-4d93-81de-b9403855675c-cilium-config-path\") pod \"cilium-bt7ss\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " pod="kube-system/cilium-bt7ss" Mar 13 00:45:57.625150 systemd[1]: Created slice kubepods-burstable-pod3da219c9_c922_4d93_81de_b9403855675c.slice - libcontainer container kubepods-burstable-pod3da219c9_c922_4d93_81de_b9403855675c.slice. Mar 13 00:45:57.629542 systemd[1]: Created slice kubepods-besteffort-pod15fba49b_bd38_41e3_971b_f31fbc2964df.slice - libcontainer container kubepods-besteffort-pod15fba49b_bd38_41e3_971b_f31fbc2964df.slice. 
Mar 13 00:45:57.641097 kubelet[3167]: I0313 00:45:57.641057 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.4-n-0cdf9482ba" podStartSLOduration=5.641044639 podStartE2EDuration="5.641044639s" podCreationTimestamp="2026-03-13 00:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:45:57.631889939 +0000 UTC m=+1.100470632" watchObservedRunningTime="2026-03-13 00:45:57.641044639 +0000 UTC m=+1.109625343" Mar 13 00:45:57.650478 kubelet[3167]: I0313 00:45:57.650406 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.4-n-0cdf9482ba" podStartSLOduration=5.650394869 podStartE2EDuration="5.650394869s" podCreationTimestamp="2026-03-13 00:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:45:57.641026186 +0000 UTC m=+1.109606879" watchObservedRunningTime="2026-03-13 00:45:57.650394869 +0000 UTC m=+1.118975564" Mar 13 00:45:57.668343 kubelet[3167]: I0313 00:45:57.668314 3167 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:57.673898 kubelet[3167]: I0313 00:45:57.673880 3167 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 00:45:57.673969 kubelet[3167]: E0313 00:45:57.673918 3167 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.4-n-0cdf9482ba\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.4-n-0cdf9482ba" Mar 13 00:45:57.713312 kubelet[3167]: I0313 00:45:57.713295 3167 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:45:57.926756 
containerd[1719]: time="2026-03-13T00:45:57.926721495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gc79j,Uid:d9a75b4b-9bb2-4fdd-b0c7-22d05764dd3a,Namespace:kube-system,Attempt:0,}" Mar 13 00:45:57.931879 containerd[1719]: time="2026-03-13T00:45:57.931854028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bt7ss,Uid:3da219c9-c922-4d93-81de-b9403855675c,Namespace:kube-system,Attempt:0,}" Mar 13 00:45:57.937677 containerd[1719]: time="2026-03-13T00:45:57.937652560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-vd7tt,Uid:15fba49b-bd38-41e3-971b-f31fbc2964df,Namespace:kube-system,Attempt:0,}" Mar 13 00:45:58.000900 containerd[1719]: time="2026-03-13T00:45:58.000626590Z" level=info msg="connecting to shim f35b9d0291b6a433d2738fe4b0a5ddd540438bd34fcca74af1a74baf1e978b1f" address="unix:///run/containerd/s/1b57fa4dd64ab797c1c416e814d21ca42a35910ec6a8a6c25f5008f2a7d1151e" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:45:58.002317 containerd[1719]: time="2026-03-13T00:45:58.002293798Z" level=info msg="connecting to shim 5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb" address="unix:///run/containerd/s/137b9828df94b65819569dc34c9ae6dd71207453d2c66fa6a363278eaf0a60a0" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:45:58.023970 containerd[1719]: time="2026-03-13T00:45:58.023944784Z" level=info msg="connecting to shim 10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df" address="unix:///run/containerd/s/63df04ab72576936388c31cf154d085108caebfebcde0cbdd932ad58b50b9377" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:45:58.033393 systemd[1]: Started cri-containerd-f35b9d0291b6a433d2738fe4b0a5ddd540438bd34fcca74af1a74baf1e978b1f.scope - libcontainer container f35b9d0291b6a433d2738fe4b0a5ddd540438bd34fcca74af1a74baf1e978b1f. 
Mar 13 00:45:58.045236 systemd[1]: Started cri-containerd-5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb.scope - libcontainer container 5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb. Mar 13 00:45:58.048608 systemd[1]: Started cri-containerd-10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df.scope - libcontainer container 10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df. Mar 13 00:45:58.081973 containerd[1719]: time="2026-03-13T00:45:58.081941818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gc79j,Uid:d9a75b4b-9bb2-4fdd-b0c7-22d05764dd3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f35b9d0291b6a433d2738fe4b0a5ddd540438bd34fcca74af1a74baf1e978b1f\"" Mar 13 00:45:58.088198 containerd[1719]: time="2026-03-13T00:45:58.088096707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bt7ss,Uid:3da219c9-c922-4d93-81de-b9403855675c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\"" Mar 13 00:45:58.090322 containerd[1719]: time="2026-03-13T00:45:58.089800286Z" level=info msg="CreateContainer within sandbox \"f35b9d0291b6a433d2738fe4b0a5ddd540438bd34fcca74af1a74baf1e978b1f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:45:58.090531 containerd[1719]: time="2026-03-13T00:45:58.090515318Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 13 00:45:58.111334 containerd[1719]: time="2026-03-13T00:45:58.111308482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-vd7tt,Uid:15fba49b-bd38-41e3-971b-f31fbc2964df,Namespace:kube-system,Attempt:0,} returns sandbox id \"10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df\"" Mar 13 00:45:58.125847 containerd[1719]: time="2026-03-13T00:45:58.125736146Z" level=info msg="Container 
0ecf330435e6c8da1d2a79a49cc5b601d17bee8957540153e68e593650517b11: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:45:58.139522 containerd[1719]: time="2026-03-13T00:45:58.139500020Z" level=info msg="CreateContainer within sandbox \"f35b9d0291b6a433d2738fe4b0a5ddd540438bd34fcca74af1a74baf1e978b1f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ecf330435e6c8da1d2a79a49cc5b601d17bee8957540153e68e593650517b11\"" Mar 13 00:45:58.139944 containerd[1719]: time="2026-03-13T00:45:58.139923174Z" level=info msg="StartContainer for \"0ecf330435e6c8da1d2a79a49cc5b601d17bee8957540153e68e593650517b11\"" Mar 13 00:45:58.141268 containerd[1719]: time="2026-03-13T00:45:58.141243939Z" level=info msg="connecting to shim 0ecf330435e6c8da1d2a79a49cc5b601d17bee8957540153e68e593650517b11" address="unix:///run/containerd/s/1b57fa4dd64ab797c1c416e814d21ca42a35910ec6a8a6c25f5008f2a7d1151e" protocol=ttrpc version=3 Mar 13 00:45:58.156248 systemd[1]: Started cri-containerd-0ecf330435e6c8da1d2a79a49cc5b601d17bee8957540153e68e593650517b11.scope - libcontainer container 0ecf330435e6c8da1d2a79a49cc5b601d17bee8957540153e68e593650517b11. 
Mar 13 00:45:58.339369 containerd[1719]: time="2026-03-13T00:45:58.338931248Z" level=info msg="StartContainer for \"0ecf330435e6c8da1d2a79a49cc5b601d17bee8957540153e68e593650517b11\" returns successfully" Mar 13 00:45:58.684499 kubelet[3167]: I0313 00:45:58.684426 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gc79j" podStartSLOduration=2.684409207 podStartE2EDuration="2.684409207s" podCreationTimestamp="2026-03-13 00:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:45:58.684331042 +0000 UTC m=+2.152911733" watchObservedRunningTime="2026-03-13 00:45:58.684409207 +0000 UTC m=+2.152989899" Mar 13 00:46:02.125515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389298074.mount: Deactivated successfully. Mar 13 00:46:03.556335 containerd[1719]: time="2026-03-13T00:46:03.556293498Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:03.559021 containerd[1719]: time="2026-03-13T00:46:03.558981373Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 13 00:46:03.563892 containerd[1719]: time="2026-03-13T00:46:03.563847220Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:03.565127 containerd[1719]: time="2026-03-13T00:46:03.564860326Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.474278229s" Mar 13 00:46:03.565127 containerd[1719]: time="2026-03-13T00:46:03.564891775Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 13 00:46:03.565814 containerd[1719]: time="2026-03-13T00:46:03.565790680Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 13 00:46:03.571417 containerd[1719]: time="2026-03-13T00:46:03.571394128Z" level=info msg="CreateContainer within sandbox \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 00:46:03.590130 containerd[1719]: time="2026-03-13T00:46:03.590074674Z" level=info msg="Container 4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:03.612998 containerd[1719]: time="2026-03-13T00:46:03.612978112Z" level=info msg="CreateContainer within sandbox \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\"" Mar 13 00:46:03.613583 containerd[1719]: time="2026-03-13T00:46:03.613553953Z" level=info msg="StartContainer for \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\"" Mar 13 00:46:03.614425 containerd[1719]: time="2026-03-13T00:46:03.614390592Z" level=info msg="connecting to shim 4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171" address="unix:///run/containerd/s/137b9828df94b65819569dc34c9ae6dd71207453d2c66fa6a363278eaf0a60a0" protocol=ttrpc version=3 Mar 13 00:46:03.633293 
systemd[1]: Started cri-containerd-4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171.scope - libcontainer container 4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171. Mar 13 00:46:03.658594 containerd[1719]: time="2026-03-13T00:46:03.658570511Z" level=info msg="StartContainer for \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\" returns successfully" Mar 13 00:46:03.663479 systemd[1]: cri-containerd-4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171.scope: Deactivated successfully. Mar 13 00:46:03.664960 containerd[1719]: time="2026-03-13T00:46:03.664938976Z" level=info msg="received container exit event container_id:\"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\" id:\"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\" pid:3566 exited_at:{seconds:1773362763 nanos:664637244}" Mar 13 00:46:03.688752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171-rootfs.mount: Deactivated successfully. Mar 13 00:46:08.291737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476848523.mount: Deactivated successfully. 
Mar 13 00:46:08.679306 containerd[1719]: time="2026-03-13T00:46:08.679268610Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:08.682275 containerd[1719]: time="2026-03-13T00:46:08.682245096Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 13 00:46:08.691579 containerd[1719]: time="2026-03-13T00:46:08.691481472Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:46:08.692443 containerd[1719]: time="2026-03-13T00:46:08.692358603Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.126541586s" Mar 13 00:46:08.692443 containerd[1719]: time="2026-03-13T00:46:08.692387162Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 13 00:46:08.699981 containerd[1719]: time="2026-03-13T00:46:08.699421749Z" level=info msg="CreateContainer within sandbox \"10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 13 00:46:08.706927 containerd[1719]: time="2026-03-13T00:46:08.706895883Z" level=info msg="CreateContainer within sandbox 
\"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 00:46:08.726080 containerd[1719]: time="2026-03-13T00:46:08.725484353Z" level=info msg="Container 49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:08.730720 containerd[1719]: time="2026-03-13T00:46:08.730698260Z" level=info msg="Container f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:08.746387 containerd[1719]: time="2026-03-13T00:46:08.746362411Z" level=info msg="CreateContainer within sandbox \"10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\"" Mar 13 00:46:08.747056 containerd[1719]: time="2026-03-13T00:46:08.746708023Z" level=info msg="StartContainer for \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\"" Mar 13 00:46:08.747456 containerd[1719]: time="2026-03-13T00:46:08.747428716Z" level=info msg="connecting to shim 49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2" address="unix:///run/containerd/s/63df04ab72576936388c31cf154d085108caebfebcde0cbdd932ad58b50b9377" protocol=ttrpc version=3 Mar 13 00:46:08.749101 containerd[1719]: time="2026-03-13T00:46:08.748976050Z" level=info msg="CreateContainer within sandbox \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\"" Mar 13 00:46:08.749507 containerd[1719]: time="2026-03-13T00:46:08.749488355Z" level=info msg="StartContainer for \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\"" Mar 13 00:46:08.752705 containerd[1719]: 
time="2026-03-13T00:46:08.752602239Z" level=info msg="connecting to shim f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec" address="unix:///run/containerd/s/137b9828df94b65819569dc34c9ae6dd71207453d2c66fa6a363278eaf0a60a0" protocol=ttrpc version=3 Mar 13 00:46:08.768284 systemd[1]: Started cri-containerd-49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2.scope - libcontainer container 49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2. Mar 13 00:46:08.770824 systemd[1]: Started cri-containerd-f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec.scope - libcontainer container f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec. Mar 13 00:46:08.805229 containerd[1719]: time="2026-03-13T00:46:08.803769333Z" level=info msg="StartContainer for \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\" returns successfully" Mar 13 00:46:08.817598 containerd[1719]: time="2026-03-13T00:46:08.817573715Z" level=info msg="StartContainer for \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\" returns successfully" Mar 13 00:46:08.833663 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:46:08.833860 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:46:08.834177 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:46:08.836345 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:46:08.839770 systemd[1]: cri-containerd-f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec.scope: Deactivated successfully. 
Mar 13 00:46:08.840740 containerd[1719]: time="2026-03-13T00:46:08.840717468Z" level=info msg="received container exit event container_id:\"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\" id:\"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\" pid:3650 exited_at:{seconds:1773362768 nanos:840319094}" Mar 13 00:46:08.862166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:46:09.716812 kubelet[3167]: I0313 00:46:09.716757 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-vd7tt" podStartSLOduration=2.135811871 podStartE2EDuration="12.716742518s" podCreationTimestamp="2026-03-13 00:45:57 +0000 UTC" firstStartedPulling="2026-03-13 00:45:58.112052033 +0000 UTC m=+1.580632713" lastFinishedPulling="2026-03-13 00:46:08.692982682 +0000 UTC m=+12.161563360" observedRunningTime="2026-03-13 00:46:09.716602029 +0000 UTC m=+13.185182719" watchObservedRunningTime="2026-03-13 00:46:09.716742518 +0000 UTC m=+13.185323208" Mar 13 00:46:09.718274 containerd[1719]: time="2026-03-13T00:46:09.718152930Z" level=info msg="CreateContainer within sandbox \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 13 00:46:09.751162 containerd[1719]: time="2026-03-13T00:46:09.750272359Z" level=info msg="Container f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:09.772293 containerd[1719]: time="2026-03-13T00:46:09.772265901Z" level=info msg="CreateContainer within sandbox \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\"" Mar 13 00:46:09.774498 containerd[1719]: time="2026-03-13T00:46:09.774476735Z" level=info msg="StartContainer for 
\"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\"" Mar 13 00:46:09.775877 containerd[1719]: time="2026-03-13T00:46:09.775853608Z" level=info msg="connecting to shim f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757" address="unix:///run/containerd/s/137b9828df94b65819569dc34c9ae6dd71207453d2c66fa6a363278eaf0a60a0" protocol=ttrpc version=3 Mar 13 00:46:09.804247 systemd[1]: Started cri-containerd-f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757.scope - libcontainer container f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757. Mar 13 00:46:09.870420 systemd[1]: cri-containerd-f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757.scope: Deactivated successfully. Mar 13 00:46:09.871948 containerd[1719]: time="2026-03-13T00:46:09.871905727Z" level=info msg="received container exit event container_id:\"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\" id:\"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\" pid:3709 exited_at:{seconds:1773362769 nanos:871747871}" Mar 13 00:46:09.873875 containerd[1719]: time="2026-03-13T00:46:09.873805039Z" level=info msg="StartContainer for \"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\" returns successfully" Mar 13 00:46:09.890093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757-rootfs.mount: Deactivated successfully. 
Mar 13 00:46:10.723691 containerd[1719]: time="2026-03-13T00:46:10.723654713Z" level=info msg="CreateContainer within sandbox \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 13 00:46:10.747247 containerd[1719]: time="2026-03-13T00:46:10.746615309Z" level=info msg="Container 5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:10.759813 containerd[1719]: time="2026-03-13T00:46:10.759784345Z" level=info msg="CreateContainer within sandbox \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\"" Mar 13 00:46:10.760433 containerd[1719]: time="2026-03-13T00:46:10.760407736Z" level=info msg="StartContainer for \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\"" Mar 13 00:46:10.761421 containerd[1719]: time="2026-03-13T00:46:10.761363372Z" level=info msg="connecting to shim 5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07" address="unix:///run/containerd/s/137b9828df94b65819569dc34c9ae6dd71207453d2c66fa6a363278eaf0a60a0" protocol=ttrpc version=3 Mar 13 00:46:10.784256 systemd[1]: Started cri-containerd-5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07.scope - libcontainer container 5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07. Mar 13 00:46:10.802905 systemd[1]: cri-containerd-5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07.scope: Deactivated successfully. 
Mar 13 00:46:10.808741 containerd[1719]: time="2026-03-13T00:46:10.808290930Z" level=info msg="received container exit event container_id:\"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\" id:\"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\" pid:3748 exited_at:{seconds:1773362770 nanos:803893955}" Mar 13 00:46:10.814310 containerd[1719]: time="2026-03-13T00:46:10.814286325Z" level=info msg="StartContainer for \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\" returns successfully" Mar 13 00:46:10.822299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07-rootfs.mount: Deactivated successfully. Mar 13 00:46:11.726908 containerd[1719]: time="2026-03-13T00:46:11.726806155Z" level=info msg="CreateContainer within sandbox \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 13 00:46:11.754246 containerd[1719]: time="2026-03-13T00:46:11.754210869Z" level=info msg="Container ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:11.766033 containerd[1719]: time="2026-03-13T00:46:11.766000822Z" level=info msg="CreateContainer within sandbox \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\"" Mar 13 00:46:11.767140 containerd[1719]: time="2026-03-13T00:46:11.766396144Z" level=info msg="StartContainer for \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\"" Mar 13 00:46:11.767374 containerd[1719]: time="2026-03-13T00:46:11.767335642Z" level=info msg="connecting to shim ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2" 
address="unix:///run/containerd/s/137b9828df94b65819569dc34c9ae6dd71207453d2c66fa6a363278eaf0a60a0" protocol=ttrpc version=3 Mar 13 00:46:11.789267 systemd[1]: Started cri-containerd-ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2.scope - libcontainer container ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2. Mar 13 00:46:11.822710 containerd[1719]: time="2026-03-13T00:46:11.822623872Z" level=info msg="StartContainer for \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\" returns successfully" Mar 13 00:46:11.878509 kubelet[3167]: I0313 00:46:11.877278 3167 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 00:46:11.929384 systemd[1]: Created slice kubepods-burstable-podf2c28fcb_9213_4f11_b581_90765e3a6f3d.slice - libcontainer container kubepods-burstable-podf2c28fcb_9213_4f11_b581_90765e3a6f3d.slice. Mar 13 00:46:11.936028 systemd[1]: Created slice kubepods-burstable-pod9ebee503_2411_4133_8adb_2947d0bb1408.slice - libcontainer container kubepods-burstable-pod9ebee503_2411_4133_8adb_2947d0bb1408.slice. 
Mar 13 00:46:12.014543 kubelet[3167]: I0313 00:46:12.014479 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ebee503-2411-4133-8adb-2947d0bb1408-config-volume\") pod \"coredns-66bc5c9577-mzkpj\" (UID: \"9ebee503-2411-4133-8adb-2947d0bb1408\") " pod="kube-system/coredns-66bc5c9577-mzkpj" Mar 13 00:46:12.014543 kubelet[3167]: I0313 00:46:12.014535 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2c28fcb-9213-4f11-b581-90765e3a6f3d-config-volume\") pod \"coredns-66bc5c9577-sks72\" (UID: \"f2c28fcb-9213-4f11-b581-90765e3a6f3d\") " pod="kube-system/coredns-66bc5c9577-sks72" Mar 13 00:46:12.014634 kubelet[3167]: I0313 00:46:12.014554 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87mps\" (UniqueName: \"kubernetes.io/projected/f2c28fcb-9213-4f11-b581-90765e3a6f3d-kube-api-access-87mps\") pod \"coredns-66bc5c9577-sks72\" (UID: \"f2c28fcb-9213-4f11-b581-90765e3a6f3d\") " pod="kube-system/coredns-66bc5c9577-sks72" Mar 13 00:46:12.014634 kubelet[3167]: I0313 00:46:12.014570 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khlxk\" (UniqueName: \"kubernetes.io/projected/9ebee503-2411-4133-8adb-2947d0bb1408-kube-api-access-khlxk\") pod \"coredns-66bc5c9577-mzkpj\" (UID: \"9ebee503-2411-4133-8adb-2947d0bb1408\") " pod="kube-system/coredns-66bc5c9577-mzkpj" Mar 13 00:46:12.241647 containerd[1719]: time="2026-03-13T00:46:12.241352999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sks72,Uid:f2c28fcb-9213-4f11-b581-90765e3a6f3d,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:12.245496 containerd[1719]: time="2026-03-13T00:46:12.245455211Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-mzkpj,Uid:9ebee503-2411-4133-8adb-2947d0bb1408,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:12.739427 kubelet[3167]: I0313 00:46:12.739227 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bt7ss" podStartSLOduration=10.262797963 podStartE2EDuration="15.739211798s" podCreationTimestamp="2026-03-13 00:45:57 +0000 UTC" firstStartedPulling="2026-03-13 00:45:58.089261766 +0000 UTC m=+1.557842457" lastFinishedPulling="2026-03-13 00:46:03.565675602 +0000 UTC m=+7.034256292" observedRunningTime="2026-03-13 00:46:12.738595321 +0000 UTC m=+16.207176032" watchObservedRunningTime="2026-03-13 00:46:12.739211798 +0000 UTC m=+16.207792483" Mar 13 00:46:13.868807 systemd-networkd[1343]: cilium_host: Link UP Mar 13 00:46:13.872603 systemd-networkd[1343]: cilium_net: Link UP Mar 13 00:46:13.872748 systemd-networkd[1343]: cilium_host: Gained carrier Mar 13 00:46:13.872851 systemd-networkd[1343]: cilium_net: Gained carrier Mar 13 00:46:13.977215 systemd-networkd[1343]: cilium_host: Gained IPv6LL Mar 13 00:46:14.000239 systemd-networkd[1343]: cilium_vxlan: Link UP Mar 13 00:46:14.000248 systemd-networkd[1343]: cilium_vxlan: Gained carrier Mar 13 00:46:14.189167 kernel: NET: Registered PF_ALG protocol family Mar 13 00:46:14.657067 systemd-networkd[1343]: lxc_health: Link UP Mar 13 00:46:14.662760 systemd-networkd[1343]: lxc_health: Gained carrier Mar 13 00:46:14.745201 systemd-networkd[1343]: cilium_net: Gained IPv6LL Mar 13 00:46:14.812135 kernel: eth0: renamed from tmp9df6f Mar 13 00:46:14.824368 kernel: eth0: renamed from tmp10fda Mar 13 00:46:14.824148 systemd-networkd[1343]: lxc5497b5a28762: Link UP Mar 13 00:46:14.825320 systemd-networkd[1343]: lxc311a3bb22bc5: Link UP Mar 13 00:46:14.828625 systemd-networkd[1343]: lxc5497b5a28762: Gained carrier Mar 13 00:46:14.828940 systemd-networkd[1343]: lxc311a3bb22bc5: Gained carrier Mar 13 00:46:15.193252 systemd-networkd[1343]: cilium_vxlan: 
Gained IPv6LL Mar 13 00:46:16.217557 systemd-networkd[1343]: lxc5497b5a28762: Gained IPv6LL Mar 13 00:46:16.281256 systemd-networkd[1343]: lxc_health: Gained IPv6LL Mar 13 00:46:16.601488 systemd-networkd[1343]: lxc311a3bb22bc5: Gained IPv6LL Mar 13 00:46:17.771840 containerd[1719]: time="2026-03-13T00:46:17.771746485Z" level=info msg="connecting to shim 10fdacc6cb597b59e31f72544aacffef4593df0dfa0a7a4750ee36cda26f2055" address="unix:///run/containerd/s/0fd6546f92e7706bd7329f6b547acadabcef5646a5a718ebf7384c619ba08c5e" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:17.810270 containerd[1719]: time="2026-03-13T00:46:17.809039289Z" level=info msg="connecting to shim 9df6fbee4e7e22a5064e7b675b7d7de6be784ddba06c98d353805acd906cb20c" address="unix:///run/containerd/s/4b7f9a7854c93ea0c61373026c9c5e3a02260a5e4c2182758c53ff1ea668f2ab" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:17.817264 systemd[1]: Started cri-containerd-10fdacc6cb597b59e31f72544aacffef4593df0dfa0a7a4750ee36cda26f2055.scope - libcontainer container 10fdacc6cb597b59e31f72544aacffef4593df0dfa0a7a4750ee36cda26f2055. Mar 13 00:46:17.850227 systemd[1]: Started cri-containerd-9df6fbee4e7e22a5064e7b675b7d7de6be784ddba06c98d353805acd906cb20c.scope - libcontainer container 9df6fbee4e7e22a5064e7b675b7d7de6be784ddba06c98d353805acd906cb20c. 
Mar 13 00:46:17.881396 containerd[1719]: time="2026-03-13T00:46:17.881373543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sks72,Uid:f2c28fcb-9213-4f11-b581-90765e3a6f3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"10fdacc6cb597b59e31f72544aacffef4593df0dfa0a7a4750ee36cda26f2055\"" Mar 13 00:46:17.889162 containerd[1719]: time="2026-03-13T00:46:17.888968855Z" level=info msg="CreateContainer within sandbox \"10fdacc6cb597b59e31f72544aacffef4593df0dfa0a7a4750ee36cda26f2055\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:46:17.905128 containerd[1719]: time="2026-03-13T00:46:17.905087577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mzkpj,Uid:9ebee503-2411-4133-8adb-2947d0bb1408,Namespace:kube-system,Attempt:0,} returns sandbox id \"9df6fbee4e7e22a5064e7b675b7d7de6be784ddba06c98d353805acd906cb20c\"" Mar 13 00:46:17.911894 containerd[1719]: time="2026-03-13T00:46:17.911872584Z" level=info msg="CreateContainer within sandbox \"9df6fbee4e7e22a5064e7b675b7d7de6be784ddba06c98d353805acd906cb20c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:46:17.915249 containerd[1719]: time="2026-03-13T00:46:17.915223650Z" level=info msg="Container 97a380280915fbf7d7c9f4654e6307eac30371b6ae11980d519800521f5193f5: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:17.929599 containerd[1719]: time="2026-03-13T00:46:17.929576585Z" level=info msg="Container a372069098892f9eb5079c31db5be58ab7d983d4a4bac0c4dfc9f3e3e94871ac: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:17.935610 containerd[1719]: time="2026-03-13T00:46:17.935586050Z" level=info msg="CreateContainer within sandbox \"10fdacc6cb597b59e31f72544aacffef4593df0dfa0a7a4750ee36cda26f2055\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97a380280915fbf7d7c9f4654e6307eac30371b6ae11980d519800521f5193f5\"" Mar 13 00:46:17.935963 containerd[1719]: 
time="2026-03-13T00:46:17.935942335Z" level=info msg="StartContainer for \"97a380280915fbf7d7c9f4654e6307eac30371b6ae11980d519800521f5193f5\"" Mar 13 00:46:17.936607 containerd[1719]: time="2026-03-13T00:46:17.936580704Z" level=info msg="connecting to shim 97a380280915fbf7d7c9f4654e6307eac30371b6ae11980d519800521f5193f5" address="unix:///run/containerd/s/0fd6546f92e7706bd7329f6b547acadabcef5646a5a718ebf7384c619ba08c5e" protocol=ttrpc version=3 Mar 13 00:46:17.944555 containerd[1719]: time="2026-03-13T00:46:17.944476232Z" level=info msg="CreateContainer within sandbox \"9df6fbee4e7e22a5064e7b675b7d7de6be784ddba06c98d353805acd906cb20c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a372069098892f9eb5079c31db5be58ab7d983d4a4bac0c4dfc9f3e3e94871ac\"" Mar 13 00:46:17.945218 containerd[1719]: time="2026-03-13T00:46:17.945195322Z" level=info msg="StartContainer for \"a372069098892f9eb5079c31db5be58ab7d983d4a4bac0c4dfc9f3e3e94871ac\"" Mar 13 00:46:17.946379 containerd[1719]: time="2026-03-13T00:46:17.946320133Z" level=info msg="connecting to shim a372069098892f9eb5079c31db5be58ab7d983d4a4bac0c4dfc9f3e3e94871ac" address="unix:///run/containerd/s/4b7f9a7854c93ea0c61373026c9c5e3a02260a5e4c2182758c53ff1ea668f2ab" protocol=ttrpc version=3 Mar 13 00:46:17.953361 systemd[1]: Started cri-containerd-97a380280915fbf7d7c9f4654e6307eac30371b6ae11980d519800521f5193f5.scope - libcontainer container 97a380280915fbf7d7c9f4654e6307eac30371b6ae11980d519800521f5193f5. Mar 13 00:46:17.964248 systemd[1]: Started cri-containerd-a372069098892f9eb5079c31db5be58ab7d983d4a4bac0c4dfc9f3e3e94871ac.scope - libcontainer container a372069098892f9eb5079c31db5be58ab7d983d4a4bac0c4dfc9f3e3e94871ac. 
Mar 13 00:46:17.996469 containerd[1719]: time="2026-03-13T00:46:17.996395783Z" level=info msg="StartContainer for \"a372069098892f9eb5079c31db5be58ab7d983d4a4bac0c4dfc9f3e3e94871ac\" returns successfully" Mar 13 00:46:17.997461 containerd[1719]: time="2026-03-13T00:46:17.997315676Z" level=info msg="StartContainer for \"97a380280915fbf7d7c9f4654e6307eac30371b6ae11980d519800521f5193f5\" returns successfully" Mar 13 00:46:18.752398 kubelet[3167]: I0313 00:46:18.752338 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mzkpj" podStartSLOduration=22.752163876 podStartE2EDuration="22.752163876s" podCreationTimestamp="2026-03-13 00:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:46:18.751483528 +0000 UTC m=+22.220064217" watchObservedRunningTime="2026-03-13 00:46:18.752163876 +0000 UTC m=+22.220744565" Mar 13 00:46:18.763569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2433241642.mount: Deactivated successfully. 
Mar 13 00:46:18.787090 kubelet[3167]: I0313 00:46:18.787043 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sks72" podStartSLOduration=22.786949643 podStartE2EDuration="22.786949643s" podCreationTimestamp="2026-03-13 00:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:46:18.786251121 +0000 UTC m=+22.254831809" watchObservedRunningTime="2026-03-13 00:46:18.786949643 +0000 UTC m=+22.255530329" Mar 13 00:46:21.268151 kubelet[3167]: E0313 00:46:21.267964 3167 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:48132->127.0.0.1:43503: read tcp 127.0.0.1:48132->127.0.0.1:43503: read: connection reset by peer Mar 13 00:46:21.756441 kubelet[3167]: I0313 00:46:21.755441 3167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:46:23.828837 sudo[2178]: pam_unix(sudo:session): session closed for user root Mar 13 00:46:23.928205 sshd[2165]: Connection closed by 10.200.16.10 port 45226 Mar 13 00:46:23.928661 sshd-session[2159]: pam_unix(sshd:session): session closed for user core Mar 13 00:46:23.931548 systemd[1]: sshd@6-10.200.8.22:22-10.200.16.10:45226.service: Deactivated successfully. Mar 13 00:46:23.933366 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:46:23.933584 systemd[1]: session-9.scope: Consumed 4.126s CPU time, 270.5M memory peak. Mar 13 00:46:23.935477 systemd-logind[1696]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:46:23.937090 systemd-logind[1696]: Removed session 9. 
Mar 13 00:47:57.065298 update_engine[1697]: I20260313 00:47:57.065203 1697 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 13 00:47:57.065298 update_engine[1697]: I20260313 00:47:57.065244 1697 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 13 00:47:57.065668 update_engine[1697]: I20260313 00:47:57.065377 1697 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 13 00:47:57.065826 update_engine[1697]: I20260313 00:47:57.065696 1697 omaha_request_params.cc:62] Current group set to stable Mar 13 00:47:57.065826 update_engine[1697]: I20260313 00:47:57.065799 1697 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 13 00:47:57.065826 update_engine[1697]: I20260313 00:47:57.065804 1697 update_attempter.cc:643] Scheduling an action processor start. Mar 13 00:47:57.065826 update_engine[1697]: I20260313 00:47:57.065820 1697 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 13 00:47:57.065906 update_engine[1697]: I20260313 00:47:57.065842 1697 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 13 00:47:57.065906 update_engine[1697]: I20260313 00:47:57.065881 1697 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 13 00:47:57.065906 update_engine[1697]: I20260313 00:47:57.065886 1697 omaha_request_action.cc:272] Request: Mar 13 00:47:57.065906 update_engine[1697]: Mar 13 00:47:57.065906 update_engine[1697]: Mar 13 00:47:57.065906 update_engine[1697]: Mar 13 00:47:57.065906 update_engine[1697]: Mar 13 00:47:57.065906 update_engine[1697]: Mar 13 00:47:57.065906 update_engine[1697]: Mar 13 00:47:57.065906 update_engine[1697]: Mar 13 00:47:57.065906 update_engine[1697]: Mar 13 00:47:57.065906 update_engine[1697]: I20260313 00:47:57.065891 1697 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:47:57.066440 locksmithd[1776]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 13 00:47:57.066793 update_engine[1697]: I20260313 00:47:57.066768 1697 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:47:57.067218 update_engine[1697]: I20260313 00:47:57.067198 1697 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 13 00:47:57.104226 update_engine[1697]: E20260313 00:47:57.104192 1697 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:47:57.104306 update_engine[1697]: I20260313 00:47:57.104269 1697 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 13 00:48:07.063190 update_engine[1697]: I20260313 00:48:07.063107 1697 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:48:07.063569 update_engine[1697]: I20260313 00:48:07.063216 1697 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:48:07.063569 update_engine[1697]: I20260313 00:48:07.063562 1697 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 13 00:48:07.086999 update_engine[1697]: E20260313 00:48:07.086968 1697 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:48:07.087085 update_engine[1697]: I20260313 00:48:07.087040 1697 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 13 00:48:17.061336 update_engine[1697]: I20260313 00:48:17.061270 1697 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:48:17.061676 update_engine[1697]: I20260313 00:48:17.061359 1697 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:48:17.061702 update_engine[1697]: I20260313 00:48:17.061673 1697 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 13 00:48:17.088800 update_engine[1697]: E20260313 00:48:17.088775 1697 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:48:17.088875 update_engine[1697]: I20260313 00:48:17.088840 1697 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 13 00:48:27.061804 update_engine[1697]: I20260313 00:48:27.061739 1697 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:48:27.062196 update_engine[1697]: I20260313 00:48:27.061828 1697 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:48:27.062196 update_engine[1697]: I20260313 00:48:27.062187 1697 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 13 00:48:27.156648 update_engine[1697]: E20260313 00:48:27.156603 1697 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:48:27.156810 update_engine[1697]: I20260313 00:48:27.156697 1697 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 13 00:48:27.156810 update_engine[1697]: I20260313 00:48:27.156706 1697 omaha_request_action.cc:617] Omaha request response: Mar 13 00:48:27.156810 update_engine[1697]: E20260313 00:48:27.156772 1697 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 13 00:48:27.156810 update_engine[1697]: I20260313 00:48:27.156790 1697 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 13 00:48:27.156810 update_engine[1697]: I20260313 00:48:27.156795 1697 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 13 00:48:27.156810 update_engine[1697]: I20260313 00:48:27.156800 1697 update_attempter.cc:306] Processing Done. Mar 13 00:48:27.156946 update_engine[1697]: E20260313 00:48:27.156816 1697 update_attempter.cc:619] Update failed. 
Mar 13 00:48:27.156946 update_engine[1697]: I20260313 00:48:27.156821 1697 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 13 00:48:27.156946 update_engine[1697]: I20260313 00:48:27.156826 1697 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 13 00:48:27.156946 update_engine[1697]: I20260313 00:48:27.156832 1697 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 13 00:48:27.156946 update_engine[1697]: I20260313 00:48:27.156916 1697 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 13 00:48:27.156946 update_engine[1697]: I20260313 00:48:27.156940 1697 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 13 00:48:27.157066 update_engine[1697]: I20260313 00:48:27.156945 1697 omaha_request_action.cc:272] Request: Mar 13 00:48:27.157066 update_engine[1697]: Mar 13 00:48:27.157066 update_engine[1697]: Mar 13 00:48:27.157066 update_engine[1697]: Mar 13 00:48:27.157066 update_engine[1697]: Mar 13 00:48:27.157066 update_engine[1697]: Mar 13 00:48:27.157066 update_engine[1697]: Mar 13 00:48:27.157066 update_engine[1697]: I20260313 00:48:27.156950 1697 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:48:27.157066 update_engine[1697]: I20260313 00:48:27.156970 1697 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:48:27.157433 update_engine[1697]: I20260313 00:48:27.157287 1697 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 13 00:48:27.157573 locksmithd[1776]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 13 00:48:27.237381 update_engine[1697]: E20260313 00:48:27.237344 1697 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:48:27.237474 update_engine[1697]: I20260313 00:48:27.237411 1697 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 13 00:48:27.237474 update_engine[1697]: I20260313 00:48:27.237419 1697 omaha_request_action.cc:617] Omaha request response: Mar 13 00:48:27.237474 update_engine[1697]: I20260313 00:48:27.237426 1697 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 13 00:48:27.237474 update_engine[1697]: I20260313 00:48:27.237432 1697 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 13 00:48:27.237474 update_engine[1697]: I20260313 00:48:27.237435 1697 update_attempter.cc:306] Processing Done. Mar 13 00:48:27.237474 update_engine[1697]: I20260313 00:48:27.237442 1697 update_attempter.cc:310] Error event sent. Mar 13 00:48:27.237474 update_engine[1697]: I20260313 00:48:27.237448 1697 update_check_scheduler.cc:74] Next update check in 47m14s Mar 13 00:48:27.237798 locksmithd[1776]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 13 00:49:23.574324 systemd[1]: Started sshd@7-10.200.8.22:22-10.200.16.10:56924.service - OpenSSH per-connection server daemon (10.200.16.10:56924). Mar 13 00:49:24.104843 sshd[4673]: Accepted publickey for core from 10.200.16.10 port 56924 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:24.105900 sshd-session[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:24.110035 systemd-logind[1696]: New session 10 of user core. 
Mar 13 00:49:24.113273 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:49:24.464558 sshd[4676]: Connection closed by 10.200.16.10 port 56924 Mar 13 00:49:24.465027 sshd-session[4673]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:24.467652 systemd[1]: sshd@7-10.200.8.22:22-10.200.16.10:56924.service: Deactivated successfully. Mar 13 00:49:24.469545 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:49:24.470880 systemd-logind[1696]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:49:24.472072 systemd-logind[1696]: Removed session 10. Mar 13 00:49:29.581887 systemd[1]: Started sshd@8-10.200.8.22:22-10.200.16.10:56938.service - OpenSSH per-connection server daemon (10.200.16.10:56938). Mar 13 00:49:30.112639 sshd[4691]: Accepted publickey for core from 10.200.16.10 port 56938 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:30.113698 sshd-session[4691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:30.117801 systemd-logind[1696]: New session 11 of user core. Mar 13 00:49:30.125267 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:49:30.462884 sshd[4694]: Connection closed by 10.200.16.10 port 56938 Mar 13 00:49:30.464217 sshd-session[4691]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:30.467398 systemd-logind[1696]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:49:30.467537 systemd[1]: sshd@8-10.200.8.22:22-10.200.16.10:56938.service: Deactivated successfully. Mar 13 00:49:30.469298 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:49:30.470798 systemd-logind[1696]: Removed session 11. Mar 13 00:49:35.573034 systemd[1]: Started sshd@9-10.200.8.22:22-10.200.16.10:55356.service - OpenSSH per-connection server daemon (10.200.16.10:55356). 
Mar 13 00:49:36.103558 sshd[4707]: Accepted publickey for core from 10.200.16.10 port 55356 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:36.104620 sshd-session[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:36.108166 systemd-logind[1696]: New session 12 of user core. Mar 13 00:49:36.115238 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:49:36.455254 sshd[4710]: Connection closed by 10.200.16.10 port 55356 Mar 13 00:49:36.456285 sshd-session[4707]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:36.459544 systemd[1]: sshd@9-10.200.8.22:22-10.200.16.10:55356.service: Deactivated successfully. Mar 13 00:49:36.459764 systemd-logind[1696]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:49:36.461540 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:49:36.463096 systemd-logind[1696]: Removed session 12. Mar 13 00:49:41.573345 systemd[1]: Started sshd@10-10.200.8.22:22-10.200.16.10:52320.service - OpenSSH per-connection server daemon (10.200.16.10:52320). Mar 13 00:49:42.105101 sshd[4724]: Accepted publickey for core from 10.200.16.10 port 52320 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:42.106185 sshd-session[4724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:42.110516 systemd-logind[1696]: New session 13 of user core. Mar 13 00:49:42.125247 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:49:42.454624 sshd[4727]: Connection closed by 10.200.16.10 port 52320 Mar 13 00:49:42.455284 sshd-session[4724]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:42.458337 systemd[1]: sshd@10-10.200.8.22:22-10.200.16.10:52320.service: Deactivated successfully. Mar 13 00:49:42.459960 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:49:42.460902 systemd-logind[1696]: Session 13 logged out. 
Waiting for processes to exit. Mar 13 00:49:42.461917 systemd-logind[1696]: Removed session 13. Mar 13 00:49:47.566722 systemd[1]: Started sshd@11-10.200.8.22:22-10.200.16.10:52336.service - OpenSSH per-connection server daemon (10.200.16.10:52336). Mar 13 00:49:48.099157 sshd[4741]: Accepted publickey for core from 10.200.16.10 port 52336 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:48.100262 sshd-session[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:48.106285 systemd-logind[1696]: New session 14 of user core. Mar 13 00:49:48.114295 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:49:48.447878 sshd[4747]: Connection closed by 10.200.16.10 port 52336 Mar 13 00:49:48.448343 sshd-session[4741]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:48.451386 systemd[1]: sshd@11-10.200.8.22:22-10.200.16.10:52336.service: Deactivated successfully. Mar 13 00:49:48.453045 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:49:48.453698 systemd-logind[1696]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:49:48.454662 systemd-logind[1696]: Removed session 14. Mar 13 00:49:48.560492 systemd[1]: Started sshd@12-10.200.8.22:22-10.200.16.10:52342.service - OpenSSH per-connection server daemon (10.200.16.10:52342). Mar 13 00:49:49.093097 sshd[4760]: Accepted publickey for core from 10.200.16.10 port 52342 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:49.094222 sshd-session[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:49.098170 systemd-logind[1696]: New session 15 of user core. Mar 13 00:49:49.102264 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 13 00:49:49.468226 sshd[4763]: Connection closed by 10.200.16.10 port 52342 Mar 13 00:49:49.469280 sshd-session[4760]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:49.471795 systemd[1]: sshd@12-10.200.8.22:22-10.200.16.10:52342.service: Deactivated successfully. Mar 13 00:49:49.473430 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:49:49.475082 systemd-logind[1696]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:49:49.475961 systemd-logind[1696]: Removed session 15. Mar 13 00:49:49.584816 systemd[1]: Started sshd@13-10.200.8.22:22-10.200.16.10:52356.service - OpenSSH per-connection server daemon (10.200.16.10:52356). Mar 13 00:49:50.115498 sshd[4773]: Accepted publickey for core from 10.200.16.10 port 52356 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:50.116649 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:50.120774 systemd-logind[1696]: New session 16 of user core. Mar 13 00:49:50.131251 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 13 00:49:50.461992 sshd[4776]: Connection closed by 10.200.16.10 port 52356 Mar 13 00:49:50.463273 sshd-session[4773]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:50.466174 systemd-logind[1696]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:49:50.466669 systemd[1]: sshd@13-10.200.8.22:22-10.200.16.10:52356.service: Deactivated successfully. Mar 13 00:49:50.468431 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:49:50.469786 systemd-logind[1696]: Removed session 16. Mar 13 00:49:55.578308 systemd[1]: Started sshd@14-10.200.8.22:22-10.200.16.10:45060.service - OpenSSH per-connection server daemon (10.200.16.10:45060). 
Mar 13 00:49:56.108036 sshd[4792]: Accepted publickey for core from 10.200.16.10 port 45060 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:56.108515 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:56.113060 systemd-logind[1696]: New session 17 of user core. Mar 13 00:49:56.118300 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:49:56.452067 sshd[4795]: Connection closed by 10.200.16.10 port 45060 Mar 13 00:49:56.452553 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:56.455703 systemd[1]: sshd@14-10.200.8.22:22-10.200.16.10:45060.service: Deactivated successfully. Mar 13 00:49:56.457428 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:49:56.458170 systemd-logind[1696]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:49:56.459369 systemd-logind[1696]: Removed session 17. Mar 13 00:49:56.563682 systemd[1]: Started sshd@15-10.200.8.22:22-10.200.16.10:45076.service - OpenSSH per-connection server daemon (10.200.16.10:45076). Mar 13 00:49:57.107507 sshd[4807]: Accepted publickey for core from 10.200.16.10 port 45076 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:57.107921 sshd-session[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:57.111764 systemd-logind[1696]: New session 18 of user core. Mar 13 00:49:57.119245 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 13 00:49:57.520963 sshd[4812]: Connection closed by 10.200.16.10 port 45076 Mar 13 00:49:57.521453 sshd-session[4807]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:57.524567 systemd[1]: sshd@15-10.200.8.22:22-10.200.16.10:45076.service: Deactivated successfully. Mar 13 00:49:57.526172 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:49:57.527045 systemd-logind[1696]: Session 18 logged out. 
Waiting for processes to exit. Mar 13 00:49:57.528465 systemd-logind[1696]: Removed session 18. Mar 13 00:49:57.630571 systemd[1]: Started sshd@16-10.200.8.22:22-10.200.16.10:45092.service - OpenSSH per-connection server daemon (10.200.16.10:45092). Mar 13 00:49:58.162016 sshd[4822]: Accepted publickey for core from 10.200.16.10 port 45092 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:58.163047 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:58.167512 systemd-logind[1696]: New session 19 of user core. Mar 13 00:49:58.171239 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:49:58.834508 sshd[4825]: Connection closed by 10.200.16.10 port 45092 Mar 13 00:49:58.835006 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:58.838888 systemd[1]: sshd@16-10.200.8.22:22-10.200.16.10:45092.service: Deactivated successfully. Mar 13 00:49:58.840609 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:49:58.841487 systemd-logind[1696]: Session 19 logged out. Waiting for processes to exit. Mar 13 00:49:58.842779 systemd-logind[1696]: Removed session 19. Mar 13 00:49:58.944603 systemd[1]: Started sshd@17-10.200.8.22:22-10.200.16.10:45094.service - OpenSSH per-connection server daemon (10.200.16.10:45094). Mar 13 00:49:59.480186 sshd[4842]: Accepted publickey for core from 10.200.16.10 port 45094 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:49:59.481253 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:59.485190 systemd-logind[1696]: New session 20 of user core. Mar 13 00:49:59.494244 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 13 00:49:59.917059 sshd[4845]: Connection closed by 10.200.16.10 port 45094 Mar 13 00:49:59.917556 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:59.920424 systemd[1]: sshd@17-10.200.8.22:22-10.200.16.10:45094.service: Deactivated successfully. Mar 13 00:49:59.922325 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:49:59.923173 systemd-logind[1696]: Session 20 logged out. Waiting for processes to exit. Mar 13 00:49:59.925070 systemd-logind[1696]: Removed session 20. Mar 13 00:50:00.036058 systemd[1]: Started sshd@18-10.200.8.22:22-10.200.16.10:55700.service - OpenSSH per-connection server daemon (10.200.16.10:55700). Mar 13 00:50:00.575832 sshd[4857]: Accepted publickey for core from 10.200.16.10 port 55700 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:00.577001 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:00.581204 systemd-logind[1696]: New session 21 of user core. Mar 13 00:50:00.588260 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 13 00:50:00.921325 sshd[4860]: Connection closed by 10.200.16.10 port 55700 Mar 13 00:50:00.922275 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:00.925529 systemd-logind[1696]: Session 21 logged out. Waiting for processes to exit. Mar 13 00:50:00.925669 systemd[1]: sshd@18-10.200.8.22:22-10.200.16.10:55700.service: Deactivated successfully. Mar 13 00:50:00.927446 systemd[1]: session-21.scope: Deactivated successfully. Mar 13 00:50:00.929212 systemd-logind[1696]: Removed session 21. Mar 13 00:50:06.033345 systemd[1]: Started sshd@19-10.200.8.22:22-10.200.16.10:55708.service - OpenSSH per-connection server daemon (10.200.16.10:55708). 
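Each SSH cycle above follows the same lifecycle: sshd logs "Accepted publickey" for a client IP/port, systemd starts a per-session scope, and later "Connection closed" plus scope deactivation mark the end. A minimal sketch (not part of the log; the line format is assumed from the excerpt above) that pairs accept/close events by client address and port:

```python
import re

# Assumed journal line fragments, modeled on the sshd entries above.
ACCEPT_RE = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")
CLOSE_RE = re.compile(r"Connection closed by (\S+) port (\d+)")

def pair_sessions(lines):
    """Return (user, ip, port) tuples for SSH sessions that were both
    opened and later closed within the given log lines."""
    open_sessions = {}  # (ip, port) -> user
    completed = []
    for line in lines:
        m = ACCEPT_RE.search(line)
        if m:
            user, ip, port = m.groups()
            open_sessions[(ip, port)] = user
            continue
        m = CLOSE_RE.search(line)
        if m:
            ip, port = m.groups()
            user = open_sessions.pop((ip, port), None)
            if user is not None:
                completed.append((user, ip, port))
    return completed
```

Run against the entries above, every session pairs cleanly (each "Accepted publickey" on a port has a matching "Connection closed" on the same port), which is why systemd reports every `sshd@N-...service` as "Deactivated successfully" rather than timing out.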
Mar 13 00:50:06.563338 sshd[4874]: Accepted publickey for core from 10.200.16.10 port 55708 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:06.564420 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:06.568881 systemd-logind[1696]: New session 22 of user core. Mar 13 00:50:06.572244 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 13 00:50:06.909653 sshd[4877]: Connection closed by 10.200.16.10 port 55708 Mar 13 00:50:06.910283 sshd-session[4874]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:06.913408 systemd[1]: sshd@19-10.200.8.22:22-10.200.16.10:55708.service: Deactivated successfully. Mar 13 00:50:06.915245 systemd[1]: session-22.scope: Deactivated successfully. Mar 13 00:50:06.915981 systemd-logind[1696]: Session 22 logged out. Waiting for processes to exit. Mar 13 00:50:06.916953 systemd-logind[1696]: Removed session 22. Mar 13 00:50:12.034026 systemd[1]: Started sshd@20-10.200.8.22:22-10.200.16.10:36412.service - OpenSSH per-connection server daemon (10.200.16.10:36412). Mar 13 00:50:12.566144 sshd[4890]: Accepted publickey for core from 10.200.16.10 port 36412 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:12.567016 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:12.571002 systemd-logind[1696]: New session 23 of user core. Mar 13 00:50:12.579259 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 13 00:50:12.910230 sshd[4893]: Connection closed by 10.200.16.10 port 36412 Mar 13 00:50:12.911281 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:12.914520 systemd[1]: sshd@20-10.200.8.22:22-10.200.16.10:36412.service: Deactivated successfully. Mar 13 00:50:12.916280 systemd[1]: session-23.scope: Deactivated successfully. Mar 13 00:50:12.917022 systemd-logind[1696]: Session 23 logged out. 
Waiting for processes to exit. Mar 13 00:50:12.918322 systemd-logind[1696]: Removed session 23. Mar 13 00:50:13.021503 systemd[1]: Started sshd@21-10.200.8.22:22-10.200.16.10:36420.service - OpenSSH per-connection server daemon (10.200.16.10:36420). Mar 13 00:50:13.562062 sshd[4906]: Accepted publickey for core from 10.200.16.10 port 36420 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI Mar 13 00:50:13.562475 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:13.566166 systemd-logind[1696]: New session 24 of user core. Mar 13 00:50:13.571258 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 13 00:50:15.060176 containerd[1719]: time="2026-03-13T00:50:15.059924029Z" level=info msg="StopContainer for \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\" with timeout 30 (s)" Mar 13 00:50:15.061872 containerd[1719]: time="2026-03-13T00:50:15.061782671Z" level=info msg="Stop container \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\" with signal terminated" Mar 13 00:50:15.079520 containerd[1719]: time="2026-03-13T00:50:15.079481248Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:50:15.083439 systemd[1]: cri-containerd-49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2.scope: Deactivated successfully. 
Mar 13 00:50:15.085324 containerd[1719]: time="2026-03-13T00:50:15.084962839Z" level=info msg="received container exit event container_id:\"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\" id:\"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\" pid:3644 exited_at:{seconds:1773363015 nanos:84362819}" Mar 13 00:50:15.087217 containerd[1719]: time="2026-03-13T00:50:15.087193325Z" level=info msg="StopContainer for \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\" with timeout 2 (s)" Mar 13 00:50:15.087476 containerd[1719]: time="2026-03-13T00:50:15.087461935Z" level=info msg="Stop container \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\" with signal terminated" Mar 13 00:50:15.096793 systemd-networkd[1343]: lxc_health: Link DOWN Mar 13 00:50:15.096803 systemd-networkd[1343]: lxc_health: Lost carrier Mar 13 00:50:15.113759 containerd[1719]: time="2026-03-13T00:50:15.113610943Z" level=info msg="received container exit event container_id:\"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\" id:\"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\" pid:3785 exited_at:{seconds:1773363015 nanos:113450659}" Mar 13 00:50:15.114324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2-rootfs.mount: Deactivated successfully. Mar 13 00:50:15.115949 systemd[1]: cri-containerd-ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2.scope: Deactivated successfully. Mar 13 00:50:15.116316 systemd[1]: cri-containerd-ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2.scope: Consumed 5.651s CPU time, 140.7M memory peak, 128K read from disk, 13.3M written to disk. Mar 13 00:50:15.132603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2-rootfs.mount: Deactivated successfully. 
Mar 13 00:50:15.194070 containerd[1719]: time="2026-03-13T00:50:15.194048292Z" level=info msg="StopContainer for \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\" returns successfully" Mar 13 00:50:15.194818 containerd[1719]: time="2026-03-13T00:50:15.194788636Z" level=info msg="StopPodSandbox for \"10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df\"" Mar 13 00:50:15.194880 containerd[1719]: time="2026-03-13T00:50:15.194843836Z" level=info msg="Container to stop \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:50:15.199750 containerd[1719]: time="2026-03-13T00:50:15.199661794Z" level=info msg="StopContainer for \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\" returns successfully" Mar 13 00:50:15.201006 containerd[1719]: time="2026-03-13T00:50:15.200450395Z" level=info msg="StopPodSandbox for \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\"" Mar 13 00:50:15.201099 containerd[1719]: time="2026-03-13T00:50:15.201045731Z" level=info msg="Container to stop \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:50:15.201099 containerd[1719]: time="2026-03-13T00:50:15.201061354Z" level=info msg="Container to stop \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:50:15.201099 containerd[1719]: time="2026-03-13T00:50:15.201070365Z" level=info msg="Container to stop \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:50:15.201099 containerd[1719]: time="2026-03-13T00:50:15.201079657Z" level=info msg="Container to stop \"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:50:15.201099 containerd[1719]: time="2026-03-13T00:50:15.201089897Z" level=info msg="Container to stop \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:50:15.204410 systemd[1]: cri-containerd-10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df.scope: Deactivated successfully. Mar 13 00:50:15.208531 systemd[1]: cri-containerd-5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb.scope: Deactivated successfully. Mar 13 00:50:15.210362 containerd[1719]: time="2026-03-13T00:50:15.210328926Z" level=info msg="received sandbox exit event container_id:\"10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df\" id:\"10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df\" exit_status:137 exited_at:{seconds:1773363015 nanos:210082416}" monitor_name=podsandbox Mar 13 00:50:15.210431 containerd[1719]: time="2026-03-13T00:50:15.210329612Z" level=info msg="received sandbox exit event container_id:\"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" id:\"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" exit_status:137 exited_at:{seconds:1773363015 nanos:210043048}" monitor_name=podsandbox Mar 13 00:50:15.228608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb-rootfs.mount: Deactivated successfully. Mar 13 00:50:15.234942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df-rootfs.mount: Deactivated successfully. 
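Both sandbox exit events above carry `exit_status:137` and an `exited_at:{seconds nanos}` pair. A small sketch for interpreting those fields (assuming, as the matching "Mar 13 00:50:15" journal timestamps suggest, that `exited_at` is a Unix epoch time; 137 is the conventional 128 + signal number encoding, i.e. SIGKILL):

```python
from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> str:
    """Render a containerd-style exited_at {seconds, nanos} pair as an
    ISO-8601 UTC timestamp."""
    ts = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
    return ts.strftime("%Y-%m-%dT%H:%M:%S.%fZ")

def describe_exit(status: int) -> str:
    """Decode a wait(2)-style exit status: values above 128 mean the
    process was terminated by signal (status - 128)."""
    if status > 128:
        return f"killed by signal {status - 128}"
    return f"exited with code {status}"
```

For the first event, `exited_at_to_utc(1773363015, 210082416)` gives `2026-03-13T00:50:15.210082Z`, matching the journal line's own timestamp, and `describe_exit(137)` reads "killed by signal 9": the sandbox containers were SIGKILLed as part of the pod teardown rather than exiting on their own.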
Mar 13 00:50:15.286228 containerd[1719]: time="2026-03-13T00:50:15.286056438Z" level=info msg="shim disconnected" id=10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df namespace=k8s.io Mar 13 00:50:15.286425 containerd[1719]: time="2026-03-13T00:50:15.286214588Z" level=warning msg="cleaning up after shim disconnected" id=10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df namespace=k8s.io Mar 13 00:50:15.286425 containerd[1719]: time="2026-03-13T00:50:15.286365364Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:50:15.288047 containerd[1719]: time="2026-03-13T00:50:15.287995367Z" level=info msg="shim disconnected" id=5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb namespace=k8s.io Mar 13 00:50:15.288130 containerd[1719]: time="2026-03-13T00:50:15.288024406Z" level=warning msg="cleaning up after shim disconnected" id=5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb namespace=k8s.io Mar 13 00:50:15.288130 containerd[1719]: time="2026-03-13T00:50:15.288058529Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:50:15.298690 containerd[1719]: time="2026-03-13T00:50:15.298563435Z" level=info msg="received sandbox container exit event sandbox_id:\"10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df\" exit_status:137 exited_at:{seconds:1773363015 nanos:210082416}" monitor_name=criService Mar 13 00:50:15.301014 containerd[1719]: time="2026-03-13T00:50:15.299221544Z" level=info msg="TearDown network for sandbox \"10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df\" successfully" Mar 13 00:50:15.301014 containerd[1719]: time="2026-03-13T00:50:15.299240482Z" level=info msg="StopPodSandbox for \"10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df\" returns successfully" Mar 13 00:50:15.301540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10fff2b9c81b95cc1f29fcef94280d6423dcf2cbbc02773a0ca4b2af490f17df-shm.mount: Deactivated 
successfully. Mar 13 00:50:15.303824 containerd[1719]: time="2026-03-13T00:50:15.303775898Z" level=info msg="received sandbox container exit event sandbox_id:\"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" exit_status:137 exited_at:{seconds:1773363015 nanos:210043048}" monitor_name=criService Mar 13 00:50:15.304962 containerd[1719]: time="2026-03-13T00:50:15.304791025Z" level=info msg="TearDown network for sandbox \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" successfully" Mar 13 00:50:15.304962 containerd[1719]: time="2026-03-13T00:50:15.304811202Z" level=info msg="StopPodSandbox for \"5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb\" returns successfully" Mar 13 00:50:15.407140 kubelet[3167]: I0313 00:50:15.406234 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-hostproc" (OuterVolumeSpecName: "hostproc") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:50:15.407140 kubelet[3167]: I0313 00:50:15.406249 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-hostproc\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407140 kubelet[3167]: I0313 00:50:15.406284 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3da219c9-c922-4d93-81de-b9403855675c-hubble-tls\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407140 kubelet[3167]: I0313 00:50:15.406305 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3da219c9-c922-4d93-81de-b9403855675c-clustermesh-secrets\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407140 kubelet[3167]: I0313 00:50:15.406319 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3da219c9-c922-4d93-81de-b9403855675c-cilium-config-path\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407140 kubelet[3167]: I0313 00:50:15.406331 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96clm\" (UniqueName: \"kubernetes.io/projected/3da219c9-c922-4d93-81de-b9403855675c-kube-api-access-96clm\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407537 kubelet[3167]: I0313 00:50:15.406344 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-bpf-maps\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407537 kubelet[3167]: I0313 00:50:15.406359 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-host-proc-sys-kernel\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407537 kubelet[3167]: I0313 00:50:15.406376 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-lib-modules\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407537 kubelet[3167]: I0313 00:50:15.406390 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cni-path\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407537 kubelet[3167]: I0313 00:50:15.406409 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cilium-cgroup\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407537 kubelet[3167]: I0313 00:50:15.406425 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-host-proc-sys-net\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407684 kubelet[3167]: I0313 00:50:15.406441 3167 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cilium-run\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407684 kubelet[3167]: I0313 00:50:15.406458 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdn57\" (UniqueName: \"kubernetes.io/projected/15fba49b-bd38-41e3-971b-f31fbc2964df-kube-api-access-rdn57\") pod \"15fba49b-bd38-41e3-971b-f31fbc2964df\" (UID: \"15fba49b-bd38-41e3-971b-f31fbc2964df\") " Mar 13 00:50:15.407684 kubelet[3167]: I0313 00:50:15.406473 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-etc-cni-netd\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407684 kubelet[3167]: I0313 00:50:15.406488 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-xtables-lock\") pod \"3da219c9-c922-4d93-81de-b9403855675c\" (UID: \"3da219c9-c922-4d93-81de-b9403855675c\") " Mar 13 00:50:15.407684 kubelet[3167]: I0313 00:50:15.406505 3167 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15fba49b-bd38-41e3-971b-f31fbc2964df-cilium-config-path\") pod \"15fba49b-bd38-41e3-971b-f31fbc2964df\" (UID: \"15fba49b-bd38-41e3-971b-f31fbc2964df\") " Mar 13 00:50:15.407684 kubelet[3167]: I0313 00:50:15.406536 3167 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-hostproc\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\"" Mar 13 00:50:15.409167 
kubelet[3167]: I0313 00:50:15.409140 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da219c9-c922-4d93-81de-b9403855675c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:50:15.409237 kubelet[3167]: I0313 00:50:15.409186 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cni-path" (OuterVolumeSpecName: "cni-path") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:50:15.409306 kubelet[3167]: I0313 00:50:15.409292 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15fba49b-bd38-41e3-971b-f31fbc2964df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "15fba49b-bd38-41e3-971b-f31fbc2964df" (UID: "15fba49b-bd38-41e3-971b-f31fbc2964df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:50:15.409362 kubelet[3167]: I0313 00:50:15.409352 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:50:15.409405 kubelet[3167]: I0313 00:50:15.409397 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:50:15.409444 kubelet[3167]: I0313 00:50:15.409437 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:50:15.410266 kubelet[3167]: I0313 00:50:15.410245 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:50:15.411003 kubelet[3167]: I0313 00:50:15.410245 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:50:15.411084 kubelet[3167]: I0313 00:50:15.410262 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:50:15.411133 kubelet[3167]: I0313 00:50:15.410276 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:50:15.411176 kubelet[3167]: I0313 00:50:15.410289 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:50:15.413608 kubelet[3167]: I0313 00:50:15.413584 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fba49b-bd38-41e3-971b-f31fbc2964df-kube-api-access-rdn57" (OuterVolumeSpecName: "kube-api-access-rdn57") pod "15fba49b-bd38-41e3-971b-f31fbc2964df" (UID: "15fba49b-bd38-41e3-971b-f31fbc2964df"). InnerVolumeSpecName "kube-api-access-rdn57". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:50:15.413857 kubelet[3167]: I0313 00:50:15.413832 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da219c9-c922-4d93-81de-b9403855675c-kube-api-access-96clm" (OuterVolumeSpecName: "kube-api-access-96clm") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "kube-api-access-96clm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:50:15.413920 kubelet[3167]: I0313 00:50:15.413909 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da219c9-c922-4d93-81de-b9403855675c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:50:15.414143 kubelet[3167]: I0313 00:50:15.414124 3167 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da219c9-c922-4d93-81de-b9403855675c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3da219c9-c922-4d93-81de-b9403855675c" (UID: "3da219c9-c922-4d93-81de-b9403855675c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 13 00:50:15.506841 kubelet[3167]: I0313 00:50:15.506814 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cilium-cgroup\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.506841 kubelet[3167]: I0313 00:50:15.506833 3167 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-host-proc-sys-net\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.506841 kubelet[3167]: I0313 00:50:15.506843 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cilium-run\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.506956 kubelet[3167]: I0313 00:50:15.506851 3167 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rdn57\" (UniqueName: \"kubernetes.io/projected/15fba49b-bd38-41e3-971b-f31fbc2964df-kube-api-access-rdn57\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.506956 kubelet[3167]: I0313 00:50:15.506859 3167 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-etc-cni-netd\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.506956 kubelet[3167]: I0313 00:50:15.506867 3167 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-xtables-lock\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.506956 kubelet[3167]: I0313 00:50:15.506874 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15fba49b-bd38-41e3-971b-f31fbc2964df-cilium-config-path\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.506956 kubelet[3167]: I0313 00:50:15.506881 3167 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3da219c9-c922-4d93-81de-b9403855675c-hubble-tls\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.506956 kubelet[3167]: I0313 00:50:15.506890 3167 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3da219c9-c922-4d93-81de-b9403855675c-clustermesh-secrets\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.506956 kubelet[3167]: I0313 00:50:15.506899 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3da219c9-c922-4d93-81de-b9403855675c-cilium-config-path\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.506956 kubelet[3167]: I0313 00:50:15.506907 3167 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-96clm\" (UniqueName: \"kubernetes.io/projected/3da219c9-c922-4d93-81de-b9403855675c-kube-api-access-96clm\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.507096 kubelet[3167]: I0313 00:50:15.506916 3167 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-bpf-maps\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.507096 kubelet[3167]: I0313 00:50:15.506925 3167 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-host-proc-sys-kernel\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.507096 kubelet[3167]: I0313 00:50:15.506933 3167 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-lib-modules\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:15.507096 kubelet[3167]: I0313 00:50:15.506941 3167 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3da219c9-c922-4d93-81de-b9403855675c-cni-path\") on node \"ci-4459.2.4-n-0cdf9482ba\" DevicePath \"\""
Mar 13 00:50:16.112742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5910ecd4690b02002896b59a41139153a19abb76a4d5f5c6799169eb4ca41ceb-shm.mount: Deactivated successfully.
Mar 13 00:50:16.112832 systemd[1]: var-lib-kubelet-pods-3da219c9\x2dc922\x2d4d93\x2d81de\x2db9403855675c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d96clm.mount: Deactivated successfully.
Mar 13 00:50:16.112898 systemd[1]: var-lib-kubelet-pods-3da219c9\x2dc922\x2d4d93\x2d81de\x2db9403855675c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 13 00:50:16.112953 systemd[1]: var-lib-kubelet-pods-15fba49b\x2dbd38\x2d41e3\x2d971b\x2df31fbc2964df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drdn57.mount: Deactivated successfully.
Mar 13 00:50:16.113005 systemd[1]: var-lib-kubelet-pods-3da219c9\x2dc922\x2d4d93\x2d81de\x2db9403855675c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 13 00:50:16.160479 kubelet[3167]: I0313 00:50:16.158489 3167 scope.go:117] "RemoveContainer" containerID="49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2"
Mar 13 00:50:16.162703 systemd[1]: Removed slice kubepods-besteffort-pod15fba49b_bd38_41e3_971b_f31fbc2964df.slice - libcontainer container kubepods-besteffort-pod15fba49b_bd38_41e3_971b_f31fbc2964df.slice.
Mar 13 00:50:16.163983 containerd[1719]: time="2026-03-13T00:50:16.163951370Z" level=info msg="RemoveContainer for \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\""
Mar 13 00:50:16.173425 systemd[1]: Removed slice kubepods-burstable-pod3da219c9_c922_4d93_81de_b9403855675c.slice - libcontainer container kubepods-burstable-pod3da219c9_c922_4d93_81de_b9403855675c.slice.
Mar 13 00:50:16.174262 systemd[1]: kubepods-burstable-pod3da219c9_c922_4d93_81de_b9403855675c.slice: Consumed 5.721s CPU time, 141.1M memory peak, 128K read from disk, 13.3M written to disk.
Mar 13 00:50:16.176415 containerd[1719]: time="2026-03-13T00:50:16.176390657Z" level=info msg="RemoveContainer for \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\" returns successfully"
Mar 13 00:50:16.177248 kubelet[3167]: I0313 00:50:16.177222 3167 scope.go:117] "RemoveContainer" containerID="49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2"
Mar 13 00:50:16.178017 containerd[1719]: time="2026-03-13T00:50:16.177983136Z" level=error msg="ContainerStatus for \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\": not found"
Mar 13 00:50:16.178248 kubelet[3167]: E0313 00:50:16.178227 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\": not found" containerID="49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2"
Mar 13 00:50:16.178315 kubelet[3167]: I0313 00:50:16.178256 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2"} err="failed to get container status \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"49a412ff2c161ea5baebedbbb70d0881cf629b24b71e1288a4c250f57ab060d2\": not found"
Mar 13 00:50:16.178348 kubelet[3167]: I0313 00:50:16.178315 3167 scope.go:117] "RemoveContainer" containerID="ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2"
Mar 13 00:50:16.182081 containerd[1719]: time="2026-03-13T00:50:16.182061719Z" level=info msg="RemoveContainer for \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\""
Mar 13 00:50:16.190527 containerd[1719]: time="2026-03-13T00:50:16.190494650Z" level=info msg="RemoveContainer for \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\" returns successfully"
Mar 13 00:50:16.190671 kubelet[3167]: I0313 00:50:16.190646 3167 scope.go:117] "RemoveContainer" containerID="5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07"
Mar 13 00:50:16.192285 containerd[1719]: time="2026-03-13T00:50:16.192228097Z" level=info msg="RemoveContainer for \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\""
Mar 13 00:50:16.200424 containerd[1719]: time="2026-03-13T00:50:16.200388848Z" level=info msg="RemoveContainer for \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\" returns successfully"
Mar 13 00:50:16.200550 kubelet[3167]: I0313 00:50:16.200533 3167 scope.go:117] "RemoveContainer" containerID="f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757"
Mar 13 00:50:16.202084 containerd[1719]: time="2026-03-13T00:50:16.202063055Z" level=info msg="RemoveContainer for \"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\""
Mar 13 00:50:16.209579 containerd[1719]: time="2026-03-13T00:50:16.209520445Z" level=info msg="RemoveContainer for \"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\" returns successfully"
Mar 13 00:50:16.210235 kubelet[3167]: I0313 00:50:16.210208 3167 scope.go:117] "RemoveContainer" containerID="f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec"
Mar 13 00:50:16.213670 containerd[1719]: time="2026-03-13T00:50:16.213174282Z" level=info msg="RemoveContainer for \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\""
Mar 13 00:50:16.221075 containerd[1719]: time="2026-03-13T00:50:16.221052624Z" level=info msg="RemoveContainer for \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\" returns successfully"
Mar 13 00:50:16.221218 kubelet[3167]: I0313 00:50:16.221197 3167 scope.go:117] "RemoveContainer" containerID="4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171"
Mar 13 00:50:16.222235 containerd[1719]: time="2026-03-13T00:50:16.222213254Z" level=info msg="RemoveContainer for \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\""
Mar 13 00:50:16.228184 containerd[1719]: time="2026-03-13T00:50:16.228163107Z" level=info msg="RemoveContainer for \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\" returns successfully"
Mar 13 00:50:16.228362 kubelet[3167]: I0313 00:50:16.228305 3167 scope.go:117] "RemoveContainer" containerID="ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2"
Mar 13 00:50:16.228518 containerd[1719]: time="2026-03-13T00:50:16.228491334Z" level=error msg="ContainerStatus for \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\": not found"
Mar 13 00:50:16.228612 kubelet[3167]: E0313 00:50:16.228595 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\": not found" containerID="ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2"
Mar 13 00:50:16.228647 kubelet[3167]: I0313 00:50:16.228635 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2"} err="failed to get container status \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad1e0490dbe46704c67963e6912abac089e6ffd6fe0af208e30006ea883c6ce2\": not found"
Mar 13 00:50:16.228703 kubelet[3167]: I0313 00:50:16.228651 3167 scope.go:117] "RemoveContainer" containerID="5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07"
Mar 13 00:50:16.228816 containerd[1719]: time="2026-03-13T00:50:16.228793109Z" level=error msg="ContainerStatus for \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\": not found"
Mar 13 00:50:16.228898 kubelet[3167]: E0313 00:50:16.228883 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\": not found" containerID="5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07"
Mar 13 00:50:16.228933 kubelet[3167]: I0313 00:50:16.228900 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07"} err="failed to get container status \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\": rpc error: code = NotFound desc = an error occurred when try to find container \"5dbde9c2f604b67c148ad2e837d880274f1787a95e195c0bbef0dd6c28e66b07\": not found"
Mar 13 00:50:16.228933 kubelet[3167]: I0313 00:50:16.228912 3167 scope.go:117] "RemoveContainer" containerID="f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757"
Mar 13 00:50:16.229126 containerd[1719]: time="2026-03-13T00:50:16.229038523Z" level=error msg="ContainerStatus for \"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\": not found"
Mar 13 00:50:16.229154 kubelet[3167]: E0313 00:50:16.229130 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\": not found" containerID="f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757"
Mar 13 00:50:16.229154 kubelet[3167]: I0313 00:50:16.229146 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757"} err="failed to get container status \"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7ff70d54c782eced7777010ef0b9bb1f19b52bf26be0f81b879013d3f7dd757\": not found"
Mar 13 00:50:16.229208 kubelet[3167]: I0313 00:50:16.229159 3167 scope.go:117] "RemoveContainer" containerID="f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec"
Mar 13 00:50:16.229328 containerd[1719]: time="2026-03-13T00:50:16.229306497Z" level=error msg="ContainerStatus for \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\": not found"
Mar 13 00:50:16.229411 kubelet[3167]: E0313 00:50:16.229396 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\": not found" containerID="f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec"
Mar 13 00:50:16.229441 kubelet[3167]: I0313 00:50:16.229424 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec"} err="failed to get container status \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"f24e82000658d6e28f0bbb380e5c8171195c711b2f0e2f208cfce74be9bff2ec\": not found"
Mar 13 00:50:16.229465 kubelet[3167]: I0313 00:50:16.229453 3167 scope.go:117] "RemoveContainer" containerID="4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171"
Mar 13 00:50:16.229583 containerd[1719]: time="2026-03-13T00:50:16.229562130Z" level=error msg="ContainerStatus for \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\": not found"
Mar 13 00:50:16.229644 kubelet[3167]: E0313 00:50:16.229629 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\": not found" containerID="4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171"
Mar 13 00:50:16.229671 kubelet[3167]: I0313 00:50:16.229644 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171"} err="failed to get container status \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a61f153eb8b9fd5a78da840da8c1c2522366d32b5885d8836eb07bbf4c0b171\": not found"
Mar 13 00:50:16.636547 kubelet[3167]: I0313 00:50:16.636521 3167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15fba49b-bd38-41e3-971b-f31fbc2964df" path="/var/lib/kubelet/pods/15fba49b-bd38-41e3-971b-f31fbc2964df/volumes"
Mar 13 00:50:16.636871 kubelet[3167]: I0313 00:50:16.636842 3167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da219c9-c922-4d93-81de-b9403855675c" path="/var/lib/kubelet/pods/3da219c9-c922-4d93-81de-b9403855675c/volumes"
Mar 13 00:50:16.755564 kubelet[3167]: E0313 00:50:16.755532 3167 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 13 00:50:17.096632 sshd[4909]: Connection closed by 10.200.16.10 port 36420
Mar 13 00:50:17.097154 sshd-session[4906]: pam_unix(sshd:session): session closed for user core
Mar 13 00:50:17.100274 systemd[1]: sshd@21-10.200.8.22:22-10.200.16.10:36420.service: Deactivated successfully.
Mar 13 00:50:17.102066 systemd[1]: session-24.scope: Deactivated successfully.
Mar 13 00:50:17.103513 systemd-logind[1696]: Session 24 logged out. Waiting for processes to exit.
Mar 13 00:50:17.104834 systemd-logind[1696]: Removed session 24.
Mar 13 00:50:17.206757 systemd[1]: Started sshd@22-10.200.8.22:22-10.200.16.10:36430.service - OpenSSH per-connection server daemon (10.200.16.10:36430).
Mar 13 00:50:17.739570 sshd[5057]: Accepted publickey for core from 10.200.16.10 port 36430 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI
Mar 13 00:50:17.741062 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:50:17.745334 systemd-logind[1696]: New session 25 of user core.
Mar 13 00:50:17.749273 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 13 00:50:18.284914 systemd[1]: Created slice kubepods-burstable-pod4415b4fe_ca3a_4939_8a44_1cfd5476eced.slice - libcontainer container kubepods-burstable-pod4415b4fe_ca3a_4939_8a44_1cfd5476eced.slice.
Mar 13 00:50:18.302753 sshd[5060]: Connection closed by 10.200.16.10 port 36430
Mar 13 00:50:18.306280 sshd-session[5057]: pam_unix(sshd:session): session closed for user core
Mar 13 00:50:18.310677 systemd[1]: sshd@22-10.200.8.22:22-10.200.16.10:36430.service: Deactivated successfully.
Mar 13 00:50:18.312445 systemd-logind[1696]: Session 25 logged out. Waiting for processes to exit.
Mar 13 00:50:18.313410 systemd[1]: session-25.scope: Deactivated successfully.
Mar 13 00:50:18.317397 systemd-logind[1696]: Removed session 25.
Mar 13 00:50:18.318274 kubelet[3167]: I0313 00:50:18.318244 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4415b4fe-ca3a-4939-8a44-1cfd5476eced-etc-cni-netd\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318515 kubelet[3167]: I0313 00:50:18.318283 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6x6m\" (UniqueName: \"kubernetes.io/projected/4415b4fe-ca3a-4939-8a44-1cfd5476eced-kube-api-access-x6x6m\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318515 kubelet[3167]: I0313 00:50:18.318301 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4415b4fe-ca3a-4939-8a44-1cfd5476eced-cilium-run\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318515 kubelet[3167]: I0313 00:50:18.318316 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4415b4fe-ca3a-4939-8a44-1cfd5476eced-cilium-config-path\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318515 kubelet[3167]: I0313 00:50:18.318334 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4415b4fe-ca3a-4939-8a44-1cfd5476eced-host-proc-sys-net\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318515 kubelet[3167]: I0313 00:50:18.318349 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4415b4fe-ca3a-4939-8a44-1cfd5476eced-bpf-maps\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318515 kubelet[3167]: I0313 00:50:18.318364 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4415b4fe-ca3a-4939-8a44-1cfd5476eced-lib-modules\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318656 kubelet[3167]: I0313 00:50:18.318379 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4415b4fe-ca3a-4939-8a44-1cfd5476eced-cilium-cgroup\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318656 kubelet[3167]: I0313 00:50:18.318397 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4415b4fe-ca3a-4939-8a44-1cfd5476eced-cni-path\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318656 kubelet[3167]: I0313 00:50:18.318413 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4415b4fe-ca3a-4939-8a44-1cfd5476eced-cilium-ipsec-secrets\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318656 kubelet[3167]: I0313 00:50:18.318430 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4415b4fe-ca3a-4939-8a44-1cfd5476eced-hubble-tls\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318656 kubelet[3167]: I0313 00:50:18.318457 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4415b4fe-ca3a-4939-8a44-1cfd5476eced-hostproc\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318656 kubelet[3167]: I0313 00:50:18.318471 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4415b4fe-ca3a-4939-8a44-1cfd5476eced-xtables-lock\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318801 kubelet[3167]: I0313 00:50:18.318486 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4415b4fe-ca3a-4939-8a44-1cfd5476eced-clustermesh-secrets\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.318801 kubelet[3167]: I0313 00:50:18.318501 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4415b4fe-ca3a-4939-8a44-1cfd5476eced-host-proc-sys-kernel\") pod \"cilium-tbmqd\" (UID: \"4415b4fe-ca3a-4939-8a44-1cfd5476eced\") " pod="kube-system/cilium-tbmqd"
Mar 13 00:50:18.416058 systemd[1]: Started sshd@23-10.200.8.22:22-10.200.16.10:36440.service - OpenSSH per-connection server daemon (10.200.16.10:36440).
Mar 13 00:50:18.595438 containerd[1719]: time="2026-03-13T00:50:18.595354258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbmqd,Uid:4415b4fe-ca3a-4939-8a44-1cfd5476eced,Namespace:kube-system,Attempt:0,}"
Mar 13 00:50:18.640299 containerd[1719]: time="2026-03-13T00:50:18.640258111Z" level=info msg="connecting to shim 10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873" address="unix:///run/containerd/s/cc22a54b151238ddb363d070925ae17acc62e15106815fb928dbfdf628ac9574" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:50:18.663261 systemd[1]: Started cri-containerd-10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873.scope - libcontainer container 10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873.
Mar 13 00:50:18.686492 containerd[1719]: time="2026-03-13T00:50:18.686465703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbmqd,Uid:4415b4fe-ca3a-4939-8a44-1cfd5476eced,Namespace:kube-system,Attempt:0,} returns sandbox id \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\""
Mar 13 00:50:18.694975 containerd[1719]: time="2026-03-13T00:50:18.694946871Z" level=info msg="CreateContainer within sandbox \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 13 00:50:18.708679 containerd[1719]: time="2026-03-13T00:50:18.708654862Z" level=info msg="Container c455ff409734b0ba95536ed146bdd4e0d144801d1ca33300c024c9bfc9432d3d: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:50:18.724059 containerd[1719]: time="2026-03-13T00:50:18.724036859Z" level=info msg="CreateContainer within sandbox \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c455ff409734b0ba95536ed146bdd4e0d144801d1ca33300c024c9bfc9432d3d\""
Mar 13 00:50:18.724516 containerd[1719]: time="2026-03-13T00:50:18.724424257Z" level=info msg="StartContainer for \"c455ff409734b0ba95536ed146bdd4e0d144801d1ca33300c024c9bfc9432d3d\""
Mar 13 00:50:18.725555 containerd[1719]: time="2026-03-13T00:50:18.725526875Z" level=info msg="connecting to shim c455ff409734b0ba95536ed146bdd4e0d144801d1ca33300c024c9bfc9432d3d" address="unix:///run/containerd/s/cc22a54b151238ddb363d070925ae17acc62e15106815fb928dbfdf628ac9574" protocol=ttrpc version=3
Mar 13 00:50:18.739283 systemd[1]: Started cri-containerd-c455ff409734b0ba95536ed146bdd4e0d144801d1ca33300c024c9bfc9432d3d.scope - libcontainer container c455ff409734b0ba95536ed146bdd4e0d144801d1ca33300c024c9bfc9432d3d.
Mar 13 00:50:18.765424 containerd[1719]: time="2026-03-13T00:50:18.765368858Z" level=info msg="StartContainer for \"c455ff409734b0ba95536ed146bdd4e0d144801d1ca33300c024c9bfc9432d3d\" returns successfully"
Mar 13 00:50:18.768605 systemd[1]: cri-containerd-c455ff409734b0ba95536ed146bdd4e0d144801d1ca33300c024c9bfc9432d3d.scope: Deactivated successfully.
Mar 13 00:50:18.770059 containerd[1719]: time="2026-03-13T00:50:18.769978356Z" level=info msg="received container exit event container_id:\"c455ff409734b0ba95536ed146bdd4e0d144801d1ca33300c024c9bfc9432d3d\" id:\"c455ff409734b0ba95536ed146bdd4e0d144801d1ca33300c024c9bfc9432d3d\" pid:5135 exited_at:{seconds:1773363018 nanos:769793719}"
Mar 13 00:50:18.958454 sshd[5071]: Accepted publickey for core from 10.200.16.10 port 36440 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI
Mar 13 00:50:18.959450 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:50:18.963518 systemd-logind[1696]: New session 26 of user core.
Mar 13 00:50:18.971202 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 13 00:50:19.191434 containerd[1719]: time="2026-03-13T00:50:19.191402610Z" level=info msg="CreateContainer within sandbox \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 13 00:50:19.209355 containerd[1719]: time="2026-03-13T00:50:19.209294274Z" level=info msg="Container bb018c08a8b784e9341ef936af1ec8426d8555171de74a5defe3410d80306002: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:50:19.231521 containerd[1719]: time="2026-03-13T00:50:19.231495535Z" level=info msg="CreateContainer within sandbox \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb018c08a8b784e9341ef936af1ec8426d8555171de74a5defe3410d80306002\""
Mar 13 00:50:19.232069 containerd[1719]: time="2026-03-13T00:50:19.231971421Z" level=info msg="StartContainer for \"bb018c08a8b784e9341ef936af1ec8426d8555171de74a5defe3410d80306002\""
Mar 13 00:50:19.233166 containerd[1719]: time="2026-03-13T00:50:19.233141603Z" level=info msg="connecting to shim bb018c08a8b784e9341ef936af1ec8426d8555171de74a5defe3410d80306002" address="unix:///run/containerd/s/cc22a54b151238ddb363d070925ae17acc62e15106815fb928dbfdf628ac9574" protocol=ttrpc version=3
Mar 13 00:50:19.250266 systemd[1]: Started cri-containerd-bb018c08a8b784e9341ef936af1ec8426d8555171de74a5defe3410d80306002.scope - libcontainer container bb018c08a8b784e9341ef936af1ec8426d8555171de74a5defe3410d80306002.
Mar 13 00:50:19.257144 sshd[5167]: Connection closed by 10.200.16.10 port 36440
Mar 13 00:50:19.258429 sshd-session[5071]: pam_unix(sshd:session): session closed for user core
Mar 13 00:50:19.262198 systemd[1]: sshd@23-10.200.8.22:22-10.200.16.10:36440.service: Deactivated successfully.
Mar 13 00:50:19.263602 systemd[1]: session-26.scope: Deactivated successfully.
Mar 13 00:50:19.265935 systemd-logind[1696]: Session 26 logged out. Waiting for processes to exit.
Mar 13 00:50:19.268978 systemd-logind[1696]: Removed session 26.
Mar 13 00:50:19.282368 systemd[1]: cri-containerd-bb018c08a8b784e9341ef936af1ec8426d8555171de74a5defe3410d80306002.scope: Deactivated successfully.
Mar 13 00:50:19.283959 containerd[1719]: time="2026-03-13T00:50:19.283826791Z" level=info msg="received container exit event container_id:\"bb018c08a8b784e9341ef936af1ec8426d8555171de74a5defe3410d80306002\" id:\"bb018c08a8b784e9341ef936af1ec8426d8555171de74a5defe3410d80306002\" pid:5182 exited_at:{seconds:1773363019 nanos:283153097}"
Mar 13 00:50:19.284805 containerd[1719]: time="2026-03-13T00:50:19.284786752Z" level=info msg="StartContainer for \"bb018c08a8b784e9341ef936af1ec8426d8555171de74a5defe3410d80306002\" returns successfully"
Mar 13 00:50:19.369791 systemd[1]: Started sshd@24-10.200.8.22:22-10.200.16.10:36456.service - OpenSSH per-connection server daemon (10.200.16.10:36456).
Mar 13 00:50:19.635438 kubelet[3167]: E0313 00:50:19.635342 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-mzkpj" podUID="9ebee503-2411-4133-8adb-2947d0bb1408"
Mar 13 00:50:19.901138 sshd[5216]: Accepted publickey for core from 10.200.16.10 port 36456 ssh2: RSA SHA256:K335DX4DWRJ0Q+v3xlgEnkVNcOnxJOTLqPWWP0OACKI
Mar 13 00:50:19.902101 sshd-session[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:50:19.906442 systemd-logind[1696]: New session 27 of user core.
Mar 13 00:50:19.910244 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 13 00:50:20.188642 containerd[1719]: time="2026-03-13T00:50:20.188564346Z" level=info msg="CreateContainer within sandbox \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 13 00:50:20.208850 containerd[1719]: time="2026-03-13T00:50:20.207732861Z" level=info msg="Container a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:50:20.225048 containerd[1719]: time="2026-03-13T00:50:20.225022741Z" level=info msg="CreateContainer within sandbox \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d\""
Mar 13 00:50:20.226139 containerd[1719]: time="2026-03-13T00:50:20.225512701Z" level=info msg="StartContainer for \"a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d\""
Mar 13 00:50:20.226978 containerd[1719]: time="2026-03-13T00:50:20.226942949Z" level=info msg="connecting to shim a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d" address="unix:///run/containerd/s/cc22a54b151238ddb363d070925ae17acc62e15106815fb928dbfdf628ac9574" protocol=ttrpc version=3
Mar 13 00:50:20.249261 systemd[1]: Started cri-containerd-a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d.scope - libcontainer container a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d.
Mar 13 00:50:20.299306 systemd[1]: cri-containerd-a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d.scope: Deactivated successfully.
Mar 13 00:50:20.300778 containerd[1719]: time="2026-03-13T00:50:20.300752072Z" level=info msg="received container exit event container_id:\"a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d\" id:\"a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d\" pid:5238 exited_at:{seconds:1773363020 nanos:300422880}"
Mar 13 00:50:20.308373 containerd[1719]: time="2026-03-13T00:50:20.308337016Z" level=info msg="StartContainer for \"a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d\" returns successfully"
Mar 13 00:50:20.317998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7c9204398782eb7cb40fe51d9e7cc2c4d80a6543bf2fccaa577ed41410c3a2d-rootfs.mount: Deactivated successfully.
Mar 13 00:50:21.192451 containerd[1719]: time="2026-03-13T00:50:21.192387885Z" level=info msg="CreateContainer within sandbox \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 13 00:50:21.207041 containerd[1719]: time="2026-03-13T00:50:21.206511417Z" level=info msg="Container 32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:50:21.218635 containerd[1719]: time="2026-03-13T00:50:21.218606432Z" level=info msg="CreateContainer within sandbox \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898\""
Mar 13 00:50:21.219160 containerd[1719]: time="2026-03-13T00:50:21.219096345Z" level=info msg="StartContainer for \"32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898\""
Mar 13 00:50:21.220195 containerd[1719]: time="2026-03-13T00:50:21.220102044Z" level=info msg="connecting to shim 32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898" address="unix:///run/containerd/s/cc22a54b151238ddb363d070925ae17acc62e15106815fb928dbfdf628ac9574" protocol=ttrpc version=3
Mar 13 00:50:21.239238 systemd[1]: Started cri-containerd-32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898.scope - libcontainer container 32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898.
Mar 13 00:50:21.258680 systemd[1]: cri-containerd-32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898.scope: Deactivated successfully.
Mar 13 00:50:21.265160 containerd[1719]: time="2026-03-13T00:50:21.264332390Z" level=info msg="received container exit event container_id:\"32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898\" id:\"32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898\" pid:5280 exited_at:{seconds:1773363021 nanos:259194750}"
Mar 13 00:50:21.269928 containerd[1719]: time="2026-03-13T00:50:21.269908419Z" level=info msg="StartContainer for \"32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898\" returns successfully"
Mar 13 00:50:21.279845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32c33eb9e9b9b6efda033a1262eb906b6848db83a4c8934048b38d68c78ef898-rootfs.mount: Deactivated successfully.
Mar 13 00:50:21.324531 kubelet[3167]: I0313 00:50:21.324495 3167 setters.go:543] "Node became not ready" node="ci-4459.2.4-n-0cdf9482ba" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T00:50:21Z","lastTransitionTime":"2026-03-13T00:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 13 00:50:21.635856 kubelet[3167]: E0313 00:50:21.635752 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-mzkpj" podUID="9ebee503-2411-4133-8adb-2947d0bb1408" Mar 13 00:50:21.756127 kubelet[3167]: E0313 00:50:21.756079 3167 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 13 00:50:22.196847 containerd[1719]: time="2026-03-13T00:50:22.196776839Z" level=info msg="CreateContainer within sandbox \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 13 00:50:22.214602 containerd[1719]: time="2026-03-13T00:50:22.214575123Z" level=info msg="Container 4ce8d84d37b52aa988bed39bab22a22f01d2c45231a303d052a8afbd185e81ee: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:50:22.231652 containerd[1719]: time="2026-03-13T00:50:22.231622842Z" level=info msg="CreateContainer within sandbox \"10cbc798aaffaf745d5d9c588b4644b80a99c173120c39529710eae1220cf873\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4ce8d84d37b52aa988bed39bab22a22f01d2c45231a303d052a8afbd185e81ee\"" Mar 13 00:50:22.232103 containerd[1719]: time="2026-03-13T00:50:22.232008106Z" level=info 
msg="StartContainer for \"4ce8d84d37b52aa988bed39bab22a22f01d2c45231a303d052a8afbd185e81ee\"" Mar 13 00:50:22.233259 containerd[1719]: time="2026-03-13T00:50:22.233175084Z" level=info msg="connecting to shim 4ce8d84d37b52aa988bed39bab22a22f01d2c45231a303d052a8afbd185e81ee" address="unix:///run/containerd/s/cc22a54b151238ddb363d070925ae17acc62e15106815fb928dbfdf628ac9574" protocol=ttrpc version=3 Mar 13 00:50:22.252237 systemd[1]: Started cri-containerd-4ce8d84d37b52aa988bed39bab22a22f01d2c45231a303d052a8afbd185e81ee.scope - libcontainer container 4ce8d84d37b52aa988bed39bab22a22f01d2c45231a303d052a8afbd185e81ee. Mar 13 00:50:22.290099 containerd[1719]: time="2026-03-13T00:50:22.290028361Z" level=info msg="StartContainer for \"4ce8d84d37b52aa988bed39bab22a22f01d2c45231a303d052a8afbd185e81ee\" returns successfully" Mar 13 00:50:22.586135 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Mar 13 00:50:23.209624 kubelet[3167]: I0313 00:50:23.209435 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tbmqd" podStartSLOduration=5.209420408 podStartE2EDuration="5.209420408s" podCreationTimestamp="2026-03-13 00:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:50:23.20916258 +0000 UTC m=+266.677743269" watchObservedRunningTime="2026-03-13 00:50:23.209420408 +0000 UTC m=+266.678001101" Mar 13 00:50:23.635981 kubelet[3167]: E0313 00:50:23.635836 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-mzkpj" podUID="9ebee503-2411-4133-8adb-2947d0bb1408" Mar 13 00:50:25.081300 systemd-networkd[1343]: lxc_health: Link UP Mar 13 00:50:25.087457 systemd-networkd[1343]: lxc_health: Gained 
carrier Mar 13 00:50:25.635804 kubelet[3167]: E0313 00:50:25.635404 3167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-mzkpj" podUID="9ebee503-2411-4133-8adb-2947d0bb1408" Mar 13 00:50:26.842221 systemd-networkd[1343]: lxc_health: Gained IPv6LL Mar 13 00:50:30.690817 sshd[5219]: Connection closed by 10.200.16.10 port 36456 Mar 13 00:50:30.693129 sshd-session[5216]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:30.696493 systemd-logind[1696]: Session 27 logged out. Waiting for processes to exit. Mar 13 00:50:30.696808 systemd[1]: sshd@24-10.200.8.22:22-10.200.16.10:36456.service: Deactivated successfully. Mar 13 00:50:30.698737 systemd[1]: session-27.scope: Deactivated successfully. Mar 13 00:50:30.700724 systemd-logind[1696]: Removed session 27.