Mar 12 01:21:55.245717 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026
Mar 12 01:21:55.245742 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:21:55.245753 kernel: BIOS-provided physical RAM map:
Mar 12 01:21:55.245759 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 12 01:21:55.245765 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 12 01:21:55.245770 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 12 01:21:55.245776 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 12 01:21:55.245782 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 12 01:21:55.245787 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 12 01:21:55.245793 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 12 01:21:55.245800 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 12 01:21:55.245806 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Mar 12 01:21:55.245811 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Mar 12 01:21:55.245817 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Mar 12 01:21:55.245824 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 12 01:21:55.245829 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 12 01:21:55.245838 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 12 01:21:55.245844 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 12 01:21:55.245849 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 12 01:21:55.245855 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 12 01:21:55.245860 kernel: NX (Execute Disable) protection: active
Mar 12 01:21:55.245866 kernel: APIC: Static calls initialized
Mar 12 01:21:55.245872 kernel: efi: EFI v2.7 by EDK II
Mar 12 01:21:55.245877 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Mar 12 01:21:55.245883 kernel: SMBIOS 2.8 present.
Mar 12 01:21:55.245889 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 12 01:21:55.245895 kernel: Hypervisor detected: KVM
Mar 12 01:21:55.245903 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 12 01:21:55.245909 kernel: kvm-clock: using sched offset of 6220738025 cycles
Mar 12 01:21:55.245915 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 01:21:55.245921 kernel: tsc: Detected 2445.424 MHz processor
Mar 12 01:21:55.245927 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 12 01:21:55.245934 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 12 01:21:55.245939 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 12 01:21:55.245945 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 12 01:21:55.245951 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 12 01:21:55.245960 kernel: Using GB pages for direct mapping
Mar 12 01:21:55.245966 kernel: Secure boot disabled
Mar 12 01:21:55.245972 kernel: ACPI: Early table checksum verification disabled
Mar 12 01:21:55.245978 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 12 01:21:55.245987 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 12 01:21:55.245994 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:21:55.246000 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:21:55.246009 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 12 01:21:55.246015 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:21:55.246021 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:21:55.246027 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:21:55.246034 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:21:55.246040 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 12 01:21:55.246046 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 12 01:21:55.246054 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 12 01:21:55.246061 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 12 01:21:55.246067 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 12 01:21:55.246073 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 12 01:21:55.246079 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 12 01:21:55.246085 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 12 01:21:55.246091 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 12 01:21:55.246097 kernel: No NUMA configuration found
Mar 12 01:21:55.246103 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 12 01:21:55.246112 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 12 01:21:55.246118 kernel: Zone ranges:
Mar 12 01:21:55.246124 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 12 01:21:55.246131 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 12 01:21:55.246137 kernel: Normal empty
Mar 12 01:21:55.246143 kernel: Movable zone start for each node
Mar 12 01:21:55.246149 kernel: Early memory node ranges
Mar 12 01:21:55.246155 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 12 01:21:55.246161 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 12 01:21:55.246168 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 12 01:21:55.246176 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 12 01:21:55.246182 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 12 01:21:55.246188 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 12 01:21:55.246195 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 12 01:21:55.246201 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 01:21:55.246207 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 12 01:21:55.246213 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 12 01:21:55.246219 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 01:21:55.246225 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 12 01:21:55.246234 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 12 01:21:55.246240 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 12 01:21:55.246246 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 12 01:21:55.246252 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 12 01:21:55.246258 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 12 01:21:55.246264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 12 01:21:55.246271 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 12 01:21:55.246277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 12 01:21:55.246283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 12 01:21:55.246289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 12 01:21:55.246298 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 12 01:21:55.246304 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 12 01:21:55.246310 kernel: TSC deadline timer available
Mar 12 01:21:55.246316 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 12 01:21:55.246323 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 12 01:21:55.246329 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 12 01:21:55.246335 kernel: kvm-guest: setup PV sched yield
Mar 12 01:21:55.246341 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 12 01:21:55.246347 kernel: Booting paravirtualized kernel on KVM
Mar 12 01:21:55.246356 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 12 01:21:55.246363 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 12 01:21:55.246369 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 12 01:21:55.246375 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 12 01:21:55.246381 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 12 01:21:55.246387 kernel: kvm-guest: PV spinlocks enabled
Mar 12 01:21:55.246394 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 12 01:21:55.246401 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:21:55.246410 kernel: random: crng init done
Mar 12 01:21:55.246416 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 01:21:55.246422 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 01:21:55.246428 kernel: Fallback order for Node 0: 0
Mar 12 01:21:55.246435 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 12 01:21:55.246441 kernel: Policy zone: DMA32
Mar 12 01:21:55.246447 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 01:21:55.246453 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved)
Mar 12 01:21:55.246460 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 12 01:21:55.246468 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 12 01:21:55.246474 kernel: ftrace: allocated 149 pages with 4 groups
Mar 12 01:21:55.246481 kernel: Dynamic Preempt: voluntary
Mar 12 01:21:55.246489 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 01:21:55.246516 kernel: rcu: RCU event tracing is enabled.
Mar 12 01:21:55.246533 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 12 01:21:55.246548 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 01:21:55.246561 kernel: Rude variant of Tasks RCU enabled.
Mar 12 01:21:55.246573 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 01:21:55.246585 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 01:21:55.246591 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 12 01:21:55.246598 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 12 01:21:55.246643 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 01:21:55.246687 kernel: Console: colour dummy device 80x25
Mar 12 01:21:55.246694 kernel: printk: console [ttyS0] enabled
Mar 12 01:21:55.246700 kernel: ACPI: Core revision 20230628
Mar 12 01:21:55.246707 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 12 01:21:55.246717 kernel: APIC: Switch to symmetric I/O mode setup
Mar 12 01:21:55.246723 kernel: x2apic enabled
Mar 12 01:21:55.246730 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 12 01:21:55.246737 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 12 01:21:55.246743 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 12 01:21:55.246750 kernel: kvm-guest: setup PV IPIs
Mar 12 01:21:55.246756 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 12 01:21:55.246763 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 12 01:21:55.246769 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 12 01:21:55.246778 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 12 01:21:55.246785 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 12 01:21:55.246791 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 12 01:21:55.246798 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 12 01:21:55.246804 kernel: Spectre V2 : Mitigation: Retpolines
Mar 12 01:21:55.246811 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 12 01:21:55.246818 kernel: Speculative Store Bypass: Vulnerable
Mar 12 01:21:55.246825 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 12 01:21:55.246832 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 12 01:21:55.246841 kernel: active return thunk: srso_alias_return_thunk
Mar 12 01:21:55.246847 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 12 01:21:55.246854 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 12 01:21:55.246860 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 12 01:21:55.246867 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 12 01:21:55.246873 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 12 01:21:55.246880 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 12 01:21:55.246886 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 12 01:21:55.246895 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 12 01:21:55.246902 kernel: Freeing SMP alternatives memory: 32K
Mar 12 01:21:55.246908 kernel: pid_max: default: 32768 minimum: 301
Mar 12 01:21:55.246915 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 12 01:21:55.246921 kernel: landlock: Up and running.
Mar 12 01:21:55.246928 kernel: SELinux: Initializing.
Mar 12 01:21:55.246934 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:21:55.246941 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:21:55.246947 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 12 01:21:55.246956 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:21:55.246963 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:21:55.246970 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:21:55.246976 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 12 01:21:55.246982 kernel: signal: max sigframe size: 1776
Mar 12 01:21:55.246989 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 01:21:55.246996 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 01:21:55.247002 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 12 01:21:55.247009 kernel: smp: Bringing up secondary CPUs ...
Mar 12 01:21:55.247018 kernel: smpboot: x86: Booting SMP configuration:
Mar 12 01:21:55.247025 kernel: .... node #0, CPUs: #1 #2 #3
Mar 12 01:21:55.247031 kernel: smp: Brought up 1 node, 4 CPUs
Mar 12 01:21:55.247038 kernel: smpboot: Max logical packages: 1
Mar 12 01:21:55.247044 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 12 01:21:55.247051 kernel: devtmpfs: initialized
Mar 12 01:21:55.247057 kernel: x86/mm: Memory block size: 128MB
Mar 12 01:21:55.247064 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 12 01:21:55.247070 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 12 01:21:55.247079 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 12 01:21:55.247086 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 12 01:21:55.247092 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 12 01:21:55.247099 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 01:21:55.247106 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 12 01:21:55.247112 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 01:21:55.247119 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 01:21:55.247125 kernel: audit: initializing netlink subsys (disabled)
Mar 12 01:21:55.247132 kernel: audit: type=2000 audit(1773278514.315:1): state=initialized audit_enabled=0 res=1
Mar 12 01:21:55.247140 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 01:21:55.247147 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 12 01:21:55.247153 kernel: cpuidle: using governor menu
Mar 12 01:21:55.247160 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 01:21:55.247166 kernel: dca service started, version 1.12.1
Mar 12 01:21:55.247173 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 12 01:21:55.247179 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 12 01:21:55.247186 kernel: PCI: Using configuration type 1 for base access
Mar 12 01:21:55.247192 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 12 01:21:55.247201 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 01:21:55.247208 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 01:21:55.247214 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 01:21:55.247221 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 01:21:55.247227 kernel: ACPI: Added _OSI(Module Device)
Mar 12 01:21:55.247234 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 01:21:55.247240 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 01:21:55.247247 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 01:21:55.247253 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 12 01:21:55.247262 kernel: ACPI: Interpreter enabled
Mar 12 01:21:55.247268 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 12 01:21:55.247275 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 12 01:21:55.247282 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 12 01:21:55.247288 kernel: PCI: Using E820 reservations for host bridge windows
Mar 12 01:21:55.247294 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 12 01:21:55.247301 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 12 01:21:55.247567 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 01:21:55.247814 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 12 01:21:55.247947 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 12 01:21:55.247957 kernel: PCI host bridge to bus 0000:00
Mar 12 01:21:55.248085 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 12 01:21:55.248199 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 12 01:21:55.248309 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 12 01:21:55.248420 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 12 01:21:55.248581 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 12 01:21:55.248893 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 12 01:21:55.249010 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 12 01:21:55.249155 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 12 01:21:55.249288 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 12 01:21:55.249408 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 12 01:21:55.249639 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 12 01:21:55.249814 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 12 01:21:55.249973 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 12 01:21:55.250098 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 12 01:21:55.250228 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 12 01:21:55.250348 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 12 01:21:55.250468 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 12 01:21:55.250595 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 12 01:21:55.250828 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 12 01:21:55.250955 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 12 01:21:55.251130 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 12 01:21:55.251315 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 12 01:21:55.251526 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 12 01:21:55.251997 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 12 01:21:55.252337 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 12 01:21:55.252591 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 12 01:21:55.252917 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 12 01:21:55.253145 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 12 01:21:55.253337 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 12 01:21:55.253528 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 12 01:21:55.253763 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 12 01:21:55.253914 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 12 01:21:55.254059 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 12 01:21:55.254199 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 12 01:21:55.254210 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 12 01:21:55.254217 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 12 01:21:55.254224 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 12 01:21:55.254231 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 12 01:21:55.254243 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 12 01:21:55.254250 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 12 01:21:55.254256 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 12 01:21:55.254263 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 12 01:21:55.254269 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 12 01:21:55.254276 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 12 01:21:55.254283 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 12 01:21:55.254289 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 12 01:21:55.254296 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 12 01:21:55.254305 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 12 01:21:55.254312 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 12 01:21:55.254319 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 12 01:21:55.254326 kernel: iommu: Default domain type: Translated
Mar 12 01:21:55.254333 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 12 01:21:55.254340 kernel: efivars: Registered efivars operations
Mar 12 01:21:55.254346 kernel: PCI: Using ACPI for IRQ routing
Mar 12 01:21:55.254354 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 12 01:21:55.254360 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 12 01:21:55.254369 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 12 01:21:55.254376 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 12 01:21:55.254383 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 12 01:21:55.254554 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 12 01:21:55.254772 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 12 01:21:55.254897 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 12 01:21:55.254906 kernel: vgaarb: loaded
Mar 12 01:21:55.254913 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 12 01:21:55.254923 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 12 01:21:55.254942 kernel: clocksource: Switched to clocksource kvm-clock
Mar 12 01:21:55.254955 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 01:21:55.254967 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 01:21:55.254974 kernel: pnp: PnP ACPI init
Mar 12 01:21:55.255116 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 12 01:21:55.255137 kernel: pnp: PnP ACPI: found 6 devices
Mar 12 01:21:55.255149 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 12 01:21:55.255161 kernel: NET: Registered PF_INET protocol family
Mar 12 01:21:55.255178 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 01:21:55.255191 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 01:21:55.255198 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 01:21:55.255205 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 01:21:55.255212 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 01:21:55.255218 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 01:21:55.255225 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:21:55.255232 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:21:55.255245 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 01:21:55.255257 kernel: NET: Registered PF_XDP protocol family
Mar 12 01:21:55.255429 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 12 01:21:55.255579 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 12 01:21:55.255916 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 12 01:21:55.256033 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 12 01:21:55.256141 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 12 01:21:55.256249 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 12 01:21:55.256366 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 12 01:21:55.256530 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 12 01:21:55.256542 kernel: PCI: CLS 0 bytes, default 64
Mar 12 01:21:55.256549 kernel: Initialise system trusted keyrings
Mar 12 01:21:55.256556 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 01:21:55.256563 kernel: Key type asymmetric registered
Mar 12 01:21:55.256569 kernel: Asymmetric key parser 'x509' registered
Mar 12 01:21:55.256576 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 12 01:21:55.256583 kernel: io scheduler mq-deadline registered
Mar 12 01:21:55.256594 kernel: io scheduler kyber registered
Mar 12 01:21:55.256739 kernel: io scheduler bfq registered
Mar 12 01:21:55.256747 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 12 01:21:55.256755 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 12 01:21:55.256762 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 12 01:21:55.256769 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 12 01:21:55.256776 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 01:21:55.256782 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 12 01:21:55.256789 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 12 01:21:55.256800 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 12 01:21:55.256807 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 12 01:21:55.256943 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 12 01:21:55.256954 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 12 01:21:55.257067 kernel: rtc_cmos 00:04: registered as rtc0
Mar 12 01:21:55.257182 kernel: rtc_cmos 00:04: setting system clock to 2026-03-12T01:21:54 UTC (1773278514)
Mar 12 01:21:55.257295 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 12 01:21:55.257304 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 12 01:21:55.257315 kernel: efifb: probing for efifb
Mar 12 01:21:55.257377 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Mar 12 01:21:55.257389 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Mar 12 01:21:55.257399 kernel: efifb: scrolling: redraw
Mar 12 01:21:55.257409 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Mar 12 01:21:55.257422 kernel: Console: switching to colour frame buffer device 100x37
Mar 12 01:21:55.257432 kernel: fb0: EFI VGA frame buffer device
Mar 12 01:21:55.257487 kernel: pstore: Using crash dump compression: deflate
Mar 12 01:21:55.257501 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 12 01:21:55.257516 kernel: NET: Registered PF_INET6 protocol family
Mar 12 01:21:55.257528 kernel: Segment Routing with IPv6
Mar 12 01:21:55.257538 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 01:21:55.257551 kernel: NET: Registered PF_PACKET protocol family
Mar 12 01:21:55.257598 kernel: Key type dns_resolver registered
Mar 12 01:21:55.257989 kernel: IPI shorthand broadcast: enabled
Mar 12 01:21:55.258021 kernel: sched_clock: Marking stable (1168022201, 365918707)->(1909013452, -375072544)
Mar 12 01:21:55.258031 kernel: registered taskstats version 1
Mar 12 01:21:55.258038 kernel: Loading compiled-in X.509 certificates
Mar 12 01:21:55.258047 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510'
Mar 12 01:21:55.258054 kernel: Key type .fscrypt registered
Mar 12 01:21:55.258061 kernel: Key type fscrypt-provisioning registered
Mar 12 01:21:55.258067 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 01:21:55.258074 kernel: ima: Allocated hash algorithm: sha1
Mar 12 01:21:55.258081 kernel: ima: No architecture policies found
Mar 12 01:21:55.258089 kernel: clk: Disabling unused clocks
Mar 12 01:21:55.258096 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 12 01:21:55.258102 kernel: Write protecting the kernel read-only data: 36864k
Mar 12 01:21:55.258112 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 12 01:21:55.258119 kernel: Run /init as init process
Mar 12 01:21:55.258126 kernel: with arguments:
Mar 12 01:21:55.258133 kernel: /init
Mar 12 01:21:55.258139 kernel: with environment:
Mar 12 01:21:55.258146 kernel: HOME=/
Mar 12 01:21:55.258153 kernel: TERM=linux
Mar 12 01:21:55.258163 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 01:21:55.258174 systemd[1]: Detected virtualization kvm.
Mar 12 01:21:55.258182 systemd[1]: Detected architecture x86-64.
Mar 12 01:21:55.258189 systemd[1]: Running in initrd.
Mar 12 01:21:55.258196 systemd[1]: No hostname configured, using default hostname.
Mar 12 01:21:55.258203 systemd[1]: Hostname set to .
Mar 12 01:21:55.258211 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 01:21:55.258218 systemd[1]: Queued start job for default target initrd.target.
Mar 12 01:21:55.258225 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:21:55.258235 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:21:55.258243 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 01:21:55.258251 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 01:21:55.258258 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 01:21:55.258271 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 01:21:55.258282 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 01:21:55.258289 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 01:21:55.258297 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:21:55.258304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:21:55.258311 systemd[1]: Reached target paths.target - Path Units.
Mar 12 01:21:55.258319 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 01:21:55.258328 systemd[1]: Reached target swap.target - Swaps.
Mar 12 01:21:55.258336 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 01:21:55.258346 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 01:21:55.258361 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 01:21:55.258375 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 01:21:55.258386 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 01:21:55.258398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:21:55.258410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:21:55.258421 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:21:55.258439 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 01:21:55.258450 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 01:21:55.258463 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 01:21:55.258477 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 01:21:55.258489 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 01:21:55.258502 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 01:21:55.258514 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 01:21:55.258527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:21:55.258570 systemd-journald[194]: Collecting audit messages is disabled.
Mar 12 01:21:55.258593 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 01:21:55.258863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:21:55.258875 systemd-journald[194]: Journal started
Mar 12 01:21:55.258896 systemd-journald[194]: Runtime Journal (/run/log/journal/0735765f4a8847a7b45e8dfbced7ff38) is 6.0M, max 48.3M, 42.2M free.
Mar 12 01:21:55.275016 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 01:21:55.278550 systemd-modules-load[195]: Inserted module 'overlay'
Mar 12 01:21:55.283552 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 01:21:55.291401 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:21:55.323825 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 01:21:55.326315 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:21:55.331730 kernel: Bridge firewalling registered
Mar 12 01:21:55.329351 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 12 01:21:55.334035 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 01:21:55.337296 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 01:21:55.339898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:21:55.383186 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 01:21:55.385014 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:21:55.387344 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 01:21:55.401811 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:21:55.405529 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 01:21:55.415469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:21:55.431522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:21:55.438377 dracut-cmdline[225]: dracut-dracut-053
Mar 12 01:21:55.442993 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:21:55.473922 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:21:55.492959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 01:21:55.535842 systemd-resolved[251]: Positive Trust Anchors:
Mar 12 01:21:55.535884 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 01:21:55.535932 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 01:21:55.572350 systemd-resolved[251]: Defaulting to hostname 'linux'.
Mar 12 01:21:55.576998 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 01:21:55.578543 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:21:55.601756 kernel: SCSI subsystem initialized
Mar 12 01:21:55.611731 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 01:21:55.625036 kernel: iscsi: registered transport (tcp)
Mar 12 01:21:55.652211 kernel: iscsi: registered transport (qla4xxx)
Mar 12 01:21:55.652292 kernel: QLogic iSCSI HBA Driver
Mar 12 01:21:55.730917 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 01:21:55.749971 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 01:21:55.811083 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 01:21:55.813760 kernel: device-mapper: uevent: version 1.0.3
Mar 12 01:21:55.813829 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 12 01:21:55.870767 kernel: raid6: avx2x4 gen() 27999 MB/s
Mar 12 01:21:55.888763 kernel: raid6: avx2x2 gen() 24404 MB/s
Mar 12 01:21:55.909955 kernel: raid6: avx2x1 gen() 15810 MB/s
Mar 12 01:21:55.910032 kernel: raid6: using algorithm avx2x4 gen() 27999 MB/s
Mar 12 01:21:55.931893 kernel: raid6: .... xor() 5756 MB/s, rmw enabled
Mar 12 01:21:55.931976 kernel: raid6: using avx2x2 recovery algorithm
Mar 12 01:21:55.958210 kernel: xor: automatically using best checksumming function avx
Mar 12 01:21:56.215734 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 01:21:56.238954 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 01:21:56.274261 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 01:21:56.308564 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 12 01:21:56.317072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 01:21:56.343996 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 01:21:56.371268 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Mar 12 01:21:56.424164 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 01:21:56.443898 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 01:21:56.568120 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:21:56.585928 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 01:21:56.603091 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 12 01:21:56.621732 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 01:21:56.633538 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:21:56.644448 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 01:21:56.664775 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 12 01:21:56.682889 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 12 01:21:56.683198 kernel: cryptd: max_cpu_qlen set to 1000
Mar 12 01:21:56.694137 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 12 01:21:56.695269 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 01:21:56.721954 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 12 01:21:56.725837 kernel: GPT:9289727 != 19775487
Mar 12 01:21:56.725882 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 12 01:21:56.725914 kernel: GPT:9289727 != 19775487
Mar 12 01:21:56.725943 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 12 01:21:56.725965 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:21:56.736790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 01:21:56.749766 kernel: libata version 3.00 loaded.
Mar 12 01:21:56.736886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:21:56.760291 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:21:56.771770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:21:56.809309 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 12 01:21:56.771885 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:21:56.833087 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469)
Mar 12 01:21:56.792383 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:21:56.847736 kernel: AES CTR mode by8 optimization enabled
Mar 12 01:21:56.850839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:21:56.882144 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (470)
Mar 12 01:21:56.890817 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 12 01:21:56.904705 kernel: ahci 0000:00:1f.2: version 3.0
Mar 12 01:21:56.913703 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 12 01:21:56.904972 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 12 01:21:56.932999 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 12 01:21:56.933710 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 12 01:21:56.933981 kernel: scsi host0: ahci
Mar 12 01:21:56.914094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:21:56.944859 kernel: scsi host1: ahci
Mar 12 01:21:56.958309 kernel: scsi host2: ahci
Mar 12 01:21:56.958791 kernel: scsi host3: ahci
Mar 12 01:21:56.959065 kernel: scsi host4: ahci
Mar 12 01:21:56.959329 kernel: scsi host5: ahci
Mar 12 01:21:56.959586 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 12 01:21:56.945014 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 12 01:21:56.993994 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 12 01:21:56.994038 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 12 01:21:56.994055 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 12 01:21:56.994070 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 12 01:21:56.994103 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 12 01:21:56.994294 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 12 01:21:57.006216 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 12 01:21:57.035031 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 12 01:21:57.045743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:21:57.055596 disk-uuid[557]: Primary Header is updated.
Mar 12 01:21:57.055596 disk-uuid[557]: Secondary Entries is updated.
Mar 12 01:21:57.055596 disk-uuid[557]: Secondary Header is updated.
Mar 12 01:21:57.073754 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:21:57.089505 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:21:57.099965 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:21:57.291708 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 12 01:21:57.291802 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 12 01:21:57.296280 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 12 01:21:57.300719 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 12 01:21:57.300773 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 12 01:21:57.306715 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 12 01:21:57.306766 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 12 01:21:57.310926 kernel: ata3.00: applying bridge limits
Mar 12 01:21:57.311141 kernel: ata3.00: configured for UDMA/100
Mar 12 01:21:57.317756 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 12 01:21:57.383133 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 12 01:21:57.383546 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 12 01:21:57.395762 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 12 01:21:58.107025 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:21:58.107093 disk-uuid[559]: The operation has completed successfully.
Mar 12 01:21:58.148852 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 12 01:21:58.149061 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 12 01:21:58.188054 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 12 01:21:58.194461 sh[595]: Success
Mar 12 01:21:58.213748 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 12 01:21:58.287867 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 01:21:58.294548 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 01:21:58.304095 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 01:21:58.329363 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb
Mar 12 01:21:58.329433 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:21:58.329461 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 12 01:21:58.335802 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 12 01:21:58.335846 kernel: BTRFS info (device dm-0): using free space tree
Mar 12 01:21:58.348274 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 01:21:58.350000 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 01:21:58.363922 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 01:21:58.366481 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 01:21:58.397592 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:21:58.397721 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:21:58.397734 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:21:58.406734 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:21:58.421370 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 12 01:21:58.429744 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:21:58.441516 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 01:21:58.454164 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 01:21:58.516347 ignition[704]: Ignition 2.19.0
Mar 12 01:21:58.516386 ignition[704]: Stage: fetch-offline
Mar 12 01:21:58.516448 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:21:58.516465 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:21:58.516779 ignition[704]: parsed url from cmdline: ""
Mar 12 01:21:58.516786 ignition[704]: no config URL provided
Mar 12 01:21:58.516797 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 01:21:58.516815 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Mar 12 01:21:58.516859 ignition[704]: op(1): [started] loading QEMU firmware config module
Mar 12 01:21:58.516867 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 12 01:21:58.528824 ignition[704]: op(1): [finished] loading QEMU firmware config module
Mar 12 01:21:58.585013 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 01:21:58.602118 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 01:21:58.630877 systemd-networkd[782]: lo: Link UP
Mar 12 01:21:58.630910 systemd-networkd[782]: lo: Gained carrier
Mar 12 01:21:58.639448 systemd-networkd[782]: Enumeration completed
Mar 12 01:21:58.642923 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 01:21:58.648814 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:21:58.648820 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 01:21:58.670589 systemd-networkd[782]: eth0: Link UP
Mar 12 01:21:58.670698 systemd-networkd[782]: eth0: Gained carrier
Mar 12 01:21:58.670721 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:21:58.685584 systemd[1]: Reached target network.target - Network.
Mar 12 01:21:58.708755 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 12 01:21:58.782994 ignition[704]: parsing config with SHA512: ddc22262f7090469516e738e91722665a8ffef87d37ffdc99961aedf92c2534f17d672249e66cdd0f1dee32635bfedc05f1851ad7a055d7fb1d2b80c81f138b4
Mar 12 01:21:58.788765 unknown[704]: fetched base config from "system"
Mar 12 01:21:58.789735 unknown[704]: fetched user config from "qemu"
Mar 12 01:21:58.790291 ignition[704]: fetch-offline: fetch-offline passed
Mar 12 01:21:58.792340 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 01:21:58.790378 ignition[704]: Ignition finished successfully
Mar 12 01:21:58.797004 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 12 01:21:58.817028 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 01:21:58.838800 ignition[786]: Ignition 2.19.0
Mar 12 01:21:58.838823 ignition[786]: Stage: kargs
Mar 12 01:21:58.838992 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:21:58.839005 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:21:58.839826 ignition[786]: kargs: kargs passed
Mar 12 01:21:58.839881 ignition[786]: Ignition finished successfully
Mar 12 01:21:58.861546 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 01:21:58.877038 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 01:21:58.891741 ignition[793]: Ignition 2.19.0
Mar 12 01:21:58.891765 ignition[793]: Stage: disks
Mar 12 01:21:58.891974 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:21:58.891986 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:21:58.892708 ignition[793]: disks: disks passed
Mar 12 01:21:58.892752 ignition[793]: Ignition finished successfully
Mar 12 01:21:58.911001 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 01:21:58.912706 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 01:21:58.914596 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 01:21:58.929190 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 01:21:58.941829 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 01:21:58.944567 systemd[1]: Reached target basic.target - Basic System.
Mar 12 01:21:58.964915 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 01:21:58.985938 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 12 01:21:58.992215 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 01:21:59.014788 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 01:21:59.117779 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none.
Mar 12 01:21:59.117898 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 01:21:59.119941 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 01:21:59.131778 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 01:21:59.138770 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 01:21:59.145355 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Mar 12 01:21:59.142710 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 12 01:21:59.154112 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:21:59.154130 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:21:59.154139 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:21:59.142769 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 01:21:59.142803 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 01:21:59.158769 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:21:59.171716 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 01:21:59.178847 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 01:21:59.191995 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 01:21:59.237093 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 01:21:59.242006 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Mar 12 01:21:59.247465 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 01:21:59.252916 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 01:21:59.365032 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 01:21:59.389881 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 01:21:59.395431 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 01:21:59.408069 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 01:21:59.413367 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:21:59.430048 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 01:21:59.443128 ignition[926]: INFO : Ignition 2.19.0
Mar 12 01:21:59.443128 ignition[926]: INFO : Stage: mount
Mar 12 01:21:59.447107 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 01:21:59.447107 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:21:59.447107 ignition[926]: INFO : mount: mount passed
Mar 12 01:21:59.447107 ignition[926]: INFO : Ignition finished successfully
Mar 12 01:21:59.458263 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 01:21:59.467931 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 01:21:59.475835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 01:21:59.496264 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Mar 12 01:21:59.496314 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:21:59.498737 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:21:59.498761 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:21:59.506749 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:21:59.508727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 01:21:59.533982 ignition[955]: INFO : Ignition 2.19.0
Mar 12 01:21:59.533982 ignition[955]: INFO : Stage: files
Mar 12 01:21:59.537795 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 01:21:59.537795 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:21:59.543835 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Mar 12 01:21:59.546892 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 12 01:21:59.546892 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 12 01:21:59.557346 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 12 01:21:59.561414 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 12 01:21:59.565231 unknown[955]: wrote ssh authorized keys file for user: core
Mar 12 01:21:59.567851 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 12 01:21:59.572148 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 01:21:59.576848 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 12 01:21:59.628222 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 12 01:21:59.733849 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 01:21:59.733849 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 01:21:59.746329 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 12 01:21:59.889203 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 12 01:22:00.121109 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 01:22:00.121109 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 01:22:00.130240 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 12 01:22:00.269975 systemd-networkd[782]: eth0: Gained IPv6LL
Mar 12 01:22:00.724720 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 12 01:22:01.195214 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 01:22:01.195214 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 12 01:22:01.204196 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 01:22:01.209406 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 01:22:01.209406 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 12 01:22:01.209406 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 12 01:22:01.220352 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 12 01:22:01.225403 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 12 01:22:01.225403 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 12 01:22:01.225403 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 12 01:22:01.262726 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 12 01:22:01.268182 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 12 01:22:01.272287 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 12 01:22:01.272287 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 12 01:22:01.279716 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 12 01:22:01.279716 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 01:22:01.279716 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 01:22:01.279716 ignition[955]: INFO : files: files passed
Mar 12 01:22:01.279716 ignition[955]: INFO : Ignition finished successfully
Mar 12 01:22:01.298539 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 12 01:22:01.315024 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 12 01:22:01.317387 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 01:22:01.330883 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 12 01:22:01.331022 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 12 01:22:01.340344 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 12 01:22:01.347898 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 01:22:01.347898 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 01:22:01.356894 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 01:22:01.363553 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 01:22:01.365802 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 12 01:22:01.380918 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 12 01:22:01.412387 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 12 01:22:01.412543 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 12 01:22:01.418078 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 12 01:22:01.423584 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 12 01:22:01.428773 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 12 01:22:01.429632 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 12 01:22:01.450938 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 01:22:01.467893 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 12 01:22:01.483974 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:22:01.485930 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:22:01.493257 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 01:22:01.499378 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 12 01:22:01.499528 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 01:22:01.507540 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 12 01:22:01.509460 systemd[1]: Stopped target basic.target - Basic System.
Mar 12 01:22:01.517407 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 12 01:22:01.522455 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 01:22:01.527387 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 12 01:22:01.533470 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 12 01:22:01.542227 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 01:22:01.545466 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 12 01:22:01.551322 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 12 01:22:01.556518 systemd[1]: Stopped target swap.target - Swaps.
Mar 12 01:22:01.561416 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 12 01:22:01.561803 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 01:22:01.566542 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:22:01.572289 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:22:01.579355 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 12 01:22:01.579580 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:22:01.586071 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 12 01:22:01.586225 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 12 01:22:01.592009 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 12 01:22:01.592137 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 01:22:01.597634 systemd[1]: Stopped target paths.target - Path Units.
Mar 12 01:22:01.603524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 12 01:22:01.607772 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:22:01.612373 systemd[1]: Stopped target slices.target - Slice Units.
Mar 12 01:22:01.617792 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 12 01:22:01.623419 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 12 01:22:01.623534 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 01:22:01.628461 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 12 01:22:01.628562 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 01:22:01.634255 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 12 01:22:01.634383 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 01:22:01.641561 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 12 01:22:01.641758 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 12 01:22:01.663011 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 12 01:22:01.669517 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 12 01:22:01.681969 ignition[1009]: INFO : Ignition 2.19.0
Mar 12 01:22:01.681969 ignition[1009]: INFO : Stage: umount
Mar 12 01:22:01.681969 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 01:22:01.681969 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:22:01.681969 ignition[1009]: INFO : umount: umount passed
Mar 12 01:22:01.681969 ignition[1009]: INFO : Ignition finished successfully
Mar 12 01:22:01.673490 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 12 01:22:01.673874 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:22:01.680846 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 12 01:22:01.681000 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 01:22:01.684971 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 12 01:22:01.685125 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 12 01:22:01.694590 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 12 01:22:01.694830 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 12 01:22:01.705150 systemd[1]: Stopped target network.target - Network.
Mar 12 01:22:01.708915 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 12 01:22:01.708991 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 12 01:22:01.714027 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 12 01:22:01.714086 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 12 01:22:01.716283 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 12 01:22:01.716330 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 12 01:22:01.723220 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 12 01:22:01.723275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 12 01:22:01.730476 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 12 01:22:01.735571 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 12 01:22:01.744795 systemd-networkd[782]: eth0: DHCPv6 lease lost
Mar 12 01:22:01.745596 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 12 01:22:01.746539 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 12 01:22:01.746894 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 12 01:22:01.755576 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 12 01:22:01.755884 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 12 01:22:01.760127 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 12 01:22:01.760177 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:22:01.775860 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 12 01:22:01.781575 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 12 01:22:01.781760 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 01:22:01.788946 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 01:22:01.789029 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:22:01.795853 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 12 01:22:01.795917 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:22:01.799467 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 12 01:22:01.799541 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:22:01.807000 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 01:22:01.923714 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 12 01:22:01.815251 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 12 01:22:01.815438 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 12 01:22:01.840182 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 12 01:22:01.840398 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 01:22:01.846351 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 12 01:22:01.846424 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:22:01.854800 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 12 01:22:01.854853 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:22:01.856759 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 12 01:22:01.856820 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 01:22:01.857730 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 12 01:22:01.857779 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 12 01:22:01.858511 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 01:22:01.858554 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:22:01.860331 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 12 01:22:01.860409 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 12 01:22:01.862515 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 12 01:22:01.863382 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 12 01:22:01.863434 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:22:01.864308 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 12 01:22:01.864354 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:22:01.865188 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 12 01:22:01.865235 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:22:01.865676 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:22:01.865724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:22:01.866566 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 12 01:22:01.866756 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 12 01:22:01.876793 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 12 01:22:01.876927 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 12 01:22:01.877339 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 12 01:22:01.879050 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 12 01:22:01.894220 systemd[1]: Switching root.
Mar 12 01:22:02.057364 systemd-journald[194]: Journal stopped
Mar 12 01:22:03.674243 kernel: SELinux: policy capability network_peer_controls=1
Mar 12 01:22:03.674305 kernel: SELinux: policy capability open_perms=1
Mar 12 01:22:03.674317 kernel: SELinux: policy capability extended_socket_class=1
Mar 12 01:22:03.674327 kernel: SELinux: policy capability always_check_network=0
Mar 12 01:22:03.674341 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 12 01:22:03.674351 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 12 01:22:03.674366 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 12 01:22:03.674380 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 12 01:22:03.674390 kernel: audit: type=1403 audit(1773278522.183:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 12 01:22:03.674406 systemd[1]: Successfully loaded SELinux policy in 75.175ms.
Mar 12 01:22:03.674428 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.537ms.
Mar 12 01:22:03.674439 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 01:22:03.674451 systemd[1]: Detected virtualization kvm.
Mar 12 01:22:03.674464 systemd[1]: Detected architecture x86-64.
Mar 12 01:22:03.674475 systemd[1]: Detected first boot.
Mar 12 01:22:03.674489 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 01:22:03.674500 zram_generator::config[1052]: No configuration found.
Mar 12 01:22:03.674512 systemd[1]: Populated /etc with preset unit settings.
Mar 12 01:22:03.674523 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 12 01:22:03.674536 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 12 01:22:03.674551 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 12 01:22:03.674565 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 12 01:22:03.674575 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 12 01:22:03.674586 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 12 01:22:03.674629 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 12 01:22:03.674643 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 12 01:22:03.674688 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 12 01:22:03.674700 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 12 01:22:03.674710 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 12 01:22:03.674721 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:22:03.674735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:22:03.674746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 12 01:22:03.674757 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 12 01:22:03.674767 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 12 01:22:03.674778 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 01:22:03.674789 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 12 01:22:03.674801 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:22:03.674812 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 12 01:22:03.674822 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 12 01:22:03.674836 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 12 01:22:03.674848 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 01:22:03.674859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:22:03.674870 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 01:22:03.674881 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 01:22:03.674892 systemd[1]: Reached target swap.target - Swaps.
Mar 12 01:22:03.674902 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 01:22:03.674913 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 01:22:03.674926 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:22:03.674937 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:22:03.674948 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:22:03.674958 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 01:22:03.674969 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 01:22:03.674979 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 01:22:03.674990 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 01:22:03.675000 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:22:03.675011 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 01:22:03.675024 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 01:22:03.675035 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 01:22:03.675045 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 12 01:22:03.675057 systemd[1]: Reached target machines.target - Containers.
Mar 12 01:22:03.675067 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 12 01:22:03.675079 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 01:22:03.675089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 01:22:03.675100 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 12 01:22:03.675114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 01:22:03.675124 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 01:22:03.675135 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 01:22:03.675146 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 12 01:22:03.675157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 01:22:03.675167 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 12 01:22:03.675179 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 12 01:22:03.675189 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 12 01:22:03.675202 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 12 01:22:03.675213 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 12 01:22:03.675224 kernel: fuse: init (API version 7.39)
Mar 12 01:22:03.675235 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 01:22:03.675245 kernel: ACPI: bus type drm_connector registered
Mar 12 01:22:03.675255 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 01:22:03.675266 kernel: loop: module loaded
Mar 12 01:22:03.675276 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 12 01:22:03.675287 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 12 01:22:03.675318 systemd-journald[1137]: Collecting audit messages is disabled.
Mar 12 01:22:03.675341 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 01:22:03.675353 systemd-journald[1137]: Journal started
Mar 12 01:22:03.675371 systemd-journald[1137]: Runtime Journal (/run/log/journal/0735765f4a8847a7b45e8dfbced7ff38) is 6.0M, max 48.3M, 42.2M free.
Mar 12 01:22:03.168064 systemd[1]: Queued start job for default target multi-user.target.
Mar 12 01:22:03.189094 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 12 01:22:03.189814 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 12 01:22:03.190182 systemd[1]: systemd-journald.service: Consumed 1.435s CPU time.
Mar 12 01:22:03.686718 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 12 01:22:03.686759 systemd[1]: Stopped verity-setup.service.
Mar 12 01:22:03.695885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:22:03.701588 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 01:22:03.702745 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 12 01:22:03.705681 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 12 01:22:03.708777 systemd[1]: Mounted media.mount - External Media Directory.
Mar 12 01:22:03.711477 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 12 01:22:03.714546 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 12 01:22:03.717644 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 12 01:22:03.720488 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 12 01:22:03.724153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:22:03.728008 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 12 01:22:03.728208 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 12 01:22:03.731809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 01:22:03.732006 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 01:22:03.735467 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 01:22:03.735720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 01:22:03.738965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 01:22:03.739159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 01:22:03.742988 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 12 01:22:03.743186 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 12 01:22:03.746580 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 01:22:03.746855 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 01:22:03.750396 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:22:03.753981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 12 01:22:03.758259 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 12 01:22:03.779184 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 12 01:22:03.791015 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 12 01:22:03.794546 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 12 01:22:03.798082 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 12 01:22:03.798132 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 01:22:03.801135 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 12 01:22:03.806836 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 12 01:22:03.811751 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 12 01:22:03.815400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 01:22:03.817707 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 12 01:22:03.822798 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 12 01:22:03.824385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 01:22:03.826227 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 12 01:22:03.833834 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 01:22:03.837138 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 01:22:03.846977 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 12 01:22:03.856190 systemd-journald[1137]: Time spent on flushing to /var/log/journal/0735765f4a8847a7b45e8dfbced7ff38 is 60.952ms for 986 entries.
Mar 12 01:22:03.856190 systemd-journald[1137]: System Journal (/var/log/journal/0735765f4a8847a7b45e8dfbced7ff38) is 8.0M, max 195.6M, 187.6M free.
Mar 12 01:22:03.931771 systemd-journald[1137]: Received client request to flush runtime journal.
Mar 12 01:22:03.931806 kernel: loop0: detected capacity change from 0 to 142488
Mar 12 01:22:03.931821 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 12 01:22:03.857419 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 01:22:03.871097 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:22:03.876187 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 12 01:22:03.881099 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 12 01:22:03.886632 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 12 01:22:03.892043 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 12 01:22:03.902235 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 12 01:22:03.908312 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 12 01:22:03.908330 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 12 01:22:03.920926 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 12 01:22:03.933168 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 12 01:22:03.937398 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 12 01:22:03.943295 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:22:03.949760 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:22:03.956910 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 12 01:22:03.957783 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 12 01:22:03.975998 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 12 01:22:03.982400 kernel: loop1: detected capacity change from 0 to 140768
Mar 12 01:22:03.980454 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 12 01:22:04.009690 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 12 01:22:04.019986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 01:22:04.024183 kernel: loop2: detected capacity change from 0 to 228704
Mar 12 01:22:04.053905 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Mar 12 01:22:04.053924 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Mar 12 01:22:04.060567 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:22:04.070727 kernel: loop3: detected capacity change from 0 to 142488
Mar 12 01:22:04.096716 kernel: loop4: detected capacity change from 0 to 140768
Mar 12 01:22:04.132044 kernel: loop5: detected capacity change from 0 to 228704
Mar 12 01:22:04.148585 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 12 01:22:04.149474 (sd-merge)[1196]: Merged extensions into '/usr'.
Mar 12 01:22:04.155210 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 12 01:22:04.155413 systemd[1]: Reloading...
Mar 12 01:22:04.228698 zram_generator::config[1218]: No configuration found.
Mar 12 01:22:04.362711 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 12 01:22:04.403715 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 01:22:04.447465 systemd[1]: Reloading finished in 291 ms.
Mar 12 01:22:04.492042 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 12 01:22:04.496147 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 12 01:22:04.515015 systemd[1]: Starting ensure-sysext.service...
Mar 12 01:22:04.519764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 01:22:04.527803 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Mar 12 01:22:04.527846 systemd[1]: Reloading...
Mar 12 01:22:04.550182 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 12 01:22:04.550568 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 12 01:22:04.551734 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 12 01:22:04.552046 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 12 01:22:04.552150 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 12 01:22:04.556867 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 01:22:04.556908 systemd-tmpfiles[1260]: Skipping /boot
Mar 12 01:22:04.575473 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Mar 12 01:22:04.575493 systemd-tmpfiles[1260]: Skipping /boot
Mar 12 01:22:04.594719 zram_generator::config[1287]: No configuration found.
Mar 12 01:22:04.713060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 01:22:04.772194 systemd[1]: Reloading finished in 243 ms.
Mar 12 01:22:04.793207 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 12 01:22:04.806535 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:22:04.838058 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 12 01:22:04.844581 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 12 01:22:04.851046 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 01:22:04.859425 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 01:22:04.869757 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 01:22:04.875877 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 01:22:04.883412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:22:04.883771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 01:22:04.885734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 01:22:04.896256 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 01:22:04.901130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 01:22:04.905343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 01:22:04.908070 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 01:22:04.914977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:22:04.917092 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 01:22:04.931979 augenrules[1351]: No rules
Mar 12 01:22:04.932055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 01:22:04.932320 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 01:22:04.936524 systemd-udevd[1338]: Using default interface naming scheme 'v255'.
Mar 12 01:22:04.937336 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 12 01:22:04.942231 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 01:22:04.942492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 01:22:04.947968 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 01:22:04.952192 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 01:22:04.952393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 01:22:04.967947 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 01:22:04.973236 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 01:22:04.980239 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 12 01:22:04.990071 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:22:04.990340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 01:22:04.995927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 01:22:05.003952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 01:22:05.010978 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 01:22:05.015985 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 01:22:05.018522 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 01:22:05.031235 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 01:22:05.034225 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 01:22:05.034378 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:22:05.036181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 01:22:05.036395 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 01:22:05.040312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 01:22:05.040534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 01:22:05.044574 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 01:22:05.044876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 01:22:05.053766 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 01:22:05.077418 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:22:05.077823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 12 01:22:05.090332 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 12 01:22:05.096724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 01:22:05.103351 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 01:22:05.108789 systemd-resolved[1336]: Positive Trust Anchors:
Mar 12 01:22:05.108819 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 01:22:05.108846 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 01:22:05.109761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 01:22:05.115977 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 01:22:05.116061 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 01:22:05.116091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:22:05.117322 systemd[1]: Finished ensure-sysext.service.
Mar 12 01:22:05.119959 systemd-resolved[1336]: Defaulting to hostname 'linux'.
Mar 12 01:22:05.120472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 01:22:05.120846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 01:22:05.125469 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 01:22:05.125803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 01:22:05.137151 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1364)
Mar 12 01:22:05.135367 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 01:22:05.140400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 01:22:05.140746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 01:22:05.146637 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 01:22:05.146998 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 01:22:05.153900 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 12 01:22:05.180759 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 12 01:22:05.187006 systemd-networkd[1389]: lo: Link UP
Mar 12 01:22:05.187039 systemd-networkd[1389]: lo: Gained carrier
Mar 12 01:22:05.188014 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:22:05.188856 systemd-networkd[1389]: Enumeration completed
Mar 12 01:22:05.189843 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:22:05.189869 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 01:22:05.190956 systemd-networkd[1389]: eth0: Link UP
Mar 12 01:22:05.190980 systemd-networkd[1389]: eth0: Gained carrier
Mar 12 01:22:05.190992 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:22:05.191869 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 01:22:05.191989 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 01:22:05.201714 kernel: ACPI: button: Power Button [PWRF]
Mar 12 01:22:05.202807 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 12 01:22:05.202974 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 12 01:22:05.205750 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:22:05.206739 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 01:22:05.214760 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 12 01:22:05.215229 systemd[1]: Reached target network.target - Network.
Mar 12 01:22:05.221750 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 12 01:22:05.222044 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 12 01:22:05.224694 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 12 01:22:05.236905 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 12 01:22:05.246423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 12 01:22:05.267780 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 12 01:22:05.268531 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 12 01:22:05.320013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:22:05.325506 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 12 01:22:05.331562 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 12 01:22:06.841616 systemd-resolved[1336]: Clock change detected. Flushing caches.
Mar 12 01:22:06.841828 systemd-timesyncd[1414]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 12 01:22:06.844016 systemd-timesyncd[1414]: Initial clock synchronization to Thu 2026-03-12 01:22:06.841556 UTC.
Mar 12 01:22:06.869237 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 01:22:06.876460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:22:06.876773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:22:06.900998 kernel: mousedev: PS/2 mouse device common for all mice
Mar 12 01:22:06.946269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:22:06.966623 kernel: kvm_amd: TSC scaling supported
Mar 12 01:22:06.966709 kernel: kvm_amd: Nested Virtualization enabled
Mar 12 01:22:06.966732 kernel: kvm_amd: Nested Paging enabled
Mar 12 01:22:06.970020 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 12 01:22:06.970064 kernel: kvm_amd: PMU virtualization is disabled
Mar 12 01:22:07.024999 kernel: EDAC MC: Ver: 3.0.0
Mar 12 01:22:07.055443 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 12 01:22:07.081274 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 12 01:22:07.086680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:22:07.097744 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 12 01:22:07.131478 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 12 01:22:07.136759 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:22:07.141368 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 01:22:07.145693 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 12 01:22:07.150469 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 12 01:22:07.155555 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 12 01:22:07.160698 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 12 01:22:07.171334 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 12 01:22:07.176342 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 12 01:22:07.176405 systemd[1]: Reached target paths.target - Path Units.
Mar 12 01:22:07.180023 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 01:22:07.184611 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 12 01:22:07.191220 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 12 01:22:07.204139 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 12 01:22:07.210178 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 12 01:22:07.213780 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 12 01:22:07.217119 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 01:22:07.219702 systemd[1]: Reached target basic.target - Basic System.
Mar 12 01:22:07.222358 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 12 01:22:07.222717 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 12 01:22:07.222771 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 12 01:22:07.224261 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 12 01:22:07.228989 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 12 01:22:07.236096 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 12 01:22:07.241827 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 12 01:22:07.244776 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 12 01:22:07.248177 jq[1442]: false
Mar 12 01:22:07.248461 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 12 01:22:07.252966 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 12 01:22:07.259634 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 12 01:22:07.277103 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 12 01:22:07.279410 dbus-daemon[1441]: [system] SELinux support is enabled
Mar 12 01:22:07.283662 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 12 01:22:07.284846 extend-filesystems[1443]: Found loop3
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found loop4
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found loop5
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found sr0
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found vda
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found vda1
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found vda2
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found vda3
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found usr
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found vda4
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found vda6
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found vda7
Mar 12 01:22:07.290133 extend-filesystems[1443]: Found vda9
Mar 12 01:22:07.290133 extend-filesystems[1443]: Checking size of /dev/vda9
Mar 12 01:22:07.375767 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1375)
Mar 12 01:22:07.375801 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 12 01:22:07.287244 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 12 01:22:07.376026 extend-filesystems[1443]: Resized partition /dev/vda9
Mar 12 01:22:07.287673 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 12 01:22:07.386225 extend-filesystems[1464]: resize2fs 1.47.1 (20-May-2024)
Mar 12 01:22:07.389863 update_engine[1459]: I20260312 01:22:07.335914 1459 main.cc:92] Flatcar Update Engine starting
Mar 12 01:22:07.389863 update_engine[1459]: I20260312 01:22:07.338089 1459 update_check_scheduler.cc:74] Next update check in 11m58s
Mar 12 01:22:07.303083 systemd[1]: Starting update-engine.service - Update Engine...
Mar 12 01:22:07.310109 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 12 01:22:07.391513 jq[1462]: true
Mar 12 01:22:07.317274 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 12 01:22:07.348481 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 12 01:22:07.380005 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 12 01:22:07.380233 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 12 01:22:07.380583 systemd[1]: motdgen.service: Deactivated successfully.
Mar 12 01:22:07.380761 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 12 01:22:07.390197 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 12 01:22:07.390463 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 12 01:22:07.406050 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 12 01:22:07.424356 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 12 01:22:07.438655 jq[1468]: true
Mar 12 01:22:07.444290 extend-filesystems[1464]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 12 01:22:07.444290 extend-filesystems[1464]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 12 01:22:07.444290 extend-filesystems[1464]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 12 01:22:07.472096 extend-filesystems[1443]: Resized filesystem in /dev/vda9
Mar 12 01:22:07.445652 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 12 01:22:07.445681 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 12 01:22:07.447679 systemd-logind[1454]: New seat seat0.
Mar 12 01:22:07.449815 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 12 01:22:07.450211 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 12 01:22:07.458384 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 12 01:22:07.469318 systemd[1]: Started update-engine.service - Update Engine.
Mar 12 01:22:07.483691 tar[1467]: linux-amd64/LICENSE
Mar 12 01:22:07.484121 tar[1467]: linux-amd64/helm
Mar 12 01:22:07.486970 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 12 01:22:07.487230 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 12 01:22:07.492225 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 12 01:22:07.492444 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 12 01:22:07.505087 bash[1495]: Updated "/home/core/.ssh/authorized_keys"
Mar 12 01:22:07.526390 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 12 01:22:07.533575 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 12 01:22:07.545772 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 12 01:22:07.581474 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 12 01:22:07.751493 containerd[1469]: time="2026-03-12T01:22:07.751351650Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 12 01:22:07.786830 containerd[1469]: time="2026-03-12T01:22:07.786753588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:22:07.789556 containerd[1469]: time="2026-03-12T01:22:07.789512983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:22:07.789644 containerd[1469]: time="2026-03-12T01:22:07.789626515Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 12 01:22:07.790131 containerd[1469]: time="2026-03-12T01:22:07.790108154Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 12 01:22:07.790466 containerd[1469]: time="2026-03-12T01:22:07.790444732Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 12 01:22:07.790536 containerd[1469]: time="2026-03-12T01:22:07.790521546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 12 01:22:07.790748 containerd[1469]: time="2026-03-12T01:22:07.790726168Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:22:07.790812 containerd[1469]: time="2026-03-12T01:22:07.790797551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:22:07.791297 containerd[1469]: time="2026-03-12T01:22:07.791270044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:22:07.791384 containerd[1469]: time="2026-03-12T01:22:07.791366804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 12 01:22:07.791457 containerd[1469]: time="2026-03-12T01:22:07.791437446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:22:07.791510 containerd[1469]: time="2026-03-12T01:22:07.791496527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 12 01:22:07.791687 containerd[1469]: time="2026-03-12T01:22:07.791668057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:22:07.792171 containerd[1469]: time="2026-03-12T01:22:07.792148624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:22:07.792519 containerd[1469]: time="2026-03-12T01:22:07.792497025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:22:07.792584 containerd[1469]: time="2026-03-12T01:22:07.792569721Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 12 01:22:07.792867 containerd[1469]: time="2026-03-12T01:22:07.792801223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 12 01:22:07.793147 containerd[1469]: time="2026-03-12T01:22:07.793127853Z" level=info msg="metadata content store policy set" policy=shared
Mar 12 01:22:07.809299 containerd[1469]: time="2026-03-12T01:22:07.809249301Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 12 01:22:07.809535 containerd[1469]: time="2026-03-12T01:22:07.809514526Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 12 01:22:07.809681 containerd[1469]: time="2026-03-12T01:22:07.809666690Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 12 01:22:07.809736 containerd[1469]: time="2026-03-12T01:22:07.809724117Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 12 01:22:07.809783 containerd[1469]: time="2026-03-12T01:22:07.809772417Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 12 01:22:07.810097 containerd[1469]: time="2026-03-12T01:22:07.810080032Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 12 01:22:07.810359 containerd[1469]: time="2026-03-12T01:22:07.810342321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 12 01:22:07.810591 containerd[1469]: time="2026-03-12T01:22:07.810573313Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810633545Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810654004Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810668190Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810681875Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810694048Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810706441Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810718494Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810734013Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810752147Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810770371Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810795377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810815325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810834290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811219 containerd[1469]: time="2026-03-12T01:22:07.810853636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.810866400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.810977718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.810991955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.811004058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.811014787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.811036828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.811054291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.811070361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.811095217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.811114744Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.811143448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.811158115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.811455 containerd[1469]: time="2026-03-12T01:22:07.811168364Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 12 01:22:07.811973 containerd[1469]: time="2026-03-12T01:22:07.811764948Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 12 01:22:07.812142 containerd[1469]: time="2026-03-12T01:22:07.812080037Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 12 01:22:07.812228 containerd[1469]: time="2026-03-12T01:22:07.812208667Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 12 01:22:07.812281 containerd[1469]: time="2026-03-12T01:22:07.812268328Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 12 01:22:07.812320 containerd[1469]: time="2026-03-12T01:22:07.812309615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.812377 containerd[1469]: time="2026-03-12T01:22:07.812364929Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 12 01:22:07.812425 containerd[1469]: time="2026-03-12T01:22:07.812415343Z" level=info msg="NRI interface is disabled by configuration."
Mar 12 01:22:07.812496 containerd[1469]: time="2026-03-12T01:22:07.812475635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 12 01:22:07.813003 containerd[1469]: time="2026-03-12T01:22:07.812909155Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[]
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 12 01:22:07.814088 containerd[1469]: time="2026-03-12T01:22:07.813231246Z" level=info msg="Connect containerd service" Mar 12 01:22:07.814088 containerd[1469]: time="2026-03-12T01:22:07.813285037Z" level=info msg="using legacy CRI server" Mar 12 01:22:07.814088 containerd[1469]: time="2026-03-12T01:22:07.813295646Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 01:22:07.814088 containerd[1469]: time="2026-03-12T01:22:07.813418025Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 12 01:22:07.814561 containerd[1469]: time="2026-03-12T01:22:07.814533868Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Mar 12 01:22:07.814793 containerd[1469]: time="2026-03-12T01:22:07.814762185Z" level=info msg="Start subscribing containerd event" Mar 12 01:22:07.815111 containerd[1469]: time="2026-03-12T01:22:07.815091130Z" level=info msg="Start recovering state" Mar 12 01:22:07.815287 containerd[1469]: time="2026-03-12T01:22:07.815266236Z" level=info msg="Start event monitor" Mar 12 01:22:07.815555 containerd[1469]: time="2026-03-12T01:22:07.815491266Z" level=info msg="Start snapshots syncer" Mar 12 01:22:07.816056 containerd[1469]: time="2026-03-12T01:22:07.816034862Z" level=info msg="Start cni network conf syncer for default" Mar 12 01:22:07.816283 containerd[1469]: time="2026-03-12T01:22:07.815858392Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 01:22:07.816524 containerd[1469]: time="2026-03-12T01:22:07.816427174Z" level=info msg="Start streaming server" Mar 12 01:22:07.816797 containerd[1469]: time="2026-03-12T01:22:07.816780414Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 01:22:07.817041 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 01:22:07.822689 containerd[1469]: time="2026-03-12T01:22:07.822665393Z" level=info msg="containerd successfully booted in 0.072393s" Mar 12 01:22:07.952826 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 01:22:07.992023 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 01:22:08.008373 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 01:22:08.018554 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 01:22:08.019061 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 01:22:08.026085 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 01:22:08.059192 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Mar 12 01:22:08.086308 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 01:22:08.091715 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 01:22:08.096254 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 01:22:08.135571 tar[1467]: linux-amd64/README.md Mar 12 01:22:08.151599 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 01:22:08.301467 systemd-networkd[1389]: eth0: Gained IPv6LL Mar 12 01:22:08.305413 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 01:22:08.310289 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 01:22:08.328469 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 12 01:22:08.334659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:22:08.340583 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 01:22:08.383508 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 01:22:08.388337 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 12 01:22:08.388644 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 12 01:22:08.394202 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 01:22:09.441226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:22:09.445806 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 01:22:09.449312 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:22:09.450069 systemd[1]: Startup finished in 1.358s (kernel) + 7.330s (initrd) + 5.835s (userspace) = 14.524s. 
Mar 12 01:22:10.074099 kubelet[1555]: E0312 01:22:10.073995 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:22:10.078518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:22:10.078839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:22:10.079396 systemd[1]: kubelet.service: Consumed 1.395s CPU time. Mar 12 01:22:10.248739 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 01:22:10.263854 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:58464.service - OpenSSH per-connection server daemon (10.0.0.1:58464). Mar 12 01:22:10.318036 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 58464 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:22:10.320374 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:22:10.329869 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 01:22:10.341341 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 01:22:10.343341 systemd-logind[1454]: New session 1 of user core. Mar 12 01:22:10.356675 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 01:22:10.372344 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 01:22:10.376301 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 01:22:10.497691 systemd[1572]: Queued start job for default target default.target. Mar 12 01:22:10.510548 systemd[1572]: Created slice app.slice - User Application Slice. Mar 12 01:22:10.510617 systemd[1572]: Reached target paths.target - Paths. 
Mar 12 01:22:10.510639 systemd[1572]: Reached target timers.target - Timers. Mar 12 01:22:10.512585 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 01:22:10.528123 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 01:22:10.528335 systemd[1572]: Reached target sockets.target - Sockets. Mar 12 01:22:10.528396 systemd[1572]: Reached target basic.target - Basic System. Mar 12 01:22:10.528485 systemd[1572]: Reached target default.target - Main User Target. Mar 12 01:22:10.528544 systemd[1572]: Startup finished in 143ms. Mar 12 01:22:10.528657 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 01:22:10.530775 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 01:22:10.605842 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:58476.service - OpenSSH per-connection server daemon (10.0.0.1:58476). Mar 12 01:22:10.645265 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 58476 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:22:10.647416 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:22:10.653117 systemd-logind[1454]: New session 2 of user core. Mar 12 01:22:10.673219 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 01:22:10.733559 sshd[1583]: pam_unix(sshd:session): session closed for user core Mar 12 01:22:10.749105 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:58476.service: Deactivated successfully. Mar 12 01:22:10.750691 systemd[1]: session-2.scope: Deactivated successfully. Mar 12 01:22:10.752701 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Mar 12 01:22:10.771455 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:58490.service - OpenSSH per-connection server daemon (10.0.0.1:58490). Mar 12 01:22:10.773105 systemd-logind[1454]: Removed session 2. 
Mar 12 01:22:10.807129 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 58490 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:22:10.809298 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:22:10.815254 systemd-logind[1454]: New session 3 of user core. Mar 12 01:22:10.825176 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 01:22:10.878381 sshd[1590]: pam_unix(sshd:session): session closed for user core Mar 12 01:22:10.893249 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:58490.service: Deactivated successfully. Mar 12 01:22:10.895461 systemd[1]: session-3.scope: Deactivated successfully. Mar 12 01:22:10.897717 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Mar 12 01:22:10.918969 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:58502.service - OpenSSH per-connection server daemon (10.0.0.1:58502). Mar 12 01:22:10.920548 systemd-logind[1454]: Removed session 3. Mar 12 01:22:10.959286 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 58502 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:22:10.961771 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:22:10.972006 systemd-logind[1454]: New session 4 of user core. Mar 12 01:22:10.983269 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 01:22:11.043268 sshd[1597]: pam_unix(sshd:session): session closed for user core Mar 12 01:22:11.058711 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:58502.service: Deactivated successfully. Mar 12 01:22:11.061238 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 01:22:11.066060 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Mar 12 01:22:11.068258 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:58510.service - OpenSSH per-connection server daemon (10.0.0.1:58510). Mar 12 01:22:11.069413 systemd-logind[1454]: Removed session 4. 
Mar 12 01:22:11.108276 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 58510 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:22:11.109846 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:22:11.115486 systemd-logind[1454]: New session 5 of user core. Mar 12 01:22:11.130120 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 01:22:11.199588 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 01:22:11.200055 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:22:11.227914 sudo[1607]: pam_unix(sudo:session): session closed for user root Mar 12 01:22:11.230398 sshd[1604]: pam_unix(sshd:session): session closed for user core Mar 12 01:22:11.249333 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:58510.service: Deactivated successfully. Mar 12 01:22:11.252001 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 01:22:11.254370 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Mar 12 01:22:11.257194 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:58524.service - OpenSSH per-connection server daemon (10.0.0.1:58524). Mar 12 01:22:11.258422 systemd-logind[1454]: Removed session 5. Mar 12 01:22:11.818683 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 58524 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:22:11.820823 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:22:11.829608 systemd-logind[1454]: New session 6 of user core. Mar 12 01:22:11.844312 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 12 01:22:11.916531 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 01:22:11.917032 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:22:11.921730 sudo[1616]: pam_unix(sudo:session): session closed for user root Mar 12 01:22:11.928837 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 12 01:22:11.929350 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:22:11.949282 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 12 01:22:11.952104 auditctl[1619]: No rules Mar 12 01:22:11.952625 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 01:22:11.953037 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 12 01:22:11.956271 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:22:12.012854 augenrules[1637]: No rules Mar 12 01:22:12.014311 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:22:12.015575 sudo[1615]: pam_unix(sudo:session): session closed for user root Mar 12 01:22:12.018129 sshd[1612]: pam_unix(sshd:session): session closed for user core Mar 12 01:22:12.035169 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:58524.service: Deactivated successfully. Mar 12 01:22:12.038105 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 01:22:12.040504 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Mar 12 01:22:12.061588 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:54872.service - OpenSSH per-connection server daemon (10.0.0.1:54872). Mar 12 01:22:12.063746 systemd-logind[1454]: Removed session 6. 
Mar 12 01:22:12.113612 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 54872 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:22:12.115492 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:22:12.120317 systemd-logind[1454]: New session 7 of user core. Mar 12 01:22:12.130155 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 12 01:22:12.187082 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 01:22:12.187454 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:22:13.633313 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 12 01:22:13.635394 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 01:22:15.350962 dockerd[1666]: time="2026-03-12T01:22:15.350603712Z" level=info msg="Starting up" Mar 12 01:22:15.924234 dockerd[1666]: time="2026-03-12T01:22:15.924101743Z" level=info msg="Loading containers: start." Mar 12 01:22:16.275006 kernel: Initializing XFRM netlink socket Mar 12 01:22:16.471547 systemd-networkd[1389]: docker0: Link UP Mar 12 01:22:16.520417 dockerd[1666]: time="2026-03-12T01:22:16.520224444Z" level=info msg="Loading containers: done." Mar 12 01:22:16.579273 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck948386780-merged.mount: Deactivated successfully. 
Mar 12 01:22:16.581525 dockerd[1666]: time="2026-03-12T01:22:16.581399319Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 01:22:16.581901 dockerd[1666]: time="2026-03-12T01:22:16.581816358Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 12 01:22:16.582295 dockerd[1666]: time="2026-03-12T01:22:16.582165250Z" level=info msg="Daemon has completed initialization" Mar 12 01:22:16.640516 dockerd[1666]: time="2026-03-12T01:22:16.640238234Z" level=info msg="API listen on /run/docker.sock" Mar 12 01:22:16.640616 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 12 01:22:17.985295 containerd[1469]: time="2026-03-12T01:22:17.985227543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 12 01:22:18.591866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3357836224.mount: Deactivated successfully. 
Mar 12 01:22:19.825316 containerd[1469]: time="2026-03-12T01:22:19.825248716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:19.826245 containerd[1469]: time="2026-03-12T01:22:19.826202574Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 12 01:22:19.827319 containerd[1469]: time="2026-03-12T01:22:19.827270308Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:19.830175 containerd[1469]: time="2026-03-12T01:22:19.830127581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:19.831127 containerd[1469]: time="2026-03-12T01:22:19.831087479Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.845818739s" Mar 12 01:22:19.831185 containerd[1469]: time="2026-03-12T01:22:19.831130550Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 12 01:22:19.832122 containerd[1469]: time="2026-03-12T01:22:19.831785698Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 12 01:22:20.329219 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 12 01:22:20.340272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:22:20.553489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:22:20.558331 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:22:20.692981 kubelet[1880]: E0312 01:22:20.692655 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:22:20.702234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:22:20.702453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:22:21.662537 containerd[1469]: time="2026-03-12T01:22:21.662340425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:21.663594 containerd[1469]: time="2026-03-12T01:22:21.663455826Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 12 01:22:21.664852 containerd[1469]: time="2026-03-12T01:22:21.664772377Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:21.668477 containerd[1469]: time="2026-03-12T01:22:21.668437471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:21.670156 containerd[1469]: time="2026-03-12T01:22:21.669972730Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.837989614s" Mar 12 01:22:21.670156 containerd[1469]: time="2026-03-12T01:22:21.670004970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 12 01:22:21.673685 containerd[1469]: time="2026-03-12T01:22:21.673530413Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 12 01:22:23.539571 containerd[1469]: time="2026-03-12T01:22:23.539264684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:23.540784 containerd[1469]: time="2026-03-12T01:22:23.540610937Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 12 01:22:23.542438 containerd[1469]: time="2026-03-12T01:22:23.542301164Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:23.545756 containerd[1469]: time="2026-03-12T01:22:23.545663905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:23.547006 containerd[1469]: time="2026-03-12T01:22:23.546844914Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id 
\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.873287951s" Mar 12 01:22:23.547006 containerd[1469]: time="2026-03-12T01:22:23.546998891Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 12 01:22:23.548713 containerd[1469]: time="2026-03-12T01:22:23.548665145Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 12 01:22:25.622356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025361476.mount: Deactivated successfully. Mar 12 01:22:26.396443 containerd[1469]: time="2026-03-12T01:22:26.396354774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:26.397279 containerd[1469]: time="2026-03-12T01:22:26.397203504Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 12 01:22:26.398392 containerd[1469]: time="2026-03-12T01:22:26.398300627Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:26.401680 containerd[1469]: time="2026-03-12T01:22:26.401614618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:26.402291 containerd[1469]: time="2026-03-12T01:22:26.402236638Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.853518633s" Mar 12 01:22:26.402291 containerd[1469]: time="2026-03-12T01:22:26.402285539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 12 01:22:26.403789 containerd[1469]: time="2026-03-12T01:22:26.403622946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 12 01:22:26.880477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641197960.mount: Deactivated successfully. Mar 12 01:22:27.980351 containerd[1469]: time="2026-03-12T01:22:27.980246817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:27.981355 containerd[1469]: time="2026-03-12T01:22:27.981238744Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 12 01:22:27.982517 containerd[1469]: time="2026-03-12T01:22:27.982454643Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:27.985821 containerd[1469]: time="2026-03-12T01:22:27.985757330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:27.987091 containerd[1469]: time="2026-03-12T01:22:27.986867783Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.583217786s" Mar 12 01:22:27.987091 containerd[1469]: time="2026-03-12T01:22:27.986977989Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 12 01:22:27.987675 containerd[1469]: time="2026-03-12T01:22:27.987615491Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 12 01:22:28.412449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211605564.mount: Deactivated successfully. Mar 12 01:22:28.420118 containerd[1469]: time="2026-03-12T01:22:28.419871028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:28.420913 containerd[1469]: time="2026-03-12T01:22:28.420801738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 12 01:22:28.422693 containerd[1469]: time="2026-03-12T01:22:28.422646102Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:28.427025 containerd[1469]: time="2026-03-12T01:22:28.426984399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:28.428443 containerd[1469]: time="2026-03-12T01:22:28.428377872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 440.704832ms" Mar 12 
01:22:28.428443 containerd[1469]: time="2026-03-12T01:22:28.428434217Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 12 01:22:28.429567 containerd[1469]: time="2026-03-12T01:22:28.429306873Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 12 01:22:28.911281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount497165260.mount: Deactivated successfully. Mar 12 01:22:29.811388 containerd[1469]: time="2026-03-12T01:22:29.811295305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:29.812227 containerd[1469]: time="2026-03-12T01:22:29.812084624Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 12 01:22:29.813629 containerd[1469]: time="2026-03-12T01:22:29.813538449Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:29.817497 containerd[1469]: time="2026-03-12T01:22:29.817445665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:22:29.818595 containerd[1469]: time="2026-03-12T01:22:29.818553077Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.38919544s" Mar 12 01:22:29.818595 containerd[1469]: time="2026-03-12T01:22:29.818593593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image 
reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 12 01:22:30.953038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 01:22:30.966218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:22:31.128869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:22:31.134128 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:22:31.178972 kubelet[2057]: E0312 01:22:31.178799 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:22:31.182719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:22:31.183011 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:22:31.848312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:22:31.865216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:22:31.893405 systemd[1]: Reloading requested from client PID 2074 ('systemctl') (unit session-7.scope)... Mar 12 01:22:31.893439 systemd[1]: Reloading... Mar 12 01:22:32.041000 zram_generator::config[2116]: No configuration found. Mar 12 01:22:32.203824 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:22:32.301083 systemd[1]: Reloading finished in 407 ms. 
Mar 12 01:22:32.357357 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 12 01:22:32.357507 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 12 01:22:32.357869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:22:32.361559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:22:32.643270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:22:32.648617 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:22:33.018068 kubelet[2162]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:22:33.018068 kubelet[2162]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 01:22:33.018068 kubelet[2162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 12 01:22:33.018782 kubelet[2162]: I0312 01:22:33.018664 2162 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:22:33.806559 kubelet[2162]: I0312 01:22:33.806372 2162 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 01:22:33.806559 kubelet[2162]: I0312 01:22:33.806427 2162 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:22:33.806834 kubelet[2162]: I0312 01:22:33.806736 2162 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:22:33.864708 kubelet[2162]: I0312 01:22:33.864652 2162 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:22:33.866220 kubelet[2162]: E0312 01:22:33.866148 2162 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:22:33.876540 kubelet[2162]: E0312 01:22:33.876457 2162 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:22:33.876540 kubelet[2162]: I0312 01:22:33.876522 2162 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 12 01:22:33.883985 kubelet[2162]: I0312 01:22:33.883872 2162 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 01:22:33.884316 kubelet[2162]: I0312 01:22:33.884223 2162 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:22:33.884499 kubelet[2162]: I0312 01:22:33.884293 2162 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:22:33.884499 kubelet[2162]: I0312 01:22:33.884480 2162 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 01:22:33.884499 
kubelet[2162]: I0312 01:22:33.884490 2162 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 01:22:33.884762 kubelet[2162]: I0312 01:22:33.884724 2162 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:22:33.890966 kubelet[2162]: I0312 01:22:33.890804 2162 kubelet.go:480] "Attempting to sync node with API server" Mar 12 01:22:33.890966 kubelet[2162]: I0312 01:22:33.890836 2162 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:22:33.890966 kubelet[2162]: I0312 01:22:33.890863 2162 kubelet.go:386] "Adding apiserver pod source" Mar 12 01:22:33.890966 kubelet[2162]: I0312 01:22:33.890955 2162 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:22:33.899276 kubelet[2162]: I0312 01:22:33.897480 2162 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:22:33.902412 kubelet[2162]: I0312 01:22:33.898468 2162 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:22:33.905348 kubelet[2162]: E0312 01:22:33.905279 2162 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 01:22:33.905513 kubelet[2162]: E0312 01:22:33.905452 2162 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 01:22:33.905650 kubelet[2162]: W0312 
01:22:33.905542 2162 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 12 01:22:33.913173 kubelet[2162]: I0312 01:22:33.913113 2162 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 01:22:33.913306 kubelet[2162]: I0312 01:22:33.913201 2162 server.go:1289] "Started kubelet" Mar 12 01:22:33.915396 kubelet[2162]: I0312 01:22:33.914628 2162 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:22:33.915396 kubelet[2162]: I0312 01:22:33.915380 2162 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:22:33.915550 kubelet[2162]: I0312 01:22:33.915443 2162 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 01:22:33.915550 kubelet[2162]: I0312 01:22:33.915444 2162 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:22:33.918991 kubelet[2162]: I0312 01:22:33.918821 2162 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:22:33.919622 kubelet[2162]: E0312 01:22:33.917242 2162 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf37315cfec1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:22:33.913158683 +0000 UTC m=+1.260043766,LastTimestamp:2026-03-12 01:22:33.913158683 +0000 UTC m=+1.260043766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 01:22:33.920528 
kubelet[2162]: I0312 01:22:33.920287 2162 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:22:33.920598 kubelet[2162]: E0312 01:22:33.920539 2162 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:22:33.921164 kubelet[2162]: E0312 01:22:33.921056 2162 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:22:33.921164 kubelet[2162]: I0312 01:22:33.921159 2162 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 01:22:33.922003 kubelet[2162]: I0312 01:22:33.921346 2162 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 01:22:33.922003 kubelet[2162]: I0312 01:22:33.921415 2162 reconciler.go:26] "Reconciler: start to sync state" Mar 12 01:22:33.922003 kubelet[2162]: E0312 01:22:33.921812 2162 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:22:33.922636 kubelet[2162]: E0312 01:22:33.922546 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms" Mar 12 01:22:33.926007 kubelet[2162]: I0312 01:22:33.925301 2162 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:22:33.926007 kubelet[2162]: I0312 01:22:33.925333 2162 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:22:33.926007 kubelet[2162]: I0312 
01:22:33.925483 2162 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:22:33.948750 kubelet[2162]: I0312 01:22:33.948673 2162 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 01:22:33.952060 kubelet[2162]: I0312 01:22:33.951815 2162 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 12 01:22:33.952060 kubelet[2162]: I0312 01:22:33.951975 2162 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 01:22:33.952060 kubelet[2162]: I0312 01:22:33.951996 2162 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 01:22:33.952060 kubelet[2162]: I0312 01:22:33.952003 2162 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 01:22:33.952357 kubelet[2162]: E0312 01:22:33.952043 2162 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:22:33.953545 kubelet[2162]: E0312 01:22:33.953327 2162 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:22:33.956121 kubelet[2162]: I0312 01:22:33.956097 2162 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 01:22:33.956470 kubelet[2162]: I0312 01:22:33.956230 2162 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 01:22:33.956591 kubelet[2162]: I0312 01:22:33.956575 2162 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:22:34.021475 kubelet[2162]: E0312 01:22:34.021337 2162 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:22:34.048792 kubelet[2162]: I0312 01:22:34.048649 2162 policy_none.go:49] "None policy: Start" Mar 12 01:22:34.048792 kubelet[2162]: I0312 01:22:34.048735 2162 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 01:22:34.048792 kubelet[2162]: I0312 01:22:34.048760 2162 state_mem.go:35] "Initializing new in-memory state store" Mar 12 01:22:34.052834 kubelet[2162]: E0312 01:22:34.052720 2162 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 01:22:34.060759 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 01:22:34.088256 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 12 01:22:34.091732 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 12 01:22:34.103249 kubelet[2162]: E0312 01:22:34.103145 2162 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:22:34.103368 kubelet[2162]: I0312 01:22:34.103354 2162 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 01:22:34.103429 kubelet[2162]: I0312 01:22:34.103367 2162 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:22:34.103660 kubelet[2162]: I0312 01:22:34.103616 2162 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 01:22:34.105512 kubelet[2162]: E0312 01:22:34.105479 2162 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 01:22:34.105569 kubelet[2162]: E0312 01:22:34.105524 2162 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 12 01:22:34.124392 kubelet[2162]: E0312 01:22:34.124306 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" Mar 12 01:22:34.352419 kubelet[2162]: I0312 01:22:34.213613 2162 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:22:34.352419 kubelet[2162]: E0312 01:22:34.352184 2162 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Mar 12 01:22:34.373368 systemd[1]: Created slice kubepods-burstable-pod61686d6b3eb6402066dce161029dcca0.slice - libcontainer container kubepods-burstable-pod61686d6b3eb6402066dce161029dcca0.slice. Mar 12 01:22:34.406182 kubelet[2162]: E0312 01:22:34.406082 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:22:34.409182 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 12 01:22:34.411324 kubelet[2162]: E0312 01:22:34.411283 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:22:34.420216 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. 
Mar 12 01:22:34.424247 kubelet[2162]: E0312 01:22:34.424161 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:22:34.451571 kubelet[2162]: I0312 01:22:34.451462 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61686d6b3eb6402066dce161029dcca0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"61686d6b3eb6402066dce161029dcca0\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:22:34.451571 kubelet[2162]: I0312 01:22:34.451504 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61686d6b3eb6402066dce161029dcca0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"61686d6b3eb6402066dce161029dcca0\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:22:34.451571 kubelet[2162]: I0312 01:22:34.451551 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:22:34.451571 kubelet[2162]: I0312 01:22:34.451565 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:22:34.451571 kubelet[2162]: I0312 01:22:34.451582 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:22:34.451790 kubelet[2162]: I0312 01:22:34.451594 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61686d6b3eb6402066dce161029dcca0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"61686d6b3eb6402066dce161029dcca0\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:22:34.451790 kubelet[2162]: I0312 01:22:34.451622 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:22:34.451790 kubelet[2162]: I0312 01:22:34.451659 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:22:34.451790 kubelet[2162]: I0312 01:22:34.451682 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:22:34.526220 kubelet[2162]: E0312 01:22:34.526095 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="800ms" Mar 12 01:22:34.554236 kubelet[2162]: I0312 01:22:34.554121 2162 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:22:34.554584 kubelet[2162]: E0312 01:22:34.554539 2162 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Mar 12 01:22:34.708489 kubelet[2162]: E0312 01:22:34.708202 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:34.709699 containerd[1469]: time="2026-03-12T01:22:34.709598729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:61686d6b3eb6402066dce161029dcca0,Namespace:kube-system,Attempt:0,}" Mar 12 01:22:34.712238 kubelet[2162]: E0312 01:22:34.712132 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:34.712764 containerd[1469]: time="2026-03-12T01:22:34.712712783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 12 01:22:34.729206 kubelet[2162]: E0312 01:22:34.729049 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:34.730392 containerd[1469]: time="2026-03-12T01:22:34.730334857Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 12 01:22:34.866215 kubelet[2162]: E0312 01:22:34.866062 2162 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 01:22:34.957777 kubelet[2162]: I0312 01:22:34.957708 2162 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:22:34.960107 kubelet[2162]: E0312 01:22:34.959256 2162 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Mar 12 01:22:35.023788 kubelet[2162]: E0312 01:22:35.023711 2162 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:22:35.088828 kubelet[2162]: E0312 01:22:35.088740 2162 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:22:35.221778 kubelet[2162]: E0312 01:22:35.221604 2162 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 01:22:35.327696 kubelet[2162]: E0312 01:22:35.327549 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="1.6s"
Mar 12 01:22:35.387604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215734611.mount: Deactivated successfully.
Mar 12 01:22:35.395492 containerd[1469]: time="2026-03-12T01:22:35.395393522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:22:35.399493 containerd[1469]: time="2026-03-12T01:22:35.399377685Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 12 01:22:35.400553 containerd[1469]: time="2026-03-12T01:22:35.400447353Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:22:35.401698 containerd[1469]: time="2026-03-12T01:22:35.401603347Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:22:35.402610 containerd[1469]: time="2026-03-12T01:22:35.402574615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 12 01:22:35.403904 containerd[1469]: time="2026-03-12T01:22:35.403829649Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:22:35.404959 containerd[1469]: time="2026-03-12T01:22:35.404865643Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 12 01:22:35.407825 containerd[1469]: time="2026-03-12T01:22:35.407755680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:22:35.409170 containerd[1469]: time="2026-03-12T01:22:35.409097381Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 696.294289ms"
Mar 12 01:22:35.412597 containerd[1469]: time="2026-03-12T01:22:35.412337924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 702.667ms"
Mar 12 01:22:35.414785 containerd[1469]: time="2026-03-12T01:22:35.414721660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 684.294982ms"
Mar 12 01:22:35.763044 kubelet[2162]: I0312 01:22:35.762984 2162 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 12 01:22:35.763586 kubelet[2162]: E0312 01:22:35.763525 2162 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost"
Mar 12 01:22:35.909464 kernel: hrtimer: interrupt took 11952720 ns
Mar 12 01:22:35.971200 containerd[1469]: time="2026-03-12T01:22:35.969157497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:22:35.971200 containerd[1469]: time="2026-03-12T01:22:35.971032308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:22:35.971200 containerd[1469]: time="2026-03-12T01:22:35.971052636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:22:35.996839 containerd[1469]: time="2026-03-12T01:22:35.995794307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:22:35.998263 containerd[1469]: time="2026-03-12T01:22:35.997611079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:22:35.998263 containerd[1469]: time="2026-03-12T01:22:35.997661593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:22:35.998263 containerd[1469]: time="2026-03-12T01:22:35.997838734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:22:36.204069 containerd[1469]: time="2026-03-12T01:22:36.203057890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:22:36.204069 containerd[1469]: time="2026-03-12T01:22:36.203663249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:22:36.204069 containerd[1469]: time="2026-03-12T01:22:36.203696372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:22:36.207368 containerd[1469]: time="2026-03-12T01:22:36.204859907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:22:36.207368 containerd[1469]: time="2026-03-12T01:22:36.204691850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:22:36.208748 kubelet[2162]: E0312 01:22:36.208641 2162 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 12 01:22:36.444015 systemd[1]: Started cri-containerd-b63eac5b57b52af22d11f530a503168e919f6578af1b94e107df31f368e8793c.scope - libcontainer container b63eac5b57b52af22d11f530a503168e919f6578af1b94e107df31f368e8793c.
Mar 12 01:22:36.471524 systemd[1]: Started cri-containerd-53419f3e9947b3cc54b2e76cf303f1fd9c53893d375e9f8d377c41e9ee5e9d6c.scope - libcontainer container 53419f3e9947b3cc54b2e76cf303f1fd9c53893d375e9f8d377c41e9ee5e9d6c.
Mar 12 01:22:36.475164 systemd[1]: Started cri-containerd-d3e028753f4943cc82120f36f0d06baa6b2dc534bdff96e1a9a4a44529f6f006.scope - libcontainer container d3e028753f4943cc82120f36f0d06baa6b2dc534bdff96e1a9a4a44529f6f006.
Mar 12 01:22:36.611720 kubelet[2162]: E0312 01:22:36.609426 2162 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 12 01:22:36.637838 containerd[1469]: time="2026-03-12T01:22:36.637791045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:61686d6b3eb6402066dce161029dcca0,Namespace:kube-system,Attempt:0,} returns sandbox id \"53419f3e9947b3cc54b2e76cf303f1fd9c53893d375e9f8d377c41e9ee5e9d6c\""
Mar 12 01:22:36.638397 containerd[1469]: time="2026-03-12T01:22:36.638322644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"b63eac5b57b52af22d11f530a503168e919f6578af1b94e107df31f368e8793c\""
Mar 12 01:22:36.639254 containerd[1469]: time="2026-03-12T01:22:36.639201425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3e028753f4943cc82120f36f0d06baa6b2dc534bdff96e1a9a4a44529f6f006\""
Mar 12 01:22:36.639360 kubelet[2162]: E0312 01:22:36.639241 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:36.640260 kubelet[2162]: E0312 01:22:36.640229 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:36.641464 kubelet[2162]: E0312 01:22:36.641432 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:36.649529 containerd[1469]: time="2026-03-12T01:22:36.649493857Z" level=info msg="CreateContainer within sandbox \"53419f3e9947b3cc54b2e76cf303f1fd9c53893d375e9f8d377c41e9ee5e9d6c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 12 01:22:36.653287 containerd[1469]: time="2026-03-12T01:22:36.653243306Z" level=info msg="CreateContainer within sandbox \"b63eac5b57b52af22d11f530a503168e919f6578af1b94e107df31f368e8793c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 12 01:22:36.657128 containerd[1469]: time="2026-03-12T01:22:36.657102228Z" level=info msg="CreateContainer within sandbox \"d3e028753f4943cc82120f36f0d06baa6b2dc534bdff96e1a9a4a44529f6f006\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 12 01:22:36.680567 containerd[1469]: time="2026-03-12T01:22:36.680324189Z" level=info msg="CreateContainer within sandbox \"53419f3e9947b3cc54b2e76cf303f1fd9c53893d375e9f8d377c41e9ee5e9d6c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e6825e6d5af7a41a472807c50f761cadca377e2c01eee6707082059d0b9bf28\""
Mar 12 01:22:36.681638 containerd[1469]: time="2026-03-12T01:22:36.681590711Z" level=info msg="StartContainer for \"9e6825e6d5af7a41a472807c50f761cadca377e2c01eee6707082059d0b9bf28\""
Mar 12 01:22:36.833708 containerd[1469]: time="2026-03-12T01:22:36.833387053Z" level=info msg="CreateContainer within sandbox \"b63eac5b57b52af22d11f530a503168e919f6578af1b94e107df31f368e8793c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"29d813c2191f8da10191d2559d1d6620e434e798244d95aaa6c9621107782128\""
Mar 12 01:22:36.834178 containerd[1469]: time="2026-03-12T01:22:36.834134208Z" level=info msg="CreateContainer within sandbox \"d3e028753f4943cc82120f36f0d06baa6b2dc534bdff96e1a9a4a44529f6f006\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a74d053e931a43430eb9f98c129af9cc92546b9b958d3cfbe26258c39b95e842\""
Mar 12 01:22:36.847377 containerd[1469]: time="2026-03-12T01:22:36.847170771Z" level=info msg="StartContainer for \"29d813c2191f8da10191d2559d1d6620e434e798244d95aaa6c9621107782128\""
Mar 12 01:22:36.848167 containerd[1469]: time="2026-03-12T01:22:36.848106764Z" level=info msg="StartContainer for \"a74d053e931a43430eb9f98c129af9cc92546b9b958d3cfbe26258c39b95e842\""
Mar 12 01:22:36.882111 systemd[1]: Started cri-containerd-9e6825e6d5af7a41a472807c50f761cadca377e2c01eee6707082059d0b9bf28.scope - libcontainer container 9e6825e6d5af7a41a472807c50f761cadca377e2c01eee6707082059d0b9bf28.
Mar 12 01:22:36.910118 systemd[1]: Started cri-containerd-a74d053e931a43430eb9f98c129af9cc92546b9b958d3cfbe26258c39b95e842.scope - libcontainer container a74d053e931a43430eb9f98c129af9cc92546b9b958d3cfbe26258c39b95e842.
Mar 12 01:22:36.997238 kubelet[2162]: E0312 01:22:36.997059 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="3.2s"
Mar 12 01:22:37.018483 systemd[1]: Started cri-containerd-29d813c2191f8da10191d2559d1d6620e434e798244d95aaa6c9621107782128.scope - libcontainer container 29d813c2191f8da10191d2559d1d6620e434e798244d95aaa6c9621107782128.
Mar 12 01:22:37.078726 containerd[1469]: time="2026-03-12T01:22:37.077550812Z" level=info msg="StartContainer for \"9e6825e6d5af7a41a472807c50f761cadca377e2c01eee6707082059d0b9bf28\" returns successfully"
Mar 12 01:22:37.285644 containerd[1469]: time="2026-03-12T01:22:37.098396988Z" level=info msg="StartContainer for \"a74d053e931a43430eb9f98c129af9cc92546b9b958d3cfbe26258c39b95e842\" returns successfully"
Mar 12 01:22:37.288588 kubelet[2162]: E0312 01:22:37.288467 2162 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 01:22:37.306281 containerd[1469]: time="2026-03-12T01:22:37.306209859Z" level=info msg="StartContainer for \"29d813c2191f8da10191d2559d1d6620e434e798244d95aaa6c9621107782128\" returns successfully"
Mar 12 01:22:37.373692 kubelet[2162]: I0312 01:22:37.373631 2162 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 12 01:22:37.588984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569626280.mount: Deactivated successfully.
Mar 12 01:22:38.045161 kubelet[2162]: E0312 01:22:38.045021 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:22:38.045276 kubelet[2162]: E0312 01:22:38.045259 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:38.049981 kubelet[2162]: E0312 01:22:38.049473 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:22:38.049981 kubelet[2162]: E0312 01:22:38.049679 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:38.054386 kubelet[2162]: E0312 01:22:38.054341 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:22:38.054634 kubelet[2162]: E0312 01:22:38.054581 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:39.058104 kubelet[2162]: E0312 01:22:39.057738 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:22:39.062522 kubelet[2162]: E0312 01:22:39.060098 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:22:39.062522 kubelet[2162]: E0312 01:22:39.060219 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:39.062522 kubelet[2162]: E0312 01:22:39.061442 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:39.063051 kubelet[2162]: E0312 01:22:39.062806 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:22:39.069472 kubelet[2162]: E0312 01:22:39.069210 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:40.204478 kubelet[2162]: E0312 01:22:40.204445 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:22:40.205542 kubelet[2162]: E0312 01:22:40.205378 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:40.205873 kubelet[2162]: E0312 01:22:40.205823 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:22:40.206124 kubelet[2162]: E0312 01:22:40.206073 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:41.457405 kubelet[2162]: E0312 01:22:41.457033 2162 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:22:41.457405 kubelet[2162]: E0312 01:22:41.457308 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:42.023114 kubelet[2162]: E0312 01:22:42.022908 2162 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 12 01:22:42.125415 kubelet[2162]: I0312 01:22:42.125327 2162 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 12 01:22:42.125595 kubelet[2162]: E0312 01:22:42.125581 2162 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 12 01:22:42.147566 kubelet[2162]: E0312 01:22:42.147504 2162 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 12 01:22:42.248620 kubelet[2162]: E0312 01:22:42.248503 2162 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 12 01:22:42.350608 kubelet[2162]: E0312 01:22:42.349798 2162 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 12 01:22:42.423268 kubelet[2162]: I0312 01:22:42.422676 2162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 12 01:22:42.474413 kubelet[2162]: E0312 01:22:42.474183 2162 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 12 01:22:42.474413 kubelet[2162]: I0312 01:22:42.474287 2162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:22:42.506229 kubelet[2162]: E0312 01:22:42.506021 2162 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:22:42.506229 kubelet[2162]: I0312 01:22:42.506130 2162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 12 01:22:42.518083 kubelet[2162]: E0312 01:22:42.517805 2162 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 12 01:22:42.921181 kubelet[2162]: I0312 01:22:42.920899 2162 apiserver.go:52] "Watching apiserver"
Mar 12 01:22:43.022121 kubelet[2162]: I0312 01:22:43.021989 2162 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 12 01:22:44.473673 kubelet[2162]: I0312 01:22:44.473591 2162 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:22:44.487731 kubelet[2162]: E0312 01:22:44.487526 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:44.823302 systemd[1]: Reloading requested from client PID 2454 ('systemctl') (unit session-7.scope)...
Mar 12 01:22:44.823346 systemd[1]: Reloading...
Mar 12 01:22:44.974820 zram_generator::config[2490]: No configuration found.
Mar 12 01:22:45.158451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 01:22:45.218065 kubelet[2162]: E0312 01:22:45.217907 2162 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:45.315752 systemd[1]: Reloading finished in 491 ms.
Mar 12 01:22:45.393483 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:22:45.407371 systemd[1]: kubelet.service: Deactivated successfully.
Mar 12 01:22:45.407647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:22:45.407705 systemd[1]: kubelet.service: Consumed 4.117s CPU time, 134.3M memory peak, 0B memory swap peak.
Mar 12 01:22:45.422254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:22:45.614777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:22:45.620343 (kubelet)[2538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 01:22:45.693594 kubelet[2538]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 01:22:45.693594 kubelet[2538]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 12 01:22:45.693594 kubelet[2538]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 01:22:45.693594 kubelet[2538]: I0312 01:22:45.693573 2538 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 01:22:45.701857 kubelet[2538]: I0312 01:22:45.701804 2538 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 12 01:22:45.701857 kubelet[2538]: I0312 01:22:45.701859 2538 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 01:22:45.702155 kubelet[2538]: I0312 01:22:45.702132 2538 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 12 01:22:45.703314 kubelet[2538]: I0312 01:22:45.703276 2538 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 12 01:22:45.705231 kubelet[2538]: I0312 01:22:45.705204 2538 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 01:22:45.802412 kubelet[2538]: E0312 01:22:45.802354 2538 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 12 01:22:45.802412 kubelet[2538]: I0312 01:22:45.802391 2538 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 12 01:22:45.811426 kubelet[2538]: I0312 01:22:45.811357 2538 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 12 01:22:45.811865 kubelet[2538]: I0312 01:22:45.811796 2538 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 01:22:45.812194 kubelet[2538]: I0312 01:22:45.811873 2538 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 01:22:45.812306 kubelet[2538]: I0312 01:22:45.812198 2538 topology_manager.go:138] "Creating topology manager with none policy"
Mar 12 01:22:45.812306 kubelet[2538]: I0312 01:22:45.812213 2538 container_manager_linux.go:303] "Creating device plugin manager"
Mar 12 01:22:45.812306 kubelet[2538]: I0312 01:22:45.812279 2538 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 01:22:45.812648 kubelet[2538]: I0312 01:22:45.812599 2538 kubelet.go:480] "Attempting to sync node with API server"
Mar 12 01:22:45.812648 kubelet[2538]: I0312 01:22:45.812641 2538 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 01:22:45.812699 kubelet[2538]: I0312 01:22:45.812678 2538 kubelet.go:386] "Adding apiserver pod source"
Mar 12 01:22:45.812760 kubelet[2538]: I0312 01:22:45.812732 2538 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 01:22:45.813403 sudo[2554]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 12 01:22:45.814253 sudo[2554]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 12 01:22:45.815819 kubelet[2538]: I0312 01:22:45.815138 2538 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 12 01:22:45.815819 kubelet[2538]: I0312 01:22:45.815655 2538 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 01:22:45.822410 kubelet[2538]: I0312 01:22:45.822354 2538 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 12 01:22:45.822461 kubelet[2538]: I0312 01:22:45.822420 2538 server.go:1289] "Started kubelet"
Mar 12 01:22:45.823555 kubelet[2538]: I0312 01:22:45.823475 2538 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 01:22:45.823771 kubelet[2538]: I0312 01:22:45.823749 2538 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 01:22:45.825813 kubelet[2538]: I0312 01:22:45.825736 2538 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 01:22:45.827414 kubelet[2538]: I0312 01:22:45.827300 2538 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 12 01:22:45.829368 kubelet[2538]: I0312 01:22:45.829304 2538 server.go:317] "Adding debug handlers to kubelet server"
Mar 12 01:22:45.830486 kubelet[2538]: I0312 01:22:45.830192 2538 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 12 01:22:45.831569 kubelet[2538]: I0312 01:22:45.831459 2538 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 12 01:22:45.831793 kubelet[2538]: I0312 01:22:45.831613 2538 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 12 01:22:45.831793 kubelet[2538]: I0312 01:22:45.831716 2538 reconciler.go:26] "Reconciler: start to sync state"
Mar 12 01:22:45.833043 kubelet[2538]: I0312 01:22:45.832486 2538 factory.go:223] Registration of the systemd container factory successfully
Mar 12 01:22:45.833043 kubelet[2538]: I0312 01:22:45.832632 2538 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 12 01:22:45.833761 kubelet[2538]: E0312 01:22:45.833548 2538 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 12 01:22:45.836328 kubelet[2538]: I0312 01:22:45.836197 2538 factory.go:223] Registration of the containerd container factory successfully
Mar 12 01:22:45.875636 kubelet[2538]: I0312 01:22:45.874470 2538 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 12 01:22:45.876672 kubelet[2538]: I0312 01:22:45.876478 2538 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 12 01:22:45.876672 kubelet[2538]: I0312 01:22:45.876497 2538 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 12 01:22:45.876672 kubelet[2538]: I0312 01:22:45.876516 2538 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 01:22:45.876672 kubelet[2538]: I0312 01:22:45.876522 2538 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 12 01:22:45.876672 kubelet[2538]: E0312 01:22:45.876565 2538 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 12 01:22:45.909903 kubelet[2538]: I0312 01:22:45.909525 2538 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 12 01:22:45.909903 kubelet[2538]: I0312 01:22:45.909550 2538 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 12 01:22:45.909903 kubelet[2538]: I0312 01:22:45.909574 2538 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 01:22:45.910101 kubelet[2538]: I0312 01:22:45.909782 2538 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 12 01:22:45.910101 kubelet[2538]: I0312 01:22:45.910073 2538 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 12 01:22:45.910152 kubelet[2538]: I0312 01:22:45.910107 2538 policy_none.go:49] "None policy: Start"
Mar 12 01:22:45.910152 kubelet[2538]: I0312 01:22:45.910123 2538 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 12 01:22:45.910152 kubelet[2538]: I0312 01:22:45.910141 2538 state_mem.go:35] "Initializing new in-memory state store"
Mar 12 01:22:45.910910 kubelet[2538]: I0312 01:22:45.910315 2538 state_mem.go:75] "Updated machine memory state"
Mar 12 01:22:45.916565 kubelet[2538]: E0312 01:22:45.916371 2538 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 12 01:22:45.916652 kubelet[2538]: I0312 01:22:45.916573 2538 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 12 01:22:45.916652 kubelet[2538]: I0312 01:22:45.916588 2538 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 01:22:45.917247 kubelet[2538]: I0312 01:22:45.916816 2538 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 12 01:22:45.918238 kubelet[2538]: E0312 01:22:45.917664 2538 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 12 01:22:45.978108 kubelet[2538]: I0312 01:22:45.977910 2538 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 12 01:22:45.978557 kubelet[2538]: I0312 01:22:45.978250 2538 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 12 01:22:45.978557 kubelet[2538]: I0312 01:22:45.977912 2538 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:22:45.990521 kubelet[2538]: E0312 01:22:45.990469 2538 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:22:46.028059 kubelet[2538]: I0312 01:22:46.028029 2538 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 12 01:22:46.032774 kubelet[2538]: I0312 01:22:46.032609 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61686d6b3eb6402066dce161029dcca0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"61686d6b3eb6402066dce161029dcca0\") " pod="kube-system/kube-apiserver-localhost"
Mar 12 01:22:46.033172 kubelet[2538]: I0312 01:22:46.033043 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61686d6b3eb6402066dce161029dcca0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"61686d6b3eb6402066dce161029dcca0\") " pod="kube-system/kube-apiserver-localhost"
Mar 12 01:22:46.033271 kubelet[2538]: I0312 01:22:46.033244 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:22:46.033407 kubelet[2538]: I0312 01:22:46.033273 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:22:46.033449 kubelet[2538]: I0312 01:22:46.033383 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61686d6b3eb6402066dce161029dcca0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"61686d6b3eb6402066dce161029dcca0\") " pod="kube-system/kube-apiserver-localhost"
Mar 12 01:22:46.033601 kubelet[2538]: I0312 01:22:46.033587 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:22:46.033778 kubelet[2538]: I0312 01:22:46.033741 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:22:46.033881 kubelet[2538]: I0312 01:22:46.033789 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:22:46.033881 kubelet[2538]: I0312 01:22:46.033887 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 12 01:22:46.039281 kubelet[2538]: I0312 01:22:46.039264 2538 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 12 01:22:46.040279 kubelet[2538]: I0312 01:22:46.039486 2538 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 12 01:22:46.292603 kubelet[2538]: E0312 01:22:46.292049 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:46.292716 kubelet[2538]: E0312 01:22:46.292668 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:46.293021 kubelet[2538]: E0312 01:22:46.292860 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:46.509884 sudo[2554]: pam_unix(sudo:session): session closed for user root
Mar 12 01:22:46.828086 kubelet[2538]: I0312 01:22:46.827137 2538 apiserver.go:52] "Watching apiserver"
Mar 12 01:22:46.896480 kubelet[2538]: I0312 01:22:46.896404 2538 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 12 01:22:46.897138 kubelet[2538]: I0312 01:22:46.897083 2538 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 12 01:22:46.898437 kubelet[2538]: E0312 01:22:46.897705 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:46.907160 kubelet[2538]: E0312 01:22:46.907093 2538 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 12 01:22:46.907351 kubelet[2538]: E0312 01:22:46.907288 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:46.909677 kubelet[2538]: E0312 01:22:46.908869 2538 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 12 01:22:46.909677 kubelet[2538]: E0312 01:22:46.909099 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:22:46.931157 kubelet[2538]: I0312 01:22:46.931077 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.931063242 podStartE2EDuration="2.931063242s"
podCreationTimestamp="2026-03-12 01:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:22:46.929964819 +0000 UTC m=+1.297554112" watchObservedRunningTime="2026-03-12 01:22:46.931063242 +0000 UTC m=+1.298652535" Mar 12 01:22:46.933125 kubelet[2538]: I0312 01:22:46.933079 2538 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 01:22:46.944732 kubelet[2538]: I0312 01:22:46.944627 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9446131819999999 podStartE2EDuration="1.944613182s" podCreationTimestamp="2026-03-12 01:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:22:46.944447287 +0000 UTC m=+1.312036580" watchObservedRunningTime="2026-03-12 01:22:46.944613182 +0000 UTC m=+1.312202474" Mar 12 01:22:46.984260 kubelet[2538]: I0312 01:22:46.984212 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.984193834 podStartE2EDuration="1.984193834s" podCreationTimestamp="2026-03-12 01:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:22:46.956364368 +0000 UTC m=+1.323953662" watchObservedRunningTime="2026-03-12 01:22:46.984193834 +0000 UTC m=+1.351783127" Mar 12 01:22:47.898215 kubelet[2538]: E0312 01:22:47.898154 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:47.898688 kubelet[2538]: E0312 01:22:47.898362 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:47.898843 kubelet[2538]: E0312 01:22:47.898717 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:48.901229 kubelet[2538]: E0312 01:22:48.901142 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:48.938059 sudo[1648]: pam_unix(sudo:session): session closed for user root Mar 12 01:22:48.941586 sshd[1645]: pam_unix(sshd:session): session closed for user core Mar 12 01:22:48.948058 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:54872.service: Deactivated successfully. Mar 12 01:22:48.951179 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 01:22:48.951479 systemd[1]: session-7.scope: Consumed 7.914s CPU time, 165.3M memory peak, 0B memory swap peak. Mar 12 01:22:48.952379 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Mar 12 01:22:48.954190 systemd-logind[1454]: Removed session 7. Mar 12 01:22:50.022710 kubelet[2538]: I0312 01:22:50.022655 2538 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 01:22:50.023384 containerd[1469]: time="2026-03-12T01:22:50.023329206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 12 01:22:50.023693 kubelet[2538]: I0312 01:22:50.023598 2538 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 01:22:50.949512 kubelet[2538]: I0312 01:22:50.949481 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-etc-cni-netd\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.949870 kubelet[2538]: I0312 01:22:50.949704 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-xtables-lock\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.949870 kubelet[2538]: I0312 01:22:50.949785 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-config-path\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950184 kubelet[2538]: I0312 01:22:50.950045 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-host-proc-sys-kernel\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950248 kubelet[2538]: I0312 01:22:50.950066 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slpsz\" (UniqueName: \"kubernetes.io/projected/2eeaee54-0d99-416b-8671-87d26c63a573-kube-api-access-slpsz\") pod \"kube-proxy-66znn\" (UID: 
\"2eeaee54-0d99-416b-8671-87d26c63a573\") " pod="kube-system/kube-proxy-66znn" Mar 12 01:22:50.950321 kubelet[2538]: I0312 01:22:50.950308 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2eeaee54-0d99-416b-8671-87d26c63a573-kube-proxy\") pod \"kube-proxy-66znn\" (UID: \"2eeaee54-0d99-416b-8671-87d26c63a573\") " pod="kube-system/kube-proxy-66znn" Mar 12 01:22:50.950375 kubelet[2538]: I0312 01:22:50.950365 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-hostproc\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950650 kubelet[2538]: I0312 01:22:50.950418 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-cgroup\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950650 kubelet[2538]: I0312 01:22:50.950433 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41dc5ac5-c30f-43c1-8629-e6a2575f1107-clustermesh-secrets\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950650 kubelet[2538]: I0312 01:22:50.950446 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-host-proc-sys-net\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950650 kubelet[2538]: I0312 
01:22:50.950459 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cni-path\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950650 kubelet[2538]: I0312 01:22:50.950473 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41dc5ac5-c30f-43c1-8629-e6a2575f1107-hubble-tls\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950650 kubelet[2538]: I0312 01:22:50.950486 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6ljh\" (UniqueName: \"kubernetes.io/projected/41dc5ac5-c30f-43c1-8629-e6a2575f1107-kube-api-access-k6ljh\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950836 kubelet[2538]: I0312 01:22:50.950531 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-run\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950836 kubelet[2538]: I0312 01:22:50.950545 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eeaee54-0d99-416b-8671-87d26c63a573-xtables-lock\") pod \"kube-proxy-66znn\" (UID: \"2eeaee54-0d99-416b-8671-87d26c63a573\") " pod="kube-system/kube-proxy-66znn" Mar 12 01:22:50.950836 kubelet[2538]: I0312 01:22:50.950558 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/2eeaee54-0d99-416b-8671-87d26c63a573-lib-modules\") pod \"kube-proxy-66znn\" (UID: \"2eeaee54-0d99-416b-8671-87d26c63a573\") " pod="kube-system/kube-proxy-66znn" Mar 12 01:22:50.950836 kubelet[2538]: I0312 01:22:50.950577 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-bpf-maps\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.950836 kubelet[2538]: I0312 01:22:50.950588 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-lib-modules\") pod \"cilium-nb2tc\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " pod="kube-system/cilium-nb2tc" Mar 12 01:22:50.953216 systemd[1]: Created slice kubepods-besteffort-pod2eeaee54_0d99_416b_8671_87d26c63a573.slice - libcontainer container kubepods-besteffort-pod2eeaee54_0d99_416b_8671_87d26c63a573.slice. Mar 12 01:22:50.973236 systemd[1]: Created slice kubepods-burstable-pod41dc5ac5_c30f_43c1_8629_e6a2575f1107.slice - libcontainer container kubepods-burstable-pod41dc5ac5_c30f_43c1_8629_e6a2575f1107.slice. Mar 12 01:22:51.012392 systemd[1]: Created slice kubepods-besteffort-podd4db1f92_06d8_4bb3_8517_97d7485789b9.slice - libcontainer container kubepods-besteffort-podd4db1f92_06d8_4bb3_8517_97d7485789b9.slice. 
Mar 12 01:22:51.051483 kubelet[2538]: I0312 01:22:51.051073 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh58r\" (UniqueName: \"kubernetes.io/projected/d4db1f92-06d8-4bb3-8517-97d7485789b9-kube-api-access-gh58r\") pod \"cilium-operator-6c4d7847fc-554df\" (UID: \"d4db1f92-06d8-4bb3-8517-97d7485789b9\") " pod="kube-system/cilium-operator-6c4d7847fc-554df" Mar 12 01:22:51.051483 kubelet[2538]: I0312 01:22:51.051315 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4db1f92-06d8-4bb3-8517-97d7485789b9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-554df\" (UID: \"d4db1f92-06d8-4bb3-8517-97d7485789b9\") " pod="kube-system/cilium-operator-6c4d7847fc-554df" Mar 12 01:22:51.270626 kubelet[2538]: E0312 01:22:51.270348 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:51.271548 containerd[1469]: time="2026-03-12T01:22:51.271247368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66znn,Uid:2eeaee54-0d99-416b-8671-87d26c63a573,Namespace:kube-system,Attempt:0,}" Mar 12 01:22:51.277875 kubelet[2538]: E0312 01:22:51.277738 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:51.278684 containerd[1469]: time="2026-03-12T01:22:51.278614857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nb2tc,Uid:41dc5ac5-c30f-43c1-8629-e6a2575f1107,Namespace:kube-system,Attempt:0,}" Mar 12 01:22:51.315630 kubelet[2538]: E0312 01:22:51.315597 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:51.321159 containerd[1469]: time="2026-03-12T01:22:51.316614279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-554df,Uid:d4db1f92-06d8-4bb3-8517-97d7485789b9,Namespace:kube-system,Attempt:0,}" Mar 12 01:22:51.352756 containerd[1469]: time="2026-03-12T01:22:51.352456484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:22:51.352756 containerd[1469]: time="2026-03-12T01:22:51.352525732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:22:51.352756 containerd[1469]: time="2026-03-12T01:22:51.352543666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:22:51.353127 containerd[1469]: time="2026-03-12T01:22:51.352733530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:22:51.359880 containerd[1469]: time="2026-03-12T01:22:51.358262937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:22:51.359880 containerd[1469]: time="2026-03-12T01:22:51.359437866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:22:51.359880 containerd[1469]: time="2026-03-12T01:22:51.359464054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:22:51.359880 containerd[1469]: time="2026-03-12T01:22:51.359570893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:22:51.361609 containerd[1469]: time="2026-03-12T01:22:51.361447323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:22:51.361609 containerd[1469]: time="2026-03-12T01:22:51.361523495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:22:51.361609 containerd[1469]: time="2026-03-12T01:22:51.361536790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:22:51.363419 containerd[1469]: time="2026-03-12T01:22:51.363166409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:22:51.391131 systemd[1]: Started cri-containerd-a2ef5707a09dc0dfdddc6098c10d445cb2d6e895f6dcb5421e17f71a78166178.scope - libcontainer container a2ef5707a09dc0dfdddc6098c10d445cb2d6e895f6dcb5421e17f71a78166178. Mar 12 01:22:51.398344 systemd[1]: Started cri-containerd-5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea.scope - libcontainer container 5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea. Mar 12 01:22:51.401124 systemd[1]: Started cri-containerd-b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2.scope - libcontainer container b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2. 
Mar 12 01:22:51.446700 containerd[1469]: time="2026-03-12T01:22:51.446572218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nb2tc,Uid:41dc5ac5-c30f-43c1-8629-e6a2575f1107,Namespace:kube-system,Attempt:0,} returns sandbox id \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\"" Mar 12 01:22:51.448733 kubelet[2538]: E0312 01:22:51.448665 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:51.451984 containerd[1469]: time="2026-03-12T01:22:51.450323189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66znn,Uid:2eeaee54-0d99-416b-8671-87d26c63a573,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2ef5707a09dc0dfdddc6098c10d445cb2d6e895f6dcb5421e17f71a78166178\"" Mar 12 01:22:51.451984 containerd[1469]: time="2026-03-12T01:22:51.451124201Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 12 01:22:51.454478 kubelet[2538]: E0312 01:22:51.454406 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:51.465150 containerd[1469]: time="2026-03-12T01:22:51.464747287Z" level=info msg="CreateContainer within sandbox \"a2ef5707a09dc0dfdddc6098c10d445cb2d6e895f6dcb5421e17f71a78166178\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 01:22:51.493603 containerd[1469]: time="2026-03-12T01:22:51.493524077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-554df,Uid:d4db1f92-06d8-4bb3-8517-97d7485789b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2\"" Mar 12 01:22:51.494670 kubelet[2538]: E0312 01:22:51.494578 2538 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:51.501436 containerd[1469]: time="2026-03-12T01:22:51.501395095Z" level=info msg="CreateContainer within sandbox \"a2ef5707a09dc0dfdddc6098c10d445cb2d6e895f6dcb5421e17f71a78166178\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0f3523747871b56b1900c6b2e4f5a7d59b1d085c766c42c641be6d2c413e1cc1\"" Mar 12 01:22:51.502745 containerd[1469]: time="2026-03-12T01:22:51.502545622Z" level=info msg="StartContainer for \"0f3523747871b56b1900c6b2e4f5a7d59b1d085c766c42c641be6d2c413e1cc1\"" Mar 12 01:22:51.515275 kubelet[2538]: E0312 01:22:51.515195 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:51.543236 systemd[1]: Started cri-containerd-0f3523747871b56b1900c6b2e4f5a7d59b1d085c766c42c641be6d2c413e1cc1.scope - libcontainer container 0f3523747871b56b1900c6b2e4f5a7d59b1d085c766c42c641be6d2c413e1cc1. 
Mar 12 01:22:51.604992 containerd[1469]: time="2026-03-12T01:22:51.604861444Z" level=info msg="StartContainer for \"0f3523747871b56b1900c6b2e4f5a7d59b1d085c766c42c641be6d2c413e1cc1\" returns successfully" Mar 12 01:22:51.927567 kubelet[2538]: E0312 01:22:51.927531 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:51.927875 kubelet[2538]: E0312 01:22:51.927598 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:51.950535 kubelet[2538]: I0312 01:22:51.950434 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-66znn" podStartSLOduration=1.950415863 podStartE2EDuration="1.950415863s" podCreationTimestamp="2026-03-12 01:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:22:51.944773968 +0000 UTC m=+6.312363302" watchObservedRunningTime="2026-03-12 01:22:51.950415863 +0000 UTC m=+6.318005156" Mar 12 01:22:52.532834 kubelet[2538]: E0312 01:22:52.531716 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:52.927073 update_engine[1459]: I20260312 01:22:52.926956 1459 update_attempter.cc:509] Updating boot flags... 
Mar 12 01:22:52.929881 kubelet[2538]: E0312 01:22:52.929745 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:22:52.996050 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2924) Mar 12 01:22:53.102029 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2927) Mar 12 01:23:03.437716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371132052.mount: Deactivated successfully. Mar 12 01:23:05.485112 containerd[1469]: time="2026-03-12T01:23:05.485042600Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:05.486196 containerd[1469]: time="2026-03-12T01:23:05.486140410Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 12 01:23:05.487541 containerd[1469]: time="2026-03-12T01:23:05.487469017Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:05.489038 containerd[1469]: time="2026-03-12T01:23:05.488885293Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.037628324s" Mar 12 01:23:05.489038 containerd[1469]: time="2026-03-12T01:23:05.488964231Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 12 01:23:05.492132 containerd[1469]: time="2026-03-12T01:23:05.492003897Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 12 01:23:05.497608 containerd[1469]: time="2026-03-12T01:23:05.497550553Z" level=info msg="CreateContainer within sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 12 01:23:05.516911 containerd[1469]: time="2026-03-12T01:23:05.516782928Z" level=info msg="CreateContainer within sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\"" Mar 12 01:23:05.517491 containerd[1469]: time="2026-03-12T01:23:05.517458599Z" level=info msg="StartContainer for \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\"" Mar 12 01:23:05.556215 systemd[1]: Started cri-containerd-12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd.scope - libcontainer container 12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd. Mar 12 01:23:05.592298 containerd[1469]: time="2026-03-12T01:23:05.592262380Z" level=info msg="StartContainer for \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\" returns successfully" Mar 12 01:23:05.611028 systemd[1]: cri-containerd-12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd.scope: Deactivated successfully. 
Mar 12 01:23:05.688437 containerd[1469]: time="2026-03-12T01:23:05.688265258Z" level=info msg="shim disconnected" id=12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd namespace=k8s.io Mar 12 01:23:05.688437 containerd[1469]: time="2026-03-12T01:23:05.688312655Z" level=warning msg="cleaning up after shim disconnected" id=12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd namespace=k8s.io Mar 12 01:23:05.688437 containerd[1469]: time="2026-03-12T01:23:05.688323846Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:23:05.970659 kubelet[2538]: E0312 01:23:05.970215 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:05.979126 containerd[1469]: time="2026-03-12T01:23:05.978901522Z" level=info msg="CreateContainer within sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 12 01:23:06.002028 containerd[1469]: time="2026-03-12T01:23:06.001895799Z" level=info msg="CreateContainer within sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\"" Mar 12 01:23:06.002851 containerd[1469]: time="2026-03-12T01:23:06.002825598Z" level=info msg="StartContainer for \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\"" Mar 12 01:23:06.050259 systemd[1]: Started cri-containerd-75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b.scope - libcontainer container 75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b. 
Mar 12 01:23:06.093916 containerd[1469]: time="2026-03-12T01:23:06.093734946Z" level=info msg="StartContainer for \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\" returns successfully" Mar 12 01:23:06.107573 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 01:23:06.107889 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:23:06.108017 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:23:06.118373 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:23:06.118725 systemd[1]: cri-containerd-75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b.scope: Deactivated successfully. Mar 12 01:23:06.144155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:23:06.150540 containerd[1469]: time="2026-03-12T01:23:06.150407706Z" level=info msg="shim disconnected" id=75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b namespace=k8s.io Mar 12 01:23:06.150540 containerd[1469]: time="2026-03-12T01:23:06.150472637Z" level=warning msg="cleaning up after shim disconnected" id=75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b namespace=k8s.io Mar 12 01:23:06.150540 containerd[1469]: time="2026-03-12T01:23:06.150488566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:23:06.512630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd-rootfs.mount: Deactivated successfully. 
Mar 12 01:23:06.880790 containerd[1469]: time="2026-03-12T01:23:06.880590516Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:23:06.881855 containerd[1469]: time="2026-03-12T01:23:06.881786943Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 12 01:23:06.883646 containerd[1469]: time="2026-03-12T01:23:06.883558919Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:23:06.886155 containerd[1469]: time="2026-03-12T01:23:06.886072488Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.394001556s"
Mar 12 01:23:06.886155 containerd[1469]: time="2026-03-12T01:23:06.886137640Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 12 01:23:06.893431 containerd[1469]: time="2026-03-12T01:23:06.893244302Z" level=info msg="CreateContainer within sandbox \"b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 12 01:23:06.919978 containerd[1469]: time="2026-03-12T01:23:06.919852699Z" level=info msg="CreateContainer within sandbox \"b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\""
Mar 12 01:23:06.920632 containerd[1469]: time="2026-03-12T01:23:06.920515495Z" level=info msg="StartContainer for \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\""
Mar 12 01:23:06.962135 systemd[1]: Started cri-containerd-b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c.scope - libcontainer container b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c.
Mar 12 01:23:07.004025 kubelet[2538]: E0312 01:23:07.003711 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:07.013828 containerd[1469]: time="2026-03-12T01:23:07.013778512Z" level=info msg="StartContainer for \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\" returns successfully"
Mar 12 01:23:07.014393 containerd[1469]: time="2026-03-12T01:23:07.014099731Z" level=info msg="CreateContainer within sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 12 01:23:07.043681 containerd[1469]: time="2026-03-12T01:23:07.043569178Z" level=info msg="CreateContainer within sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\""
Mar 12 01:23:07.044432 containerd[1469]: time="2026-03-12T01:23:07.044379821Z" level=info msg="StartContainer for \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\""
Mar 12 01:23:07.110570 systemd[1]: Started cri-containerd-b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb.scope - libcontainer container b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb.
Mar 12 01:23:07.203587 systemd[1]: cri-containerd-b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb.scope: Deactivated successfully.
Mar 12 01:23:07.219542 containerd[1469]: time="2026-03-12T01:23:07.219421834Z" level=info msg="StartContainer for \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\" returns successfully"
Mar 12 01:23:07.279637 containerd[1469]: time="2026-03-12T01:23:07.279555336Z" level=info msg="shim disconnected" id=b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb namespace=k8s.io
Mar 12 01:23:07.279637 containerd[1469]: time="2026-03-12T01:23:07.279624184Z" level=warning msg="cleaning up after shim disconnected" id=b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb namespace=k8s.io
Mar 12 01:23:07.279637 containerd[1469]: time="2026-03-12T01:23:07.279634703Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:23:07.992080 kubelet[2538]: E0312 01:23:07.992016 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:07.997104 kubelet[2538]: E0312 01:23:07.997047 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:08.007894 containerd[1469]: time="2026-03-12T01:23:08.007827078Z" level=info msg="CreateContainer within sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 01:23:08.021198 kubelet[2538]: I0312 01:23:08.021087 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-554df" podStartSLOduration=2.629413424 podStartE2EDuration="18.021067089s" podCreationTimestamp="2026-03-12 01:22:50 +0000 UTC" firstStartedPulling="2026-03-12 01:22:51.495715161 +0000 UTC m=+5.863304454" lastFinishedPulling="2026-03-12 01:23:06.887368827 +0000 UTC m=+21.254958119" observedRunningTime="2026-03-12 01:23:08.019809639 +0000 UTC m=+22.387398933" watchObservedRunningTime="2026-03-12 01:23:08.021067089 +0000 UTC m=+22.388656432"
Mar 12 01:23:08.035165 containerd[1469]: time="2026-03-12T01:23:08.035088786Z" level=info msg="CreateContainer within sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\""
Mar 12 01:23:08.036239 containerd[1469]: time="2026-03-12T01:23:08.036164814Z" level=info msg="StartContainer for \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\""
Mar 12 01:23:08.110129 systemd[1]: Started cri-containerd-a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085.scope - libcontainer container a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085.
Mar 12 01:23:08.144002 systemd[1]: cri-containerd-a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085.scope: Deactivated successfully.
Mar 12 01:23:08.149624 containerd[1469]: time="2026-03-12T01:23:08.149422025Z" level=info msg="StartContainer for \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\" returns successfully"
Mar 12 01:23:08.190778 containerd[1469]: time="2026-03-12T01:23:08.190695433Z" level=info msg="shim disconnected" id=a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085 namespace=k8s.io
Mar 12 01:23:08.190778 containerd[1469]: time="2026-03-12T01:23:08.190775623Z" level=warning msg="cleaning up after shim disconnected" id=a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085 namespace=k8s.io
Mar 12 01:23:08.191033 containerd[1469]: time="2026-03-12T01:23:08.190785982Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:23:08.513805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085-rootfs.mount: Deactivated successfully.
Mar 12 01:23:09.003044 kubelet[2538]: E0312 01:23:09.002873 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:09.003734 kubelet[2538]: E0312 01:23:09.003664 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:09.016241 containerd[1469]: time="2026-03-12T01:23:09.016025563Z" level=info msg="CreateContainer within sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 01:23:09.044222 containerd[1469]: time="2026-03-12T01:23:09.044104560Z" level=info msg="CreateContainer within sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\""
Mar 12 01:23:09.044853 containerd[1469]: time="2026-03-12T01:23:09.044698486Z" level=info msg="StartContainer for \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\""
Mar 12 01:23:09.113211 systemd[1]: Started cri-containerd-489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec.scope - libcontainer container 489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec.
Mar 12 01:23:09.154412 containerd[1469]: time="2026-03-12T01:23:09.154262118Z" level=info msg="StartContainer for \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\" returns successfully"
Mar 12 01:23:09.378261 kubelet[2538]: I0312 01:23:09.378154 2538 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 12 01:23:09.467169 systemd[1]: Created slice kubepods-burstable-pod84febb58_9084_4f19_a1da_83ec95aa75a5.slice - libcontainer container kubepods-burstable-pod84febb58_9084_4f19_a1da_83ec95aa75a5.slice.
Mar 12 01:23:09.486726 kubelet[2538]: I0312 01:23:09.486310 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa059d8d-0ae1-45df-b550-4e593421a308-config-volume\") pod \"coredns-674b8bbfcf-2rhgk\" (UID: \"aa059d8d-0ae1-45df-b550-4e593421a308\") " pod="kube-system/coredns-674b8bbfcf-2rhgk"
Mar 12 01:23:09.486726 kubelet[2538]: I0312 01:23:09.486395 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7txxl\" (UniqueName: \"kubernetes.io/projected/84febb58-9084-4f19-a1da-83ec95aa75a5-kube-api-access-7txxl\") pod \"coredns-674b8bbfcf-pwlpb\" (UID: \"84febb58-9084-4f19-a1da-83ec95aa75a5\") " pod="kube-system/coredns-674b8bbfcf-pwlpb"
Mar 12 01:23:09.486726 kubelet[2538]: I0312 01:23:09.486485 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-625pr\" (UniqueName: \"kubernetes.io/projected/aa059d8d-0ae1-45df-b550-4e593421a308-kube-api-access-625pr\") pod \"coredns-674b8bbfcf-2rhgk\" (UID: \"aa059d8d-0ae1-45df-b550-4e593421a308\") " pod="kube-system/coredns-674b8bbfcf-2rhgk"
Mar 12 01:23:09.486726 kubelet[2538]: I0312 01:23:09.486586 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84febb58-9084-4f19-a1da-83ec95aa75a5-config-volume\") pod \"coredns-674b8bbfcf-pwlpb\" (UID: \"84febb58-9084-4f19-a1da-83ec95aa75a5\") " pod="kube-system/coredns-674b8bbfcf-pwlpb"
Mar 12 01:23:09.490555 systemd[1]: Created slice kubepods-burstable-podaa059d8d_0ae1_45df_b550_4e593421a308.slice - libcontainer container kubepods-burstable-podaa059d8d_0ae1_45df_b550_4e593421a308.slice.
Mar 12 01:23:09.513586 systemd[1]: run-containerd-runc-k8s.io-489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec-runc.dl4OGO.mount: Deactivated successfully.
Mar 12 01:23:09.786086 kubelet[2538]: E0312 01:23:09.785231 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:09.796376 kubelet[2538]: E0312 01:23:09.796280 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:09.798725 containerd[1469]: time="2026-03-12T01:23:09.798677424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pwlpb,Uid:84febb58-9084-4f19-a1da-83ec95aa75a5,Namespace:kube-system,Attempt:0,}"
Mar 12 01:23:09.800221 containerd[1469]: time="2026-03-12T01:23:09.800187107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2rhgk,Uid:aa059d8d-0ae1-45df-b550-4e593421a308,Namespace:kube-system,Attempt:0,}"
Mar 12 01:23:10.009106 kubelet[2538]: E0312 01:23:10.009016 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:10.031080 kubelet[2538]: I0312 01:23:10.029566 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nb2tc" podStartSLOduration=5.987621661 podStartE2EDuration="20.029549709s" podCreationTimestamp="2026-03-12 01:22:50 +0000 UTC" firstStartedPulling="2026-03-12 01:22:51.449860546 +0000 UTC m=+5.817449839" lastFinishedPulling="2026-03-12 01:23:05.491788594 +0000 UTC m=+19.859377887" observedRunningTime="2026-03-12 01:23:10.02954343 +0000 UTC m=+24.397132723" watchObservedRunningTime="2026-03-12 01:23:10.029549709 +0000 UTC m=+24.397139002"
Mar 12 01:23:11.016491 kubelet[2538]: E0312 01:23:11.016311 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:11.332190 systemd-networkd[1389]: cilium_host: Link UP
Mar 12 01:23:11.332511 systemd-networkd[1389]: cilium_net: Link UP
Mar 12 01:23:11.332882 systemd-networkd[1389]: cilium_net: Gained carrier
Mar 12 01:23:11.333242 systemd-networkd[1389]: cilium_host: Gained carrier
Mar 12 01:23:11.333469 systemd-networkd[1389]: cilium_net: Gained IPv6LL
Mar 12 01:23:11.333824 systemd-networkd[1389]: cilium_host: Gained IPv6LL
Mar 12 01:23:11.597419 systemd-networkd[1389]: cilium_vxlan: Link UP
Mar 12 01:23:11.597460 systemd-networkd[1389]: cilium_vxlan: Gained carrier
Mar 12 01:23:11.966360 kernel: NET: Registered PF_ALG protocol family
Mar 12 01:23:12.017864 kubelet[2538]: E0312 01:23:12.017701 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:12.962052 systemd-networkd[1389]: lxc_health: Link UP
Mar 12 01:23:12.974228 systemd-networkd[1389]: lxc_health: Gained carrier
Mar 12 01:23:13.280683 kubelet[2538]: E0312 01:23:13.279905 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:13.418093 kernel: eth0: renamed from tmp51e67
Mar 12 01:23:13.421873 systemd-networkd[1389]: lxcccd705dcd849: Link UP
Mar 12 01:23:13.422857 systemd-networkd[1389]: lxcccd705dcd849: Gained carrier
Mar 12 01:23:13.447106 systemd-networkd[1389]: lxcbd9328aa8ba6: Link UP
Mar 12 01:23:13.461240 kernel: eth0: renamed from tmpb561a
Mar 12 01:23:13.469559 systemd-networkd[1389]: lxcbd9328aa8ba6: Gained carrier
Mar 12 01:23:13.645195 systemd-networkd[1389]: cilium_vxlan: Gained IPv6LL
Mar 12 01:23:14.023392 kubelet[2538]: E0312 01:23:14.023230 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:14.607134 systemd-networkd[1389]: lxcccd705dcd849: Gained IPv6LL
Mar 12 01:23:14.928058 systemd-networkd[1389]: lxc_health: Gained IPv6LL
Mar 12 01:23:15.025600 kubelet[2538]: E0312 01:23:15.025539 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:15.440028 systemd-networkd[1389]: lxcbd9328aa8ba6: Gained IPv6LL
Mar 12 01:23:18.248570 containerd[1469]: time="2026-03-12T01:23:18.247573742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:23:18.249308 containerd[1469]: time="2026-03-12T01:23:18.248610808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:23:18.249308 containerd[1469]: time="2026-03-12T01:23:18.248859833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:23:18.249511 containerd[1469]: time="2026-03-12T01:23:18.249365456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:23:18.292115 containerd[1469]: time="2026-03-12T01:23:18.291847317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:23:18.292717 containerd[1469]: time="2026-03-12T01:23:18.292327203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:23:18.292995 containerd[1469]: time="2026-03-12T01:23:18.292566098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:23:18.293427 containerd[1469]: time="2026-03-12T01:23:18.293269242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:23:18.316113 systemd[1]: Started cri-containerd-b561a4d5c948fa589b3dcb8726cf8b822a4a00b8f3a851d6ecbba7d7c3d0a346.scope - libcontainer container b561a4d5c948fa589b3dcb8726cf8b822a4a00b8f3a851d6ecbba7d7c3d0a346.
Mar 12 01:23:18.327309 systemd[1]: Started cri-containerd-51e6769dfc25071af559a03a90f43727ffdc696f51ff211c9e69c615537b9407.scope - libcontainer container 51e6769dfc25071af559a03a90f43727ffdc696f51ff211c9e69c615537b9407.
Mar 12 01:23:18.338766 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 12 01:23:18.349358 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 12 01:23:18.412288 containerd[1469]: time="2026-03-12T01:23:18.412132531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2rhgk,Uid:aa059d8d-0ae1-45df-b550-4e593421a308,Namespace:kube-system,Attempt:0,} returns sandbox id \"b561a4d5c948fa589b3dcb8726cf8b822a4a00b8f3a851d6ecbba7d7c3d0a346\""
Mar 12 01:23:18.415147 kubelet[2538]: E0312 01:23:18.414998 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:18.424799 containerd[1469]: time="2026-03-12T01:23:18.424689233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pwlpb,Uid:84febb58-9084-4f19-a1da-83ec95aa75a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"51e6769dfc25071af559a03a90f43727ffdc696f51ff211c9e69c615537b9407\""
Mar 12 01:23:18.425494 kubelet[2538]: E0312 01:23:18.425429 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:18.427661 containerd[1469]: time="2026-03-12T01:23:18.427066260Z" level=info msg="CreateContainer within sandbox \"b561a4d5c948fa589b3dcb8726cf8b822a4a00b8f3a851d6ecbba7d7c3d0a346\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 01:23:18.434837 containerd[1469]: time="2026-03-12T01:23:18.433367972Z" level=info msg="CreateContainer within sandbox \"51e6769dfc25071af559a03a90f43727ffdc696f51ff211c9e69c615537b9407\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 01:23:18.457678 containerd[1469]: time="2026-03-12T01:23:18.457553533Z" level=info msg="CreateContainer within sandbox \"b561a4d5c948fa589b3dcb8726cf8b822a4a00b8f3a851d6ecbba7d7c3d0a346\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41a0359b83981408c02ee799ea41ba2e96cb57bc3972f121b8d18440529c2e88\""
Mar 12 01:23:18.458490 containerd[1469]: time="2026-03-12T01:23:18.458393489Z" level=info msg="StartContainer for \"41a0359b83981408c02ee799ea41ba2e96cb57bc3972f121b8d18440529c2e88\""
Mar 12 01:23:18.469700 containerd[1469]: time="2026-03-12T01:23:18.469162897Z" level=info msg="CreateContainer within sandbox \"51e6769dfc25071af559a03a90f43727ffdc696f51ff211c9e69c615537b9407\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a07e6d95274c949872896a54bb424e349b77eb1664903300d122a348cd77041\""
Mar 12 01:23:18.486624 containerd[1469]: time="2026-03-12T01:23:18.474371749Z" level=info msg="StartContainer for \"8a07e6d95274c949872896a54bb424e349b77eb1664903300d122a348cd77041\""
Mar 12 01:23:18.529271 systemd[1]: Started cri-containerd-41a0359b83981408c02ee799ea41ba2e96cb57bc3972f121b8d18440529c2e88.scope - libcontainer container 41a0359b83981408c02ee799ea41ba2e96cb57bc3972f121b8d18440529c2e88.
Mar 12 01:23:18.545294 systemd[1]: Started cri-containerd-8a07e6d95274c949872896a54bb424e349b77eb1664903300d122a348cd77041.scope - libcontainer container 8a07e6d95274c949872896a54bb424e349b77eb1664903300d122a348cd77041.
Mar 12 01:23:18.608358 containerd[1469]: time="2026-03-12T01:23:18.608179340Z" level=info msg="StartContainer for \"41a0359b83981408c02ee799ea41ba2e96cb57bc3972f121b8d18440529c2e88\" returns successfully"
Mar 12 01:23:18.615122 containerd[1469]: time="2026-03-12T01:23:18.615027250Z" level=info msg="StartContainer for \"8a07e6d95274c949872896a54bb424e349b77eb1664903300d122a348cd77041\" returns successfully"
Mar 12 01:23:19.039995 kubelet[2538]: E0312 01:23:19.039865 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:19.046600 kubelet[2538]: E0312 01:23:19.045838 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:19.055760 kubelet[2538]: I0312 01:23:19.055568 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2rhgk" podStartSLOduration=29.055552517 podStartE2EDuration="29.055552517s" podCreationTimestamp="2026-03-12 01:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:23:19.054093934 +0000 UTC m=+33.421683237" watchObservedRunningTime="2026-03-12 01:23:19.055552517 +0000 UTC m=+33.423141830"
Mar 12 01:23:19.258486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539735251.mount: Deactivated successfully.
Mar 12 01:23:20.048235 kubelet[2538]: E0312 01:23:20.048145 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:20.049293 kubelet[2538]: E0312 01:23:20.049160 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:21.050676 kubelet[2538]: E0312 01:23:21.050519 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:21.051575 kubelet[2538]: E0312 01:23:21.050888 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:38.169231 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:55174.service - OpenSSH per-connection server daemon (10.0.0.1:55174).
Mar 12 01:23:38.226479 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 55174 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:23:38.228848 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:23:38.235092 systemd-logind[1454]: New session 8 of user core.
Mar 12 01:23:38.246264 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 12 01:23:38.773547 sshd[3954]: pam_unix(sshd:session): session closed for user core
Mar 12 01:23:38.778407 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:55174.service: Deactivated successfully.
Mar 12 01:23:38.781219 systemd[1]: session-8.scope: Deactivated successfully.
Mar 12 01:23:38.783308 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit.
Mar 12 01:23:38.785203 systemd-logind[1454]: Removed session 8.
Mar 12 01:23:43.796532 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:55690.service - OpenSSH per-connection server daemon (10.0.0.1:55690).
Mar 12 01:23:43.843549 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 55690 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:23:43.846789 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:23:43.858286 systemd-logind[1454]: New session 9 of user core.
Mar 12 01:23:43.868426 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 12 01:23:44.014644 sshd[3976]: pam_unix(sshd:session): session closed for user core
Mar 12 01:23:44.019865 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:55690.service: Deactivated successfully.
Mar 12 01:23:44.022320 systemd[1]: session-9.scope: Deactivated successfully.
Mar 12 01:23:44.023514 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit.
Mar 12 01:23:44.025176 systemd-logind[1454]: Removed session 9.
Mar 12 01:23:49.032783 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:55692.service - OpenSSH per-connection server daemon (10.0.0.1:55692).
Mar 12 01:23:49.085996 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 55692 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:23:49.087834 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:23:49.093441 systemd-logind[1454]: New session 10 of user core.
Mar 12 01:23:49.100095 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 01:23:49.232568 sshd[3994]: pam_unix(sshd:session): session closed for user core
Mar 12 01:23:49.237182 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:55692.service: Deactivated successfully.
Mar 12 01:23:49.239824 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 01:23:49.241016 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit.
Mar 12 01:23:49.242611 systemd-logind[1454]: Removed session 10.
Mar 12 01:23:54.251377 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:34826.service - OpenSSH per-connection server daemon (10.0.0.1:34826).
Mar 12 01:23:54.308546 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 34826 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:23:54.310432 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:23:54.316278 systemd-logind[1454]: New session 11 of user core.
Mar 12 01:23:54.323102 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 01:23:54.493818 sshd[4011]: pam_unix(sshd:session): session closed for user core
Mar 12 01:23:54.498676 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:34826.service: Deactivated successfully.
Mar 12 01:23:54.501383 systemd[1]: session-11.scope: Deactivated successfully.
Mar 12 01:23:54.502434 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit.
Mar 12 01:23:54.504106 systemd-logind[1454]: Removed session 11.
Mar 12 01:23:54.877479 kubelet[2538]: E0312 01:23:54.877361 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:55.879131 kubelet[2538]: E0312 01:23:55.878863 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:23:59.513093 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:34834.service - OpenSSH per-connection server daemon (10.0.0.1:34834).
Mar 12 01:23:59.551784 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 34834 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:23:59.553728 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:23:59.558870 systemd-logind[1454]: New session 12 of user core.
Mar 12 01:23:59.573167 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 01:23:59.712368 sshd[4026]: pam_unix(sshd:session): session closed for user core
Mar 12 01:23:59.716341 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:34834.service: Deactivated successfully.
Mar 12 01:23:59.718238 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 01:23:59.719014 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit.
Mar 12 01:23:59.720146 systemd-logind[1454]: Removed session 12.
Mar 12 01:24:04.733139 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:42088.service - OpenSSH per-connection server daemon (10.0.0.1:42088).
Mar 12 01:24:04.776856 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 42088 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:24:04.778540 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:24:04.784565 systemd-logind[1454]: New session 13 of user core.
Mar 12 01:24:04.792318 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 01:24:04.925116 sshd[4041]: pam_unix(sshd:session): session closed for user core
Mar 12 01:24:04.938638 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:42088.service: Deactivated successfully.
Mar 12 01:24:04.941603 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 01:24:04.943867 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit.
Mar 12 01:24:04.953463 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:42094.service - OpenSSH per-connection server daemon (10.0.0.1:42094).
Mar 12 01:24:04.955296 systemd-logind[1454]: Removed session 13.
Mar 12 01:24:04.990055 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 42094 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:24:04.991915 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:24:04.997977 systemd-logind[1454]: New session 14 of user core.
Mar 12 01:24:05.010215 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 01:24:05.203852 sshd[4056]: pam_unix(sshd:session): session closed for user core
Mar 12 01:24:05.217012 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:42094.service: Deactivated successfully.
Mar 12 01:24:05.220274 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 01:24:05.225061 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit.
Mar 12 01:24:05.239500 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:42106.service - OpenSSH per-connection server daemon (10.0.0.1:42106).
Mar 12 01:24:05.241803 systemd-logind[1454]: Removed session 14.
Mar 12 01:24:05.281180 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 42106 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:24:05.283523 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:24:05.290825 systemd-logind[1454]: New session 15 of user core.
Mar 12 01:24:05.309334 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 01:24:05.443187 sshd[4069]: pam_unix(sshd:session): session closed for user core
Mar 12 01:24:05.448262 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:42106.service: Deactivated successfully.
Mar 12 01:24:05.450891 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 01:24:05.451976 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit.
Mar 12 01:24:05.453848 systemd-logind[1454]: Removed session 15.
Mar 12 01:24:10.458711 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:42112.service - OpenSSH per-connection server daemon (10.0.0.1:42112).
Mar 12 01:24:10.504014 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 42112 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:24:10.506020 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:24:10.512727 systemd-logind[1454]: New session 16 of user core.
Mar 12 01:24:10.528189 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 01:24:10.654326 sshd[4084]: pam_unix(sshd:session): session closed for user core
Mar 12 01:24:10.659475 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:42112.service: Deactivated successfully.
Mar 12 01:24:10.680352 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 01:24:10.681582 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit.
Mar 12 01:24:10.683725 systemd-logind[1454]: Removed session 16.
Mar 12 01:24:10.884206 kubelet[2538]: E0312 01:24:10.884150 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:10.884837 kubelet[2538]: E0312 01:24:10.884400 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:15.701450 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:33508.service - OpenSSH per-connection server daemon (10.0.0.1:33508).
Mar 12 01:24:15.741396 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 33508 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:24:15.743466 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:24:15.750555 systemd-logind[1454]: New session 17 of user core.
Mar 12 01:24:15.761208 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 01:24:15.928126 sshd[4098]: pam_unix(sshd:session): session closed for user core
Mar 12 01:24:15.934328 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:33508.service: Deactivated successfully.
Mar 12 01:24:15.940504 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 01:24:15.943699 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit.
Mar 12 01:24:15.945539 systemd-logind[1454]: Removed session 17.
Mar 12 01:24:20.878276 kubelet[2538]: E0312 01:24:20.878085 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:20.939064 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:33514.service - OpenSSH per-connection server daemon (10.0.0.1:33514).
Mar 12 01:24:20.997184 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 33514 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:24:20.999388 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:24:21.006414 systemd-logind[1454]: New session 18 of user core.
Mar 12 01:24:21.016456 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 01:24:21.141258 sshd[4112]: pam_unix(sshd:session): session closed for user core
Mar 12 01:24:21.147036 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:33514.service: Deactivated successfully.
Mar 12 01:24:21.149252 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 01:24:21.150564 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit.
Mar 12 01:24:21.152837 systemd-logind[1454]: Removed session 18.
Mar 12 01:24:23.877897 kubelet[2538]: E0312 01:24:23.877749 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:26.154027 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:45826.service - OpenSSH per-connection server daemon (10.0.0.1:45826). Mar 12 01:24:26.197244 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 45826 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:24:26.199180 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:24:26.204464 systemd-logind[1454]: New session 19 of user core. Mar 12 01:24:26.212166 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 12 01:24:26.326624 sshd[4130]: pam_unix(sshd:session): session closed for user core Mar 12 01:24:26.340128 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:45826.service: Deactivated successfully. Mar 12 01:24:26.342808 systemd[1]: session-19.scope: Deactivated successfully. Mar 12 01:24:26.345005 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. Mar 12 01:24:26.350429 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:45832.service - OpenSSH per-connection server daemon (10.0.0.1:45832). Mar 12 01:24:26.352720 systemd-logind[1454]: Removed session 19. Mar 12 01:24:26.385436 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 45832 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:24:26.387217 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:24:26.393478 systemd-logind[1454]: New session 20 of user core. Mar 12 01:24:26.401210 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 12 01:24:26.702392 sshd[4145]: pam_unix(sshd:session): session closed for user core Mar 12 01:24:26.713082 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:45832.service: Deactivated successfully. 
Mar 12 01:24:26.715462 systemd[1]: session-20.scope: Deactivated successfully. Mar 12 01:24:26.717553 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit. Mar 12 01:24:26.731263 systemd[1]: Started sshd@20-10.0.0.45:22-10.0.0.1:45846.service - OpenSSH per-connection server daemon (10.0.0.1:45846). Mar 12 01:24:26.732621 systemd-logind[1454]: Removed session 20. Mar 12 01:24:26.770458 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 45846 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:24:26.772877 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:24:26.778345 systemd-logind[1454]: New session 21 of user core. Mar 12 01:24:26.787104 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 12 01:24:27.415414 sshd[4158]: pam_unix(sshd:session): session closed for user core Mar 12 01:24:27.432860 systemd[1]: sshd@20-10.0.0.45:22-10.0.0.1:45846.service: Deactivated successfully. Mar 12 01:24:27.436366 systemd[1]: session-21.scope: Deactivated successfully. Mar 12 01:24:27.440023 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit. Mar 12 01:24:27.446404 systemd[1]: Started sshd@21-10.0.0.45:22-10.0.0.1:45858.service - OpenSSH per-connection server daemon (10.0.0.1:45858). Mar 12 01:24:27.447649 systemd-logind[1454]: Removed session 21. Mar 12 01:24:27.500430 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 45858 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:24:27.502999 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:24:27.510190 systemd-logind[1454]: New session 22 of user core. Mar 12 01:24:27.517182 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 12 01:24:27.790227 sshd[4179]: pam_unix(sshd:session): session closed for user core Mar 12 01:24:27.800232 systemd[1]: sshd@21-10.0.0.45:22-10.0.0.1:45858.service: Deactivated successfully. Mar 12 01:24:27.802088 systemd[1]: session-22.scope: Deactivated successfully. Mar 12 01:24:27.806662 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit. Mar 12 01:24:27.813551 systemd[1]: Started sshd@22-10.0.0.45:22-10.0.0.1:45866.service - OpenSSH per-connection server daemon (10.0.0.1:45866). Mar 12 01:24:27.816845 systemd-logind[1454]: Removed session 22. Mar 12 01:24:27.852893 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 45866 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:24:27.855295 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:24:27.860983 systemd-logind[1454]: New session 23 of user core. Mar 12 01:24:27.870215 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 12 01:24:28.001305 sshd[4192]: pam_unix(sshd:session): session closed for user core Mar 12 01:24:28.004873 systemd[1]: sshd@22-10.0.0.45:22-10.0.0.1:45866.service: Deactivated successfully. Mar 12 01:24:28.007748 systemd[1]: session-23.scope: Deactivated successfully. Mar 12 01:24:28.009887 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit. Mar 12 01:24:28.013179 systemd-logind[1454]: Removed session 23. Mar 12 01:24:32.887329 kubelet[2538]: E0312 01:24:32.887222 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:33.015520 systemd[1]: Started sshd@23-10.0.0.45:22-10.0.0.1:49712.service - OpenSSH per-connection server daemon (10.0.0.1:49712). 
Mar 12 01:24:33.061719 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 49712 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:24:33.065616 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:24:33.075914 systemd-logind[1454]: New session 24 of user core. Mar 12 01:24:33.084195 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 12 01:24:33.214397 sshd[4206]: pam_unix(sshd:session): session closed for user core Mar 12 01:24:33.219836 systemd[1]: sshd@23-10.0.0.45:22-10.0.0.1:49712.service: Deactivated successfully. Mar 12 01:24:33.222175 systemd[1]: session-24.scope: Deactivated successfully. Mar 12 01:24:33.223200 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit. Mar 12 01:24:33.224821 systemd-logind[1454]: Removed session 24. Mar 12 01:24:38.304711 systemd[1]: Started sshd@24-10.0.0.45:22-10.0.0.1:49724.service - OpenSSH per-connection server daemon (10.0.0.1:49724). Mar 12 01:24:38.410447 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 49724 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:24:38.415903 sshd[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:24:38.433642 systemd-logind[1454]: New session 25 of user core. Mar 12 01:24:38.445693 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 12 01:24:38.730328 sshd[4223]: pam_unix(sshd:session): session closed for user core Mar 12 01:24:38.741464 systemd[1]: sshd@24-10.0.0.45:22-10.0.0.1:49724.service: Deactivated successfully. Mar 12 01:24:38.746621 systemd[1]: session-25.scope: Deactivated successfully. Mar 12 01:24:38.748213 systemd-logind[1454]: Session 25 logged out. Waiting for processes to exit. Mar 12 01:24:38.749727 systemd-logind[1454]: Removed session 25. 
Mar 12 01:24:43.767535 systemd[1]: Started sshd@25-10.0.0.45:22-10.0.0.1:54224.service - OpenSSH per-connection server daemon (10.0.0.1:54224). Mar 12 01:24:43.883872 kubelet[2538]: E0312 01:24:43.883089 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:43.899508 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 54224 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:24:43.902568 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:24:43.922590 systemd-logind[1454]: New session 26 of user core. Mar 12 01:24:43.930602 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 12 01:24:44.210468 sshd[4237]: pam_unix(sshd:session): session closed for user core Mar 12 01:24:44.228194 systemd[1]: sshd@25-10.0.0.45:22-10.0.0.1:54224.service: Deactivated successfully. Mar 12 01:24:44.234573 systemd[1]: session-26.scope: Deactivated successfully. Mar 12 01:24:44.250236 systemd-logind[1454]: Session 26 logged out. Waiting for processes to exit. Mar 12 01:24:44.254037 systemd-logind[1454]: Removed session 26. Mar 12 01:24:49.226264 systemd[1]: Started sshd@26-10.0.0.45:22-10.0.0.1:54226.service - OpenSSH per-connection server daemon (10.0.0.1:54226). Mar 12 01:24:49.274132 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 54226 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:24:49.276365 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:24:49.283033 systemd-logind[1454]: New session 27 of user core. Mar 12 01:24:49.292234 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 12 01:24:49.460134 sshd[4254]: pam_unix(sshd:session): session closed for user core Mar 12 01:24:49.473348 systemd[1]: sshd@26-10.0.0.45:22-10.0.0.1:54226.service: Deactivated successfully. 
Mar 12 01:24:49.477481 systemd[1]: session-27.scope: Deactivated successfully. Mar 12 01:24:49.485547 systemd-logind[1454]: Session 27 logged out. Waiting for processes to exit. Mar 12 01:24:49.502011 systemd[1]: Started sshd@27-10.0.0.45:22-10.0.0.1:54236.service - OpenSSH per-connection server daemon (10.0.0.1:54236). Mar 12 01:24:49.509357 systemd-logind[1454]: Removed session 27. Mar 12 01:24:49.547717 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 54236 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:24:49.550882 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:24:49.560707 systemd-logind[1454]: New session 28 of user core. Mar 12 01:24:49.573306 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 12 01:24:51.128682 kubelet[2538]: I0312 01:24:51.123708 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pwlpb" podStartSLOduration=121.123688752 podStartE2EDuration="2m1.123688752s" podCreationTimestamp="2026-03-12 01:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:23:19.100216544 +0000 UTC m=+33.467805836" watchObservedRunningTime="2026-03-12 01:24:51.123688752 +0000 UTC m=+125.491278046" Mar 12 01:24:51.220296 containerd[1469]: time="2026-03-12T01:24:51.220085190Z" level=info msg="StopContainer for \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\" with timeout 30 (s)" Mar 12 01:24:51.221637 containerd[1469]: time="2026-03-12T01:24:51.221254206Z" level=info msg="Stop container \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\" with signal terminated" Mar 12 01:24:51.300211 systemd[1]: cri-containerd-b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c.scope: Deactivated successfully. 
Mar 12 01:24:51.362224 containerd[1469]: time="2026-03-12T01:24:51.361882430Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 01:24:51.402445 containerd[1469]: time="2026-03-12T01:24:51.401224360Z" level=info msg="StopContainer for \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\" with timeout 2 (s)" Mar 12 01:24:51.410103 containerd[1469]: time="2026-03-12T01:24:51.406187704Z" level=info msg="Stop container \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\" with signal terminated" Mar 12 01:24:51.414092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c-rootfs.mount: Deactivated successfully. Mar 12 01:24:51.438591 systemd-networkd[1389]: lxc_health: Link DOWN Mar 12 01:24:51.438606 systemd-networkd[1389]: lxc_health: Lost carrier Mar 12 01:24:51.457195 containerd[1469]: time="2026-03-12T01:24:51.454219613Z" level=info msg="shim disconnected" id=b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c namespace=k8s.io Mar 12 01:24:51.457195 containerd[1469]: time="2026-03-12T01:24:51.454313257Z" level=warning msg="cleaning up after shim disconnected" id=b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c namespace=k8s.io Mar 12 01:24:51.457195 containerd[1469]: time="2026-03-12T01:24:51.454337002Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:24:51.523706 systemd[1]: cri-containerd-489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec.scope: Deactivated successfully. Mar 12 01:24:51.525475 systemd[1]: cri-containerd-489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec.scope: Consumed 11.047s CPU time. 
Mar 12 01:24:51.534022 containerd[1469]: time="2026-03-12T01:24:51.533718419Z" level=info msg="StopContainer for \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\" returns successfully" Mar 12 01:24:51.539405 containerd[1469]: time="2026-03-12T01:24:51.539176011Z" level=info msg="StopPodSandbox for \"b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2\"" Mar 12 01:24:51.539405 containerd[1469]: time="2026-03-12T01:24:51.539248136Z" level=info msg="Container to stop \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:24:51.545493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2-shm.mount: Deactivated successfully. Mar 12 01:24:51.554656 systemd[1]: cri-containerd-b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2.scope: Deactivated successfully. Mar 12 01:24:51.594243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec-rootfs.mount: Deactivated successfully. Mar 12 01:24:51.611498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2-rootfs.mount: Deactivated successfully. 
Mar 12 01:24:51.613210 containerd[1469]: time="2026-03-12T01:24:51.612423042Z" level=info msg="shim disconnected" id=489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec namespace=k8s.io Mar 12 01:24:51.613210 containerd[1469]: time="2026-03-12T01:24:51.612530683Z" level=warning msg="cleaning up after shim disconnected" id=489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec namespace=k8s.io Mar 12 01:24:51.613210 containerd[1469]: time="2026-03-12T01:24:51.612548927Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:24:51.618196 containerd[1469]: time="2026-03-12T01:24:51.617830102Z" level=info msg="shim disconnected" id=b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2 namespace=k8s.io Mar 12 01:24:51.618196 containerd[1469]: time="2026-03-12T01:24:51.617893761Z" level=warning msg="cleaning up after shim disconnected" id=b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2 namespace=k8s.io Mar 12 01:24:51.618196 containerd[1469]: time="2026-03-12T01:24:51.617911534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:24:51.657025 containerd[1469]: time="2026-03-12T01:24:51.655414898Z" level=info msg="StopContainer for \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\" returns successfully" Mar 12 01:24:51.657025 containerd[1469]: time="2026-03-12T01:24:51.656178925Z" level=info msg="StopPodSandbox for \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\"" Mar 12 01:24:51.657025 containerd[1469]: time="2026-03-12T01:24:51.656219612Z" level=info msg="Container to stop \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:24:51.657025 containerd[1469]: time="2026-03-12T01:24:51.656238367Z" level=info msg="Container to stop \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Mar 12 01:24:51.657025 containerd[1469]: time="2026-03-12T01:24:51.656256500Z" level=info msg="Container to stop \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:24:51.657025 containerd[1469]: time="2026-03-12T01:24:51.656271408Z" level=info msg="Container to stop \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:24:51.657025 containerd[1469]: time="2026-03-12T01:24:51.656287327Z" level=info msg="Container to stop \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:24:51.676547 systemd[1]: cri-containerd-5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea.scope: Deactivated successfully. Mar 12 01:24:51.681870 containerd[1469]: time="2026-03-12T01:24:51.680450883Z" level=info msg="TearDown network for sandbox \"b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2\" successfully" Mar 12 01:24:51.681870 containerd[1469]: time="2026-03-12T01:24:51.680522015Z" level=info msg="StopPodSandbox for \"b1c7542655d72c900c7b165a2602f8fdec08d2e2815ebefecdfdfe89c9ccb6e2\" returns successfully" Mar 12 01:24:51.749237 kubelet[2538]: I0312 01:24:51.749047 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh58r\" (UniqueName: \"kubernetes.io/projected/d4db1f92-06d8-4bb3-8517-97d7485789b9-kube-api-access-gh58r\") pod \"d4db1f92-06d8-4bb3-8517-97d7485789b9\" (UID: \"d4db1f92-06d8-4bb3-8517-97d7485789b9\") " Mar 12 01:24:51.749237 kubelet[2538]: I0312 01:24:51.749126 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4db1f92-06d8-4bb3-8517-97d7485789b9-cilium-config-path\") pod 
\"d4db1f92-06d8-4bb3-8517-97d7485789b9\" (UID: \"d4db1f92-06d8-4bb3-8517-97d7485789b9\") " Mar 12 01:24:51.757976 containerd[1469]: time="2026-03-12T01:24:51.755378852Z" level=info msg="shim disconnected" id=5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea namespace=k8s.io Mar 12 01:24:51.757976 containerd[1469]: time="2026-03-12T01:24:51.755447440Z" level=warning msg="cleaning up after shim disconnected" id=5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea namespace=k8s.io Mar 12 01:24:51.757976 containerd[1469]: time="2026-03-12T01:24:51.755461266Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:24:51.758599 kubelet[2538]: I0312 01:24:51.758548 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4db1f92-06d8-4bb3-8517-97d7485789b9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d4db1f92-06d8-4bb3-8517-97d7485789b9" (UID: "d4db1f92-06d8-4bb3-8517-97d7485789b9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:24:51.767487 kubelet[2538]: I0312 01:24:51.767412 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4db1f92-06d8-4bb3-8517-97d7485789b9-kube-api-access-gh58r" (OuterVolumeSpecName: "kube-api-access-gh58r") pod "d4db1f92-06d8-4bb3-8517-97d7485789b9" (UID: "d4db1f92-06d8-4bb3-8517-97d7485789b9"). InnerVolumeSpecName "kube-api-access-gh58r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 01:24:51.789839 containerd[1469]: time="2026-03-12T01:24:51.789641521Z" level=warning msg="cleanup warnings time=\"2026-03-12T01:24:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 12 01:24:51.792553 containerd[1469]: time="2026-03-12T01:24:51.792369565Z" level=info msg="TearDown network for sandbox \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" successfully" Mar 12 01:24:51.792680 containerd[1469]: time="2026-03-12T01:24:51.792594326Z" level=info msg="StopPodSandbox for \"5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea\" returns successfully" Mar 12 01:24:51.852999 kubelet[2538]: I0312 01:24:51.849901 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-etc-cni-netd\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.852999 kubelet[2538]: I0312 01:24:51.850045 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-cgroup\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.852999 kubelet[2538]: I0312 01:24:51.850085 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6ljh\" (UniqueName: \"kubernetes.io/projected/41dc5ac5-c30f-43c1-8629-e6a2575f1107-kube-api-access-k6ljh\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.852999 kubelet[2538]: I0312 01:24:51.850119 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-bpf-maps\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.852999 kubelet[2538]: I0312 01:24:51.850140 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-hostproc\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.852999 kubelet[2538]: I0312 01:24:51.850163 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-xtables-lock\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.853340 kubelet[2538]: I0312 01:24:51.850189 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41dc5ac5-c30f-43c1-8629-e6a2575f1107-clustermesh-secrets\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.853340 kubelet[2538]: I0312 01:24:51.850213 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41dc5ac5-c30f-43c1-8629-e6a2575f1107-hubble-tls\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.853340 kubelet[2538]: I0312 01:24:51.850244 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-config-path\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.853340 kubelet[2538]: I0312 01:24:51.850268 2538 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-host-proc-sys-net\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.853340 kubelet[2538]: I0312 01:24:51.850293 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-lib-modules\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.853340 kubelet[2538]: I0312 01:24:51.850322 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cni-path\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.853554 kubelet[2538]: I0312 01:24:51.850347 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-host-proc-sys-kernel\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.853554 kubelet[2538]: I0312 01:24:51.850370 2538 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-run\") pod \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\" (UID: \"41dc5ac5-c30f-43c1-8629-e6a2575f1107\") " Mar 12 01:24:51.853554 kubelet[2538]: I0312 01:24:51.850417 2538 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4db1f92-06d8-4bb3-8517-97d7485789b9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 12 01:24:51.853554 kubelet[2538]: 
I0312 01:24:51.850434 2538 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gh58r\" (UniqueName: \"kubernetes.io/projected/d4db1f92-06d8-4bb3-8517-97d7485789b9-kube-api-access-gh58r\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.853554 kubelet[2538]: I0312 01:24:51.850495 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:24:51.853554 kubelet[2538]: I0312 01:24:51.850541 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:24:51.853759 kubelet[2538]: I0312 01:24:51.850566 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:24:51.853759 kubelet[2538]: I0312 01:24:51.851573 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-hostproc" (OuterVolumeSpecName: "hostproc") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:24:51.853759 kubelet[2538]: I0312 01:24:51.851608 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:24:51.853759 kubelet[2538]: I0312 01:24:51.851634 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:24:51.854575 kubelet[2538]: I0312 01:24:51.854539 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:24:51.854697 kubelet[2538]: I0312 01:24:51.854676 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cni-path" (OuterVolumeSpecName: "cni-path") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:24:51.854846 kubelet[2538]: I0312 01:24:51.854825 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:24:51.858457 kubelet[2538]: I0312 01:24:51.858415 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 01:24:51.859861 kubelet[2538]: I0312 01:24:51.859829 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41dc5ac5-c30f-43c1-8629-e6a2575f1107-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 12 01:24:51.866223 kubelet[2538]: I0312 01:24:51.862901 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 12 01:24:51.866223 kubelet[2538]: I0312 01:24:51.862042 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41dc5ac5-c30f-43c1-8629-e6a2575f1107-kube-api-access-k6ljh" (OuterVolumeSpecName: "kube-api-access-k6ljh") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "kube-api-access-k6ljh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 12 01:24:51.867282 kubelet[2538]: I0312 01:24:51.867193 2538 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41dc5ac5-c30f-43c1-8629-e6a2575f1107-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "41dc5ac5-c30f-43c1-8629-e6a2575f1107" (UID: "41dc5ac5-c30f-43c1-8629-e6a2575f1107"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 12 01:24:51.903112 systemd[1]: Removed slice kubepods-besteffort-podd4db1f92_06d8_4bb3_8517_97d7485789b9.slice - libcontainer container kubepods-besteffort-podd4db1f92_06d8_4bb3_8517_97d7485789b9.slice.
Mar 12 01:24:51.906209 systemd[1]: Removed slice kubepods-burstable-pod41dc5ac5_c30f_43c1_8629_e6a2575f1107.slice - libcontainer container kubepods-burstable-pod41dc5ac5_c30f_43c1_8629_e6a2575f1107.slice.
Mar 12 01:24:51.906543 systemd[1]: kubepods-burstable-pod41dc5ac5_c30f_43c1_8629_e6a2575f1107.slice: Consumed 11.202s CPU time.
Mar 12 01:24:51.952132 kubelet[2538]: I0312 01:24:51.951664 2538 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41dc5ac5-c30f-43c1-8629-e6a2575f1107-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.952132 kubelet[2538]: I0312 01:24:51.951736 2538 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41dc5ac5-c30f-43c1-8629-e6a2575f1107-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.952132 kubelet[2538]: I0312 01:24:51.951751 2538 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.953985 kubelet[2538]: I0312 01:24:51.953084 2538 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.953985 kubelet[2538]: I0312 01:24:51.953134 2538 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.953985 kubelet[2538]: I0312 01:24:51.953150 2538 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.953985 kubelet[2538]: I0312 01:24:51.953166 2538 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.953985 kubelet[2538]: I0312 01:24:51.953178 2538 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.953985 kubelet[2538]: I0312 01:24:51.953189 2538 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.953985 kubelet[2538]: I0312 01:24:51.953200 2538 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.953985 kubelet[2538]: I0312 01:24:51.953212 2538 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k6ljh\" (UniqueName: \"kubernetes.io/projected/41dc5ac5-c30f-43c1-8629-e6a2575f1107-kube-api-access-k6ljh\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.954289 kubelet[2538]: I0312 01:24:51.953224 2538 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.954289 kubelet[2538]: I0312 01:24:51.953235 2538 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:51.954289 kubelet[2538]: I0312 01:24:51.953246 2538 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41dc5ac5-c30f-43c1-8629-e6a2575f1107-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 12 01:24:52.291700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea-rootfs.mount: Deactivated successfully.
Mar 12 01:24:52.293375 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5eedbd998f1d113a906699bbde4b051bbd00dd57a4ae13de112a6e0e762950ea-shm.mount: Deactivated successfully.
Mar 12 01:24:52.293556 systemd[1]: var-lib-kubelet-pods-d4db1f92\x2d06d8\x2d4bb3\x2d8517\x2d97d7485789b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgh58r.mount: Deactivated successfully.
Mar 12 01:24:52.293691 systemd[1]: var-lib-kubelet-pods-41dc5ac5\x2dc30f\x2d43c1\x2d8629\x2de6a2575f1107-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6ljh.mount: Deactivated successfully.
Mar 12 01:24:52.293901 systemd[1]: var-lib-kubelet-pods-41dc5ac5\x2dc30f\x2d43c1\x2d8629\x2de6a2575f1107-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 12 01:24:52.294133 systemd[1]: var-lib-kubelet-pods-41dc5ac5\x2dc30f\x2d43c1\x2d8629\x2de6a2575f1107-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 12 01:24:52.521425 kubelet[2538]: I0312 01:24:52.521382 2538 scope.go:117] "RemoveContainer" containerID="b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c"
Mar 12 01:24:52.548743 containerd[1469]: time="2026-03-12T01:24:52.540499788Z" level=info msg="RemoveContainer for \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\""
Mar 12 01:24:52.572002 containerd[1469]: time="2026-03-12T01:24:52.571631882Z" level=info msg="RemoveContainer for \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\" returns successfully"
Mar 12 01:24:52.574114 kubelet[2538]: I0312 01:24:52.572758 2538 scope.go:117] "RemoveContainer" containerID="b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c"
Mar 12 01:24:52.585211 containerd[1469]: time="2026-03-12T01:24:52.584903782Z" level=error msg="ContainerStatus for \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\": not found"
Mar 12 01:24:52.639013 kubelet[2538]: E0312 01:24:52.636444 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\": not found" containerID="b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c"
Mar 12 01:24:52.639013 kubelet[2538]: I0312 01:24:52.636537 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c"} err="failed to get container status \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b19c47e5bed8babb9fa08a5c4ba114a95140c6702e84477b51736daa9147510c\": not found"
Mar 12 01:24:52.639013 kubelet[2538]: I0312 01:24:52.636612 2538 scope.go:117] "RemoveContainer" containerID="489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec"
Mar 12 01:24:52.640500 containerd[1469]: time="2026-03-12T01:24:52.640463928Z" level=info msg="RemoveContainer for \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\""
Mar 12 01:24:52.654548 containerd[1469]: time="2026-03-12T01:24:52.654249036Z" level=info msg="RemoveContainer for \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\" returns successfully"
Mar 12 01:24:52.654733 kubelet[2538]: I0312 01:24:52.654639 2538 scope.go:117] "RemoveContainer" containerID="a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085"
Mar 12 01:24:52.671083 containerd[1469]: time="2026-03-12T01:24:52.670445887Z" level=info msg="RemoveContainer for \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\""
Mar 12 01:24:52.685005 containerd[1469]: time="2026-03-12T01:24:52.684831404Z" level=info msg="RemoveContainer for \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\" returns successfully"
Mar 12 01:24:52.686250 kubelet[2538]: I0312 01:24:52.686171 2538 scope.go:117] "RemoveContainer" containerID="b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb"
Mar 12 01:24:52.691245 containerd[1469]: time="2026-03-12T01:24:52.691157379Z" level=info msg="RemoveContainer for \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\""
Mar 12 01:24:52.706474 containerd[1469]: time="2026-03-12T01:24:52.706320309Z" level=info msg="RemoveContainer for \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\" returns successfully"
Mar 12 01:24:52.707642 kubelet[2538]: I0312 01:24:52.706707 2538 scope.go:117] "RemoveContainer" containerID="75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b"
Mar 12 01:24:52.710034 containerd[1469]: time="2026-03-12T01:24:52.709565189Z" level=info msg="RemoveContainer for \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\""
Mar 12 01:24:52.733336 containerd[1469]: time="2026-03-12T01:24:52.733157853Z" level=info msg="RemoveContainer for \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\" returns successfully"
Mar 12 01:24:52.733891 kubelet[2538]: I0312 01:24:52.733690 2538 scope.go:117] "RemoveContainer" containerID="12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd"
Mar 12 01:24:52.737183 containerd[1469]: time="2026-03-12T01:24:52.737145931Z" level=info msg="RemoveContainer for \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\""
Mar 12 01:24:52.750675 containerd[1469]: time="2026-03-12T01:24:52.750292033Z" level=info msg="RemoveContainer for \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\" returns successfully"
Mar 12 01:24:52.751242 kubelet[2538]: I0312 01:24:52.751137 2538 scope.go:117] "RemoveContainer" containerID="489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec"
Mar 12 01:24:52.751613 containerd[1469]: time="2026-03-12T01:24:52.751480303Z" level=error msg="ContainerStatus for \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\": not found"
Mar 12 01:24:52.751686 kubelet[2538]: E0312 01:24:52.751644 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\": not found" containerID="489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec"
Mar 12 01:24:52.751742 kubelet[2538]: I0312 01:24:52.751683 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec"} err="failed to get container status \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"489121dd90057dc6d41e44e56e730f7221efe284fa90083b2e17a93d2efef8ec\": not found"
Mar 12 01:24:52.751742 kubelet[2538]: I0312 01:24:52.751720 2538 scope.go:117] "RemoveContainer" containerID="a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085"
Mar 12 01:24:52.753992 containerd[1469]: time="2026-03-12T01:24:52.752119046Z" level=error msg="ContainerStatus for \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\": not found"
Mar 12 01:24:52.753992 containerd[1469]: time="2026-03-12T01:24:52.752641261Z" level=error msg="ContainerStatus for \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\": not found"
Mar 12 01:24:52.753992 containerd[1469]: time="2026-03-12T01:24:52.753335718Z" level=error msg="ContainerStatus for \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\": not found"
Mar 12 01:24:52.753992 containerd[1469]: time="2026-03-12T01:24:52.753689189Z" level=error msg="ContainerStatus for \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\": not found"
Mar 12 01:24:52.754203 kubelet[2538]: E0312 01:24:52.752346 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\": not found" containerID="a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085"
Mar 12 01:24:52.754203 kubelet[2538]: I0312 01:24:52.752376 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085"} err="failed to get container status \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\": rpc error: code = NotFound desc = an error occurred when try to find container \"a99dc366b3139eb6337ec8a1606dd7f09769113cb5450e5d445ba871bdad2085\": not found"
Mar 12 01:24:52.754203 kubelet[2538]: I0312 01:24:52.752430 2538 scope.go:117] "RemoveContainer" containerID="b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb"
Mar 12 01:24:52.754203 kubelet[2538]: E0312 01:24:52.753092 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\": not found" containerID="b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb"
Mar 12 01:24:52.754203 kubelet[2538]: I0312 01:24:52.753121 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb"} err="failed to get container status \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2852a2acdf585f2cf77d5e38b79a825868c72721e25998993a7334492a30ddb\": not found"
Mar 12 01:24:52.754203 kubelet[2538]: I0312 01:24:52.753142 2538 scope.go:117] "RemoveContainer" containerID="75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b"
Mar 12 01:24:52.754430 kubelet[2538]: E0312 01:24:52.753444 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\": not found" containerID="75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b"
Mar 12 01:24:52.754430 kubelet[2538]: I0312 01:24:52.753469 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b"} err="failed to get container status \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\": rpc error: code = NotFound desc = an error occurred when try to find container \"75a49a4502858c9fe47a95241a41e0ce28f5a694664191c12a6807099135495b\": not found"
Mar 12 01:24:52.754430 kubelet[2538]: I0312 01:24:52.753504 2538 scope.go:117] "RemoveContainer" containerID="12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd"
Mar 12 01:24:52.757486 kubelet[2538]: E0312 01:24:52.756454 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\": not found" containerID="12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd"
Mar 12 01:24:52.758894 kubelet[2538]: I0312 01:24:52.758705 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd"} err="failed to get container status \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\": rpc error: code = NotFound desc = an error occurred when try to find container \"12878764854cb9fda593eeb6405d4a5267fc940860dbaa238de1dd0de1242cbd\": not found"
Mar 12 01:24:53.003198 sshd[4268]: pam_unix(sshd:session): session closed for user core
Mar 12 01:24:53.030561 systemd[1]: Started sshd@28-10.0.0.45:22-10.0.0.1:33402.service - OpenSSH per-connection server daemon (10.0.0.1:33402).
Mar 12 01:24:53.031733 systemd[1]: sshd@27-10.0.0.45:22-10.0.0.1:54236.service: Deactivated successfully.
Mar 12 01:24:53.041450 systemd[1]: session-28.scope: Deactivated successfully.
Mar 12 01:24:53.047054 systemd-logind[1454]: Session 28 logged out. Waiting for processes to exit.
Mar 12 01:24:53.054155 systemd-logind[1454]: Removed session 28.
Mar 12 01:24:53.103459 sshd[4432]: Accepted publickey for core from 10.0.0.1 port 33402 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:24:53.110171 sshd[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:24:53.131374 systemd-logind[1454]: New session 29 of user core.
Mar 12 01:24:53.149511 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 12 01:24:53.884604 kubelet[2538]: I0312 01:24:53.884520 2538 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41dc5ac5-c30f-43c1-8629-e6a2575f1107" path="/var/lib/kubelet/pods/41dc5ac5-c30f-43c1-8629-e6a2575f1107/volumes"
Mar 12 01:24:53.886381 kubelet[2538]: I0312 01:24:53.886312 2538 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4db1f92-06d8-4bb3-8517-97d7485789b9" path="/var/lib/kubelet/pods/d4db1f92-06d8-4bb3-8517-97d7485789b9/volumes"
Mar 12 01:24:54.495486 sshd[4432]: pam_unix(sshd:session): session closed for user core
Mar 12 01:24:54.513262 systemd[1]: sshd@28-10.0.0.45:22-10.0.0.1:33402.service: Deactivated successfully.
Mar 12 01:24:54.518173 systemd[1]: session-29.scope: Deactivated successfully.
Mar 12 01:24:54.523979 systemd-logind[1454]: Session 29 logged out. Waiting for processes to exit.
Mar 12 01:24:54.538031 systemd[1]: Started sshd@29-10.0.0.45:22-10.0.0.1:33414.service - OpenSSH per-connection server daemon (10.0.0.1:33414).
Mar 12 01:24:54.544723 systemd-logind[1454]: Removed session 29.
Mar 12 01:24:54.588362 systemd[1]: Created slice kubepods-burstable-pode15b3ff8_beaf_4f96_a5ad_ff279729ae2a.slice - libcontainer container kubepods-burstable-pode15b3ff8_beaf_4f96_a5ad_ff279729ae2a.slice.
Mar 12 01:24:54.623748 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 33414 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:24:54.625884 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:24:54.638042 systemd-logind[1454]: New session 30 of user core.
Mar 12 01:24:54.652533 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 12 01:24:54.701312 kubelet[2538]: I0312 01:24:54.701151 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-xtables-lock\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.701312 kubelet[2538]: I0312 01:24:54.701260 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-host-proc-sys-net\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.701504 kubelet[2538]: I0312 01:24:54.701349 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-cilium-cgroup\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.701504 kubelet[2538]: I0312 01:24:54.701375 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-cilium-run\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.701504 kubelet[2538]: I0312 01:24:54.701402 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-hostproc\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.701504 kubelet[2538]: I0312 01:24:54.701424 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-host-proc-sys-kernel\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.701504 kubelet[2538]: I0312 01:24:54.701445 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh4bl\" (UniqueName: \"kubernetes.io/projected/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-kube-api-access-jh4bl\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.701504 kubelet[2538]: I0312 01:24:54.701469 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-etc-cni-netd\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.703969 kubelet[2538]: I0312 01:24:54.701491 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-bpf-maps\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.703969 kubelet[2538]: I0312 01:24:54.703109 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-clustermesh-secrets\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.703969 kubelet[2538]: I0312 01:24:54.703135 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-cilium-config-path\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.703969 kubelet[2538]: I0312 01:24:54.703158 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-cilium-ipsec-secrets\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.703969 kubelet[2538]: I0312 01:24:54.703188 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-hubble-tls\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.703969 kubelet[2538]: I0312 01:24:54.703213 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-cni-path\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.704211 kubelet[2538]: I0312 01:24:54.703232 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e15b3ff8-beaf-4f96-a5ad-ff279729ae2a-lib-modules\") pod \"cilium-splgs\" (UID: \"e15b3ff8-beaf-4f96-a5ad-ff279729ae2a\") " pod="kube-system/cilium-splgs"
Mar 12 01:24:54.725062 sshd[4447]: pam_unix(sshd:session): session closed for user core
Mar 12 01:24:54.749910 systemd[1]: sshd@29-10.0.0.45:22-10.0.0.1:33414.service: Deactivated successfully.
Mar 12 01:24:54.755155 systemd[1]: session-30.scope: Deactivated successfully.
Mar 12 01:24:54.769324 systemd-logind[1454]: Session 30 logged out. Waiting for processes to exit.
Mar 12 01:24:54.783080 systemd[1]: Started sshd@30-10.0.0.45:22-10.0.0.1:33430.service - OpenSSH per-connection server daemon (10.0.0.1:33430).
Mar 12 01:24:54.786037 systemd-logind[1454]: Removed session 30.
Mar 12 01:24:54.897583 kubelet[2538]: E0312 01:24:54.895299 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:54.898222 containerd[1469]: time="2026-03-12T01:24:54.897862300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-splgs,Uid:e15b3ff8-beaf-4f96-a5ad-ff279729ae2a,Namespace:kube-system,Attempt:0,}"
Mar 12 01:24:54.906039 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 33430 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:24:54.905862 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:24:54.928226 systemd-logind[1454]: New session 31 of user core.
Mar 12 01:24:54.949265 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 12 01:24:54.985808 containerd[1469]: time="2026-03-12T01:24:54.983104076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:24:54.985808 containerd[1469]: time="2026-03-12T01:24:54.985375078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:24:54.985808 containerd[1469]: time="2026-03-12T01:24:54.985396879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:24:54.985808 containerd[1469]: time="2026-03-12T01:24:54.985535126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:24:55.040543 systemd[1]: Started cri-containerd-5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2.scope - libcontainer container 5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2.
Mar 12 01:24:55.101801 containerd[1469]: time="2026-03-12T01:24:55.101626408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-splgs,Uid:e15b3ff8-beaf-4f96-a5ad-ff279729ae2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\""
Mar 12 01:24:55.102655 kubelet[2538]: E0312 01:24:55.102626 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:55.119840 containerd[1469]: time="2026-03-12T01:24:55.119743008Z" level=info msg="CreateContainer within sandbox \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 12 01:24:55.172517 containerd[1469]: time="2026-03-12T01:24:55.172424843Z" level=info msg="CreateContainer within sandbox \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"23a7abbf533ad4341f5028da99f42f70c9e7d075fdb67f1793baab07e155f985\""
Mar 12 01:24:55.175338 containerd[1469]: time="2026-03-12T01:24:55.174578415Z" level=info msg="StartContainer for \"23a7abbf533ad4341f5028da99f42f70c9e7d075fdb67f1793baab07e155f985\""
Mar 12 01:24:55.243718 systemd[1]: Started cri-containerd-23a7abbf533ad4341f5028da99f42f70c9e7d075fdb67f1793baab07e155f985.scope - libcontainer container 23a7abbf533ad4341f5028da99f42f70c9e7d075fdb67f1793baab07e155f985.
Mar 12 01:24:55.311876 containerd[1469]: time="2026-03-12T01:24:55.311707279Z" level=info msg="StartContainer for \"23a7abbf533ad4341f5028da99f42f70c9e7d075fdb67f1793baab07e155f985\" returns successfully"
Mar 12 01:24:55.474731 systemd[1]: cri-containerd-23a7abbf533ad4341f5028da99f42f70c9e7d075fdb67f1793baab07e155f985.scope: Deactivated successfully.
Mar 12 01:24:55.638026 kubelet[2538]: E0312 01:24:55.634658 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:55.708196 containerd[1469]: time="2026-03-12T01:24:55.706579186Z" level=info msg="shim disconnected" id=23a7abbf533ad4341f5028da99f42f70c9e7d075fdb67f1793baab07e155f985 namespace=k8s.io
Mar 12 01:24:55.708196 containerd[1469]: time="2026-03-12T01:24:55.706642083Z" level=warning msg="cleaning up after shim disconnected" id=23a7abbf533ad4341f5028da99f42f70c9e7d075fdb67f1793baab07e155f985 namespace=k8s.io
Mar 12 01:24:55.708196 containerd[1469]: time="2026-03-12T01:24:55.706657432Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:24:55.990451 kubelet[2538]: E0312 01:24:55.990217 2538 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 12 01:24:56.641719 kubelet[2538]: E0312 01:24:56.641606 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:56.663313 containerd[1469]: time="2026-03-12T01:24:56.663148650Z" level=info msg="CreateContainer within sandbox \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 12 01:24:56.708149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038144608.mount: Deactivated successfully.
Mar 12 01:24:56.715486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220052099.mount: Deactivated successfully.
Mar 12 01:24:56.741633 containerd[1469]: time="2026-03-12T01:24:56.738302410Z" level=info msg="CreateContainer within sandbox \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"afc9d84309d645a6650ca1140b266912e05eaa3b9c2a0208effd478b2e68d9a3\""
Mar 12 01:24:56.741633 containerd[1469]: time="2026-03-12T01:24:56.739199766Z" level=info msg="StartContainer for \"afc9d84309d645a6650ca1140b266912e05eaa3b9c2a0208effd478b2e68d9a3\""
Mar 12 01:24:56.827561 systemd[1]: Started cri-containerd-afc9d84309d645a6650ca1140b266912e05eaa3b9c2a0208effd478b2e68d9a3.scope - libcontainer container afc9d84309d645a6650ca1140b266912e05eaa3b9c2a0208effd478b2e68d9a3.
Mar 12 01:24:56.922049 containerd[1469]: time="2026-03-12T01:24:56.921241006Z" level=info msg="StartContainer for \"afc9d84309d645a6650ca1140b266912e05eaa3b9c2a0208effd478b2e68d9a3\" returns successfully"
Mar 12 01:24:56.943272 systemd[1]: cri-containerd-afc9d84309d645a6650ca1140b266912e05eaa3b9c2a0208effd478b2e68d9a3.scope: Deactivated successfully.
Mar 12 01:24:57.012229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afc9d84309d645a6650ca1140b266912e05eaa3b9c2a0208effd478b2e68d9a3-rootfs.mount: Deactivated successfully.
Mar 12 01:24:57.033106 containerd[1469]: time="2026-03-12T01:24:57.033010421Z" level=info msg="shim disconnected" id=afc9d84309d645a6650ca1140b266912e05eaa3b9c2a0208effd478b2e68d9a3 namespace=k8s.io
Mar 12 01:24:57.033106 containerd[1469]: time="2026-03-12T01:24:57.033094949Z" level=warning msg="cleaning up after shim disconnected" id=afc9d84309d645a6650ca1140b266912e05eaa3b9c2a0208effd478b2e68d9a3 namespace=k8s.io
Mar 12 01:24:57.033106 containerd[1469]: time="2026-03-12T01:24:57.033111240Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:24:57.656902 kubelet[2538]: E0312 01:24:57.656223 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:57.692887 containerd[1469]: time="2026-03-12T01:24:57.690295882Z" level=info msg="CreateContainer within sandbox \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 12 01:24:57.771611 containerd[1469]: time="2026-03-12T01:24:57.771095984Z" level=info msg="CreateContainer within sandbox \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ceb807e2f220cf4b1baf81812c20dfe49795dc72945a4bf18aab7690154b21bd\""
Mar 12 01:24:57.774041 containerd[1469]: time="2026-03-12T01:24:57.773833659Z" level=info msg="StartContainer for \"ceb807e2f220cf4b1baf81812c20dfe49795dc72945a4bf18aab7690154b21bd\""
Mar 12 01:24:57.847370 systemd[1]: Started cri-containerd-ceb807e2f220cf4b1baf81812c20dfe49795dc72945a4bf18aab7690154b21bd.scope - libcontainer container ceb807e2f220cf4b1baf81812c20dfe49795dc72945a4bf18aab7690154b21bd.
Mar 12 01:24:57.923348 containerd[1469]: time="2026-03-12T01:24:57.922112545Z" level=info msg="StartContainer for \"ceb807e2f220cf4b1baf81812c20dfe49795dc72945a4bf18aab7690154b21bd\" returns successfully"
Mar 12 01:24:57.922354 systemd[1]: cri-containerd-ceb807e2f220cf4b1baf81812c20dfe49795dc72945a4bf18aab7690154b21bd.scope: Deactivated successfully.
Mar 12 01:24:57.970479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ceb807e2f220cf4b1baf81812c20dfe49795dc72945a4bf18aab7690154b21bd-rootfs.mount: Deactivated successfully.
Mar 12 01:24:57.985595 containerd[1469]: time="2026-03-12T01:24:57.985085987Z" level=info msg="shim disconnected" id=ceb807e2f220cf4b1baf81812c20dfe49795dc72945a4bf18aab7690154b21bd namespace=k8s.io
Mar 12 01:24:57.985595 containerd[1469]: time="2026-03-12T01:24:57.985184221Z" level=warning msg="cleaning up after shim disconnected" id=ceb807e2f220cf4b1baf81812c20dfe49795dc72945a4bf18aab7690154b21bd namespace=k8s.io
Mar 12 01:24:57.985595 containerd[1469]: time="2026-03-12T01:24:57.985227472Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:24:58.010838 containerd[1469]: time="2026-03-12T01:24:58.010706225Z" level=warning msg="cleanup warnings time=\"2026-03-12T01:24:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 12 01:24:58.665499 kubelet[2538]: E0312 01:24:58.664415 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:58.680221 containerd[1469]: time="2026-03-12T01:24:58.680169957Z" level=info msg="CreateContainer within sandbox \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 01:24:58.713811 containerd[1469]: time="2026-03-12T01:24:58.713681145Z" level=info msg="CreateContainer within sandbox \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86569fe786996c4d16754736123456cc1eb40ab380fb71ad3594747553d97388\""
Mar 12 01:24:58.715328 containerd[1469]: time="2026-03-12T01:24:58.715124591Z" level=info msg="StartContainer for \"86569fe786996c4d16754736123456cc1eb40ab380fb71ad3594747553d97388\""
Mar 12 01:24:58.765205 systemd[1]: Started cri-containerd-86569fe786996c4d16754736123456cc1eb40ab380fb71ad3594747553d97388.scope - libcontainer container 86569fe786996c4d16754736123456cc1eb40ab380fb71ad3594747553d97388.
Mar 12 01:24:58.801090 systemd[1]: cri-containerd-86569fe786996c4d16754736123456cc1eb40ab380fb71ad3594747553d97388.scope: Deactivated successfully.
Mar 12 01:24:58.807683 containerd[1469]: time="2026-03-12T01:24:58.807372184Z" level=info msg="StartContainer for \"86569fe786996c4d16754736123456cc1eb40ab380fb71ad3594747553d97388\" returns successfully"
Mar 12 01:24:58.812503 kubelet[2538]: I0312 01:24:58.812356 2538 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T01:24:58Z","lastTransitionTime":"2026-03-12T01:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 12 01:24:58.852128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86569fe786996c4d16754736123456cc1eb40ab380fb71ad3594747553d97388-rootfs.mount: Deactivated successfully.
Mar 12 01:24:58.872639 containerd[1469]: time="2026-03-12T01:24:58.872567765Z" level=info msg="shim disconnected" id=86569fe786996c4d16754736123456cc1eb40ab380fb71ad3594747553d97388 namespace=k8s.io
Mar 12 01:24:58.872639 containerd[1469]: time="2026-03-12T01:24:58.872628789Z" level=warning msg="cleaning up after shim disconnected" id=86569fe786996c4d16754736123456cc1eb40ab380fb71ad3594747553d97388 namespace=k8s.io
Mar 12 01:24:58.872639 containerd[1469]: time="2026-03-12T01:24:58.872638426Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:24:59.678566 kubelet[2538]: E0312 01:24:59.678356 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:59.700571 containerd[1469]: time="2026-03-12T01:24:59.698100851Z" level=info msg="CreateContainer within sandbox \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 01:24:59.736085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665125705.mount: Deactivated successfully.
Mar 12 01:24:59.746084 containerd[1469]: time="2026-03-12T01:24:59.745464243Z" level=info msg="CreateContainer within sandbox \"5ef2cd6521bd3fe573e24bb66a9764e11a8aa619875ee4dcfce5a01c752374b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dcb719a227908133660abcd5686bb618465b81f2565568e1826aab084800d3f3\""
Mar 12 01:24:59.747138 containerd[1469]: time="2026-03-12T01:24:59.746649597Z" level=info msg="StartContainer for \"dcb719a227908133660abcd5686bb618465b81f2565568e1826aab084800d3f3\""
Mar 12 01:24:59.827454 systemd[1]: Started cri-containerd-dcb719a227908133660abcd5686bb618465b81f2565568e1826aab084800d3f3.scope - libcontainer container dcb719a227908133660abcd5686bb618465b81f2565568e1826aab084800d3f3.
Mar 12 01:24:59.891537 containerd[1469]: time="2026-03-12T01:24:59.891388746Z" level=info msg="StartContainer for \"dcb719a227908133660abcd5686bb618465b81f2565568e1826aab084800d3f3\" returns successfully"
Mar 12 01:25:00.556482 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 12 01:25:00.689966 kubelet[2538]: E0312 01:25:00.689747 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:25:00.742161 kubelet[2538]: I0312 01:25:00.742073 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-splgs" podStartSLOduration=6.742048596 podStartE2EDuration="6.742048596s" podCreationTimestamp="2026-03-12 01:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:25:00.735349881 +0000 UTC m=+135.102939174" watchObservedRunningTime="2026-03-12 01:25:00.742048596 +0000 UTC m=+135.109637889"
Mar 12 01:25:01.698612 kubelet[2538]: E0312 01:25:01.697149 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:25:04.049235 systemd[1]: run-containerd-runc-k8s.io-dcb719a227908133660abcd5686bb618465b81f2565568e1826aab084800d3f3-runc.VX8FCh.mount: Deactivated successfully.
Mar 12 01:25:06.472252 systemd-networkd[1389]: lxc_health: Link UP
Mar 12 01:25:06.490575 systemd-networkd[1389]: lxc_health: Gained carrier
Mar 12 01:25:06.552650 systemd[1]: run-containerd-runc-k8s.io-dcb719a227908133660abcd5686bb618465b81f2565568e1826aab084800d3f3-runc.giP8xG.mount: Deactivated successfully.
Mar 12 01:25:06.902743 kubelet[2538]: E0312 01:25:06.902530 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:25:07.694175 systemd-networkd[1389]: lxc_health: Gained IPv6LL
Mar 12 01:25:07.727015 kubelet[2538]: E0312 01:25:07.726258 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:25:08.727289 kubelet[2538]: E0312 01:25:08.727171 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:25:08.878244 kubelet[2538]: E0312 01:25:08.878130 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:25:13.698579 sshd[4455]: pam_unix(sshd:session): session closed for user core
Mar 12 01:25:13.704844 systemd[1]: sshd@30-10.0.0.45:22-10.0.0.1:33430.service: Deactivated successfully.
Mar 12 01:25:13.708652 systemd[1]: session-31.scope: Deactivated successfully.
Mar 12 01:25:13.710052 systemd-logind[1454]: Session 31 logged out. Waiting for processes to exit.
Mar 12 01:25:13.713525 systemd-logind[1454]: Removed session 31.