Mar 2 12:58:37.860269 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 10:28:24 -00 2026
Mar 2 12:58:37.860344 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 12:58:37.860360 kernel: BIOS-provided physical RAM map:
Mar 2 12:58:37.860369 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 2 12:58:37.860379 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 2 12:58:37.860389 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 2 12:58:37.860400 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 2 12:58:37.860409 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 2 12:58:37.860455 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 2 12:58:37.860465 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 2 12:58:37.860474 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 12:58:37.860487 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 2 12:58:37.860496 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 12:58:37.860504 kernel: NX (Execute Disable) protection: active
Mar 2 12:58:37.860514 kernel: APIC: Static calls initialized
Mar 2 12:58:37.860525 kernel: SMBIOS 2.8 present.
Mar 2 12:58:37.860574 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 2 12:58:37.860585 kernel: DMI: Memory slots populated: 1/1
Mar 2 12:58:37.860594 kernel: Hypervisor detected: KVM
Mar 2 12:58:37.860602 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 2 12:58:37.860611 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 12:58:37.860620 kernel: kvm-clock: using sched offset of 20748224671 cycles
Mar 2 12:58:37.860630 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 12:58:37.860639 kernel: tsc: Detected 2445.426 MHz processor
Mar 2 12:58:37.860648 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 12:58:37.860658 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 12:58:37.860672 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 2 12:58:37.860682 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 2 12:58:37.860692 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 12:58:37.860703 kernel: Using GB pages for direct mapping
Mar 2 12:58:37.860715 kernel: ACPI: Early table checksum verification disabled
Mar 2 12:58:37.860725 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 2 12:58:37.860735 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:58:37.860747 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:58:37.860757 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:58:37.860776 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 2 12:58:37.860786 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:58:37.860796 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:58:37.860806 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:58:37.860817 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:58:37.860832 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 2 12:58:37.860849 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 2 12:58:37.860860 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 2 12:58:37.860949 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 2 12:58:37.860962 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 2 12:58:37.860974 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 2 12:58:37.860984 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 2 12:58:37.860995 kernel: No NUMA configuration found
Mar 2 12:58:37.861044 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 2 12:58:37.861375 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 2 12:58:37.861387 kernel: Zone ranges:
Mar 2 12:58:37.861430 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 12:58:37.861467 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 2 12:58:37.861625 kernel: Normal empty
Mar 2 12:58:37.861639 kernel: Device empty
Mar 2 12:58:37.861677 kernel: Movable zone start for each node
Mar 2 12:58:37.861718 kernel: Early memory node ranges
Mar 2 12:58:37.861729 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 2 12:58:37.861740 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 2 12:58:37.861760 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 2 12:58:37.861770 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 12:58:37.861781 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 2 12:58:37.861831 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 2 12:58:37.861842 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 12:58:37.861852 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 12:58:37.861862 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 12:58:37.861959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 12:58:37.862018 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 12:58:37.862042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 12:58:37.862053 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 12:58:37.862064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 12:58:37.862075 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 12:58:37.862085 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 12:58:37.862095 kernel: TSC deadline timer available
Mar 2 12:58:37.862105 kernel: CPU topo: Max. logical packages: 1
Mar 2 12:58:37.862115 kernel: CPU topo: Max. logical dies: 1
Mar 2 12:58:37.862124 kernel: CPU topo: Max. dies per package: 1
Mar 2 12:58:37.862139 kernel: CPU topo: Max. threads per core: 1
Mar 2 12:58:37.862150 kernel: CPU topo: Num. cores per package: 4
Mar 2 12:58:37.862159 kernel: CPU topo: Num. threads per package: 4
Mar 2 12:58:37.862169 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 2 12:58:37.862178 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 12:58:37.862188 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 12:58:37.862198 kernel: kvm-guest: setup PV sched yield
Mar 2 12:58:37.862208 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 2 12:58:37.862218 kernel: Booting paravirtualized kernel on KVM
Mar 2 12:58:37.862232 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 12:58:37.862242 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 12:58:37.862252 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 2 12:58:37.862262 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 2 12:58:37.862271 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 12:58:37.862333 kernel: kvm-guest: PV spinlocks enabled
Mar 2 12:58:37.862347 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 12:58:37.862361 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 12:58:37.862379 kernel: random: crng init done
Mar 2 12:58:37.862391 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 12:58:37.862404 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 12:58:37.862414 kernel: Fallback order for Node 0: 0
Mar 2 12:58:37.862426 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 2 12:58:37.862436 kernel: Policy zone: DMA32
Mar 2 12:58:37.862447 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 12:58:37.862458 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 12:58:37.862469 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 2 12:58:37.862486 kernel: ftrace: allocated 157 pages with 5 groups
Mar 2 12:58:37.862496 kernel: Dynamic Preempt: voluntary
Mar 2 12:58:37.862506 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 12:58:37.862517 kernel: rcu: RCU event tracing is enabled.
Mar 2 12:58:37.862527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 12:58:37.862537 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 12:58:37.862585 kernel: Rude variant of Tasks RCU enabled.
Mar 2 12:58:37.862596 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 12:58:37.862606 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 12:58:37.862615 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 12:58:37.862630 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:58:37.862639 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:58:37.862649 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:58:37.862659 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 12:58:37.862670 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 12:58:37.862690 kernel: Console: colour VGA+ 80x25
Mar 2 12:58:37.862703 kernel: printk: legacy console [ttyS0] enabled
Mar 2 12:58:37.862713 kernel: ACPI: Core revision 20240827
Mar 2 12:58:37.862723 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 12:58:37.862733 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 12:58:37.862744 kernel: x2apic enabled
Mar 2 12:58:37.862761 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 12:58:37.862809 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 12:58:37.862822 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 12:58:37.862834 kernel: kvm-guest: setup PV IPIs
Mar 2 12:58:37.862845 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 12:58:37.862861 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 2 12:58:37.862951 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 2 12:58:37.862965 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 12:58:37.862977 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 12:58:37.862989 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 12:58:37.863001 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 12:58:37.863013 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 12:58:37.863025 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 12:58:37.863036 kernel: Speculative Store Bypass: Vulnerable
Mar 2 12:58:37.863053 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 12:58:37.863064 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 12:58:37.863078 kernel: active return thunk: srso_alias_return_thunk
Mar 2 12:58:37.863090 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 12:58:37.863100 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 12:58:37.863112 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 12:58:37.863123 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 12:58:37.863135 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 12:58:37.863153 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 12:58:37.863164 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 12:58:37.863174 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 12:58:37.863232 kernel: Freeing SMP alternatives memory: 32K
Mar 2 12:58:37.863243 kernel: pid_max: default: 32768 minimum: 301
Mar 2 12:58:37.863253 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 2 12:58:37.863264 kernel: landlock: Up and running.
Mar 2 12:58:37.863274 kernel: SELinux: Initializing.
Mar 2 12:58:37.863371 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 12:58:37.863390 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 12:58:37.863437 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 12:58:37.863449 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 12:58:37.863460 kernel: signal: max sigframe size: 1776
Mar 2 12:58:37.863472 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 12:58:37.863486 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 12:58:37.863497 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 2 12:58:37.863508 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 12:58:37.863519 kernel: smp: Bringing up secondary CPUs ...
Mar 2 12:58:37.863539 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 12:58:37.863549 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 12:58:37.863562 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 12:58:37.863574 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 2 12:58:37.863588 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46192K init, 2568K bss, 145096K reserved, 0K cma-reserved)
Mar 2 12:58:37.863599 kernel: devtmpfs: initialized
Mar 2 12:58:37.863611 kernel: x86/mm: Memory block size: 128MB
Mar 2 12:58:37.863622 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 12:58:37.863633 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 12:58:37.863648 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 12:58:37.863659 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 12:58:37.863670 kernel: audit: initializing netlink subsys (disabled)
Mar 2 12:58:37.863682 kernel: audit: type=2000 audit(1772456304.882:1): state=initialized audit_enabled=0 res=1
Mar 2 12:58:37.863693 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 12:58:37.863705 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 12:58:37.863719 kernel: cpuidle: using governor menu
Mar 2 12:58:37.863729 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 12:58:37.863743 kernel: dca service started, version 1.12.1
Mar 2 12:58:37.863761 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 2 12:58:37.863773 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 2 12:58:37.863786 kernel: PCI: Using configuration type 1 for base access
Mar 2 12:58:37.863797 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 12:58:37.863809 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 12:58:37.863819 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 12:58:37.863831 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 12:58:37.863842 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 12:58:37.863854 kernel: ACPI: Added _OSI(Module Device)
Mar 2 12:58:37.863947 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 12:58:37.863960 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 12:58:37.863972 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 12:58:37.863983 kernel: ACPI: Interpreter enabled
Mar 2 12:58:37.863993 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 12:58:37.864004 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 12:58:37.864017 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 12:58:37.864029 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 12:58:37.864041 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 12:58:37.864060 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 12:58:37.864953 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 12:58:37.865197 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 12:58:37.865475 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 12:58:37.865495 kernel: PCI host bridge to bus 0000:00
Mar 2 12:58:37.865856 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 12:58:37.866124 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 12:58:37.866364 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 12:58:37.866591 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 2 12:58:37.866779 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 2 12:58:37.867073 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 2 12:58:37.867355 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 12:58:37.867728 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 2 12:58:37.868166 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 2 12:58:37.868441 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 2 12:58:37.868687 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 2 12:58:37.868973 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 2 12:58:37.869189 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 12:58:37.869669 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 2 12:58:37.870003 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 2 12:58:37.870213 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 2 12:58:37.870767 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 2 12:58:37.871209 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 2 12:58:37.871479 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 2 12:58:37.871694 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 2 12:58:37.872225 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 2 12:58:37.872705 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 2 12:58:37.873010 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 2 12:58:37.873205 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 2 12:58:37.873448 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 2 12:58:37.873635 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 2 12:58:37.874030 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 2 12:58:37.874245 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 12:58:37.874652 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 2 12:58:37.874978 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 2 12:58:37.875196 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 2 12:58:37.876119 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 2 12:58:37.876390 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 2 12:58:37.876411 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 12:58:37.876424 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 12:58:37.876447 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 12:58:37.876459 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 12:58:37.876469 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 12:58:37.876481 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 12:58:37.876494 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 12:58:37.876504 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 12:58:37.876516 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 12:58:37.876528 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 12:58:37.876540 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 12:58:37.876560 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 12:58:37.876570 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 12:58:37.876581 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 12:58:37.876593 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 12:58:37.876607 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 12:58:37.876617 kernel: iommu: Default domain type: Translated
Mar 2 12:58:37.876628 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 12:58:37.876641 kernel: PCI: Using ACPI for IRQ routing
Mar 2 12:58:37.876655 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 12:58:37.876673 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 2 12:58:37.876686 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 2 12:58:37.876990 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 12:58:37.877208 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 12:58:37.877474 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 12:58:37.877496 kernel: vgaarb: loaded
Mar 2 12:58:37.877509 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 12:58:37.877520 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 12:58:37.877532 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 12:58:37.877551 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 12:58:37.877564 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 12:58:37.877578 kernel: pnp: PnP ACPI init
Mar 2 12:58:37.878024 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 2 12:58:37.878044 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 12:58:37.878056 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 12:58:37.878070 kernel: NET: Registered PF_INET protocol family
Mar 2 12:58:37.878082 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 12:58:37.878100 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 12:58:37.878113 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 12:58:37.878127 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 12:58:37.878137 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 12:58:37.878149 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 12:58:37.878160 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 12:58:37.878171 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 12:58:37.878183 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 12:58:37.878194 kernel: NET: Registered PF_XDP protocol family
Mar 2 12:58:37.878450 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 12:58:37.878635 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 12:58:37.878960 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 12:58:37.879192 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 2 12:58:37.879428 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 2 12:58:37.879667 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 2 12:58:37.879687 kernel: PCI: CLS 0 bytes, default 64
Mar 2 12:58:37.879699 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 2 12:58:37.879721 kernel: Initialise system trusted keyrings
Mar 2 12:58:37.879732 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 12:58:37.879744 kernel: Key type asymmetric registered
Mar 2 12:58:37.879756 kernel: Asymmetric key parser 'x509' registered
Mar 2 12:58:37.879766 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 2 12:58:37.879779 kernel: io scheduler mq-deadline registered
Mar 2 12:58:37.879791 kernel: io scheduler kyber registered
Mar 2 12:58:37.879802 kernel: io scheduler bfq registered
Mar 2 12:58:37.879813 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 12:58:37.879831 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 12:58:37.879843 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 12:58:37.879854 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 12:58:37.879865 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 12:58:37.880016 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 12:58:37.880027 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 12:58:37.880039 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 12:58:37.880052 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 12:58:37.880535 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 12:58:37.880799 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 12:58:37.881070 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T12:58:35 UTC (1772456315)
Mar 2 12:58:37.881331 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 2 12:58:37.881354 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 12:58:37.881366 kernel: NET: Registered PF_INET6 protocol family
Mar 2 12:58:37.881379 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 2 12:58:37.881392 kernel: Segment Routing with IPv6
Mar 2 12:58:37.881403 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 12:58:37.881423 kernel: NET: Registered PF_PACKET protocol family
Mar 2 12:58:37.881436 kernel: Key type dns_resolver registered
Mar 2 12:58:37.881447 kernel: IPI shorthand broadcast: enabled
Mar 2 12:58:37.881461 kernel: sched_clock: Marking stable (9349032574, 1058589298)->(11506312994, -1098691122)
Mar 2 12:58:37.881471 kernel: registered taskstats version 1
Mar 2 12:58:37.881484 kernel: Loading compiled-in X.509 certificates
Mar 2 12:58:37.881495 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: ca052fea375a75b056ebd4154b64794dffb70b96'
Mar 2 12:58:37.881506 kernel: Demotion targets for Node 0: null
Mar 2 12:58:37.881519 kernel: Key type .fscrypt registered
Mar 2 12:58:37.881539 kernel: Key type fscrypt-provisioning registered
Mar 2 12:58:37.881549 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 12:58:37.881561 kernel: ima: Allocated hash algorithm: sha1
Mar 2 12:58:37.881574 kernel: ima: No architecture policies found
Mar 2 12:58:37.881584 kernel: clk: Disabling unused clocks
Mar 2 12:58:37.881596 kernel: Warning: unable to open an initial console.
Mar 2 12:58:37.881610 kernel: Freeing unused kernel image (initmem) memory: 46192K
Mar 2 12:58:37.881620 kernel: Write protecting the kernel read-only data: 40960k
Mar 2 12:58:37.881638 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 2 12:58:37.881650 kernel: Run /init as init process
Mar 2 12:58:37.881662 kernel: with arguments:
Mar 2 12:58:37.881673 kernel: /init
Mar 2 12:58:37.881686 kernel: with environment:
Mar 2 12:58:37.881697 kernel: HOME=/
Mar 2 12:58:37.881710 kernel: TERM=linux
Mar 2 12:58:37.881722 systemd[1]: Successfully made /usr/ read-only.
Mar 2 12:58:37.881741 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 2 12:58:37.881758 systemd[1]: Detected virtualization kvm.
Mar 2 12:58:37.881771 systemd[1]: Detected architecture x86-64.
Mar 2 12:58:37.881782 systemd[1]: Running in initrd.
Mar 2 12:58:37.881795 systemd[1]: No hostname configured, using default hostname.
Mar 2 12:58:37.881808 systemd[1]: Hostname set to .
Mar 2 12:58:37.881822 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 12:58:37.881834 systemd[1]: Queued start job for default target initrd.target.
Mar 2 12:58:37.881853 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 12:58:37.881964 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 12:58:37.881989 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 12:58:37.882003 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 12:58:37.882015 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 12:58:37.882030 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 12:58:37.882048 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 12:58:37.882063 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 12:58:37.882078 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 12:58:37.882089 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 12:58:37.882102 systemd[1]: Reached target paths.target - Path Units.
Mar 2 12:58:37.882116 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 12:58:37.882127 systemd[1]: Reached target swap.target - Swaps.
Mar 2 12:58:37.882145 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 12:58:37.882158 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 12:58:37.882172 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 12:58:37.882186 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 12:58:37.882199 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 2 12:58:37.882212 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 12:58:37.882223 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 12:58:37.882238 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 12:58:37.882249 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 12:58:37.882269 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 12:58:37.882324 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 12:58:37.882339 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 12:58:37.882352 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 2 12:58:37.882366 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 12:58:37.882378 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 12:58:37.882390 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 12:58:37.882404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:58:37.882423 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 12:58:37.882442 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 12:58:37.882458 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 12:58:37.882472 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 12:58:37.882578 systemd-journald[203]: Collecting audit messages is disabled.
Mar 2 12:58:37.882611 systemd-journald[203]: Journal started
Mar 2 12:58:37.882642 systemd-journald[203]: Runtime Journal (/run/log/journal/ef2051a895554f87bcbc9d83113eeb15) is 6M, max 48.3M, 42.2M free.
Mar 2 12:58:37.866533 systemd-modules-load[204]: Inserted module 'overlay'
Mar 2 12:58:38.226955 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 12:58:38.227021 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 12:58:38.227039 kernel: Bridge firewalling registered
Mar 2 12:58:37.951362 systemd-modules-load[204]: Inserted module 'br_netfilter'
Mar 2 12:58:38.249106 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 12:58:38.259235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:58:38.269362 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 12:58:38.534964 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 12:58:38.550347 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 12:58:38.553084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 12:58:38.598638 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 12:58:38.650833 systemd-tmpfiles[223]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 2 12:58:38.659247 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:58:38.672598 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:58:38.677493 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 12:58:38.690095 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 12:58:38.729576 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 12:58:38.757400 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 12:58:38.868589 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2 Mar 2 12:58:38.899841 systemd-resolved[242]: Positive Trust Anchors: Mar 2 12:58:38.899856 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 12:58:38.899969 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 12:58:38.905813 systemd-resolved[242]: Defaulting to hostname 'linux'. Mar 2 12:58:38.910178 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 12:58:38.929681 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 12:58:39.235106 kernel: SCSI subsystem initialized Mar 2 12:58:39.256429 kernel: Loading iSCSI transport class v2.0-870. Mar 2 12:58:39.288014 kernel: iscsi: registered transport (tcp) Mar 2 12:58:39.345251 kernel: iscsi: registered transport (qla4xxx) Mar 2 12:58:39.345770 kernel: QLogic iSCSI HBA Driver Mar 2 12:58:39.427797 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Mar 2 12:58:39.488015 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 12:58:39.505421 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 12:58:39.815262 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 12:58:39.834169 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 12:58:40.035040 kernel: raid6: avx2x4 gen() 7543 MB/s Mar 2 12:58:40.054065 kernel: raid6: avx2x2 gen() 10384 MB/s Mar 2 12:58:40.078953 kernel: raid6: avx2x1 gen() 7091 MB/s Mar 2 12:58:40.080711 kernel: raid6: using algorithm avx2x2 gen() 10384 MB/s Mar 2 12:58:40.103369 kernel: raid6: .... xor() 10033 MB/s, rmw enabled Mar 2 12:58:40.103501 kernel: raid6: using avx2x2 recovery algorithm Mar 2 12:58:40.185612 kernel: xor: automatically using best checksumming function avx Mar 2 12:58:41.029608 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 12:58:41.051031 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 12:58:41.065977 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 12:58:41.152986 systemd-udevd[453]: Using default interface naming scheme 'v255'. Mar 2 12:58:41.163519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 12:58:41.180007 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 12:58:41.338724 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Mar 2 12:58:41.523994 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 12:58:41.548755 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 12:58:41.835608 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 12:58:41.863085 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 2 12:58:41.964067 kernel: cryptd: max_cpu_qlen set to 1000 Mar 2 12:58:42.031789 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 2 12:58:42.059639 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 2 12:58:42.081686 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 2 12:58:42.081760 kernel: GPT:9289727 != 19775487 Mar 2 12:58:42.081795 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 2 12:58:42.081812 kernel: GPT:9289727 != 19775487 Mar 2 12:58:42.081826 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 2 12:58:42.081840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:58:42.089256 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 12:58:42.091753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:58:42.117448 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:58:42.131811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 12:58:42.144536 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 2 12:58:42.235979 kernel: libata version 3.00 loaded. Mar 2 12:58:42.316955 kernel: AES CTR mode by8 optimization enabled Mar 2 12:58:42.373076 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 2 12:58:42.378971 kernel: ahci 0000:00:1f.2: version 3.0 Mar 2 12:58:42.379569 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 2 12:58:42.398655 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Mar 2 12:58:42.671232 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 2 12:58:42.672042 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 2 12:58:42.672388 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 2 12:58:42.672647 kernel: scsi host0: ahci Mar 2 12:58:42.673018 kernel: scsi host1: ahci Mar 2 12:58:42.673826 kernel: scsi host2: ahci Mar 2 12:58:42.674129 kernel: scsi host3: ahci Mar 2 12:58:42.674390 kernel: scsi host4: ahci Mar 2 12:58:42.674637 kernel: scsi host5: ahci Mar 2 12:58:42.674813 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Mar 2 12:58:42.674826 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Mar 2 12:58:42.674845 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Mar 2 12:58:42.674864 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Mar 2 12:58:42.674984 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Mar 2 12:58:42.675003 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Mar 2 12:58:42.693793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:58:42.763007 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Mar 2 12:58:42.828082 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 2 12:58:42.828117 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 2 12:58:42.828132 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 2 12:58:42.828147 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 2 12:58:42.828161 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 2 12:58:42.834029 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 2 12:58:42.841487 kernel: ata3.00: LPM support broken, forcing max_power Mar 2 12:58:42.841557 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 2 12:58:42.841577 kernel: ata3.00: applying bridge limits Mar 2 12:58:42.851451 kernel: ata3.00: LPM support broken, forcing max_power Mar 2 12:58:42.851515 kernel: ata3.00: configured for UDMA/100 Mar 2 12:58:42.859977 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 2 12:58:42.863611 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 12:58:42.879973 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 2 12:58:42.889471 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 2 12:58:42.909416 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 12:58:42.992390 disk-uuid[619]: Primary Header is updated. Mar 2 12:58:42.992390 disk-uuid[619]: Secondary Entries is updated. Mar 2 12:58:42.992390 disk-uuid[619]: Secondary Header is updated. 
Mar 2 12:58:43.027093 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:58:43.078231 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 2 12:58:43.078641 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 12:58:43.101140 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 2 12:58:43.697538 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 12:58:43.718735 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 12:58:43.733996 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 12:58:43.749608 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 12:58:43.753945 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 12:58:43.845261 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 12:58:44.056530 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 12:58:44.060501 disk-uuid[620]: The operation has completed successfully. Mar 2 12:58:44.182818 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 12:58:44.183721 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 12:58:44.275372 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 12:58:44.314129 sh[649]: Success Mar 2 12:58:44.364424 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 2 12:58:44.364522 kernel: device-mapper: uevent: version 1.0.3 Mar 2 12:58:44.370436 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 2 12:58:44.425989 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 2 12:58:44.524668 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 12:58:44.537382 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Mar 2 12:58:44.578017 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 2 12:58:44.636021 kernel: BTRFS: device fsid 760529e6-8e55-47fc-ad5a-c1c1d184e50a devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (661) Mar 2 12:58:44.636060 kernel: BTRFS info (device dm-0): first mount of filesystem 760529e6-8e55-47fc-ad5a-c1c1d184e50a Mar 2 12:58:44.636078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:58:44.735525 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 2 12:58:44.736203 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 2 12:58:44.746775 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 12:58:44.758589 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 2 12:58:44.809433 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 12:58:44.817849 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 12:58:44.847731 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 12:58:44.992271 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (692) Mar 2 12:58:45.035233 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 12:58:45.035374 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:58:45.069720 kernel: BTRFS info (device vda6): turning on async discard Mar 2 12:58:45.069806 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 12:58:45.100811 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 12:58:45.146938 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 2 12:58:45.157047 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 12:58:45.771979 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 12:58:45.827259 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 12:58:46.086973 ignition[752]: Ignition 2.22.0 Mar 2 12:58:46.087261 ignition[752]: Stage: fetch-offline Mar 2 12:58:46.089126 ignition[752]: no configs at "/usr/lib/ignition/base.d" Mar 2 12:58:46.096600 systemd-networkd[835]: lo: Link UP Mar 2 12:58:46.089152 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:58:46.096607 systemd-networkd[835]: lo: Gained carrier Mar 2 12:58:46.089527 ignition[752]: parsed url from cmdline: "" Mar 2 12:58:46.101457 systemd-networkd[835]: Enumeration completed Mar 2 12:58:46.089534 ignition[752]: no config URL provided Mar 2 12:58:46.102027 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 12:58:46.089543 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 12:58:46.103855 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:58:46.089557 ignition[752]: no config at "/usr/lib/ignition/user.ign" Mar 2 12:58:46.103864 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 12:58:46.089678 ignition[752]: op(1): [started] loading QEMU firmware config module Mar 2 12:58:46.120252 systemd-networkd[835]: eth0: Link UP Mar 2 12:58:46.089686 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 2 12:58:46.127556 systemd[1]: Reached target network.target - Network. 
Mar 2 12:58:46.194137 ignition[752]: op(1): [finished] loading QEMU firmware config module Mar 2 12:58:46.128162 systemd-networkd[835]: eth0: Gained carrier Mar 2 12:58:46.128185 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 12:58:46.211244 systemd-networkd[835]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 12:58:46.679094 ignition[752]: parsing config with SHA512: bb3667be8946a90b434bdf07856238f9415efbe330a4019989d811b9d211c21e28b4a32fee804bdf1e4ec7ee886caa5881d99598f540b1127bf78913058cd43b Mar 2 12:58:46.720995 unknown[752]: fetched base config from "system" Mar 2 12:58:46.721033 unknown[752]: fetched user config from "qemu" Mar 2 12:58:46.725033 ignition[752]: fetch-offline: fetch-offline passed Mar 2 12:58:46.725191 ignition[752]: Ignition finished successfully Mar 2 12:58:46.749281 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 12:58:46.750982 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 2 12:58:46.761600 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 12:58:47.056820 ignition[842]: Ignition 2.22.0 Mar 2 12:58:47.057265 ignition[842]: Stage: kargs Mar 2 12:58:47.057520 ignition[842]: no configs at "/usr/lib/ignition/base.d" Mar 2 12:58:47.057537 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:58:47.090017 ignition[842]: kargs: kargs passed Mar 2 12:58:47.091135 ignition[842]: Ignition finished successfully Mar 2 12:58:47.101943 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 12:58:47.112235 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 2 12:58:47.276453 ignition[850]: Ignition 2.22.0 Mar 2 12:58:47.276510 ignition[850]: Stage: disks Mar 2 12:58:47.283767 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 12:58:47.276734 ignition[850]: no configs at "/usr/lib/ignition/base.d" Mar 2 12:58:47.293171 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 12:58:47.276754 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:58:47.308529 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 12:58:47.278151 ignition[850]: disks: disks passed Mar 2 12:58:47.324496 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 12:58:47.278231 ignition[850]: Ignition finished successfully Mar 2 12:58:47.339167 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 12:58:47.346231 systemd[1]: Reached target basic.target - Basic System. Mar 2 12:58:47.369831 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 2 12:58:47.489547 systemd-fsck[860]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 2 12:58:47.508410 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 12:58:47.529469 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 2 12:58:47.942487 systemd-networkd[835]: eth0: Gained IPv6LL Mar 2 12:58:48.638971 kernel: EXT4-fs (vda9): mounted filesystem 9d55f1a4-66ad-43d6-b325-f6b8d2d08c3e r/w with ordered data mode. Quota mode: none. Mar 2 12:58:48.647549 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 2 12:58:48.761283 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 12:58:48.798558 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 12:58:48.867160 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 2 12:58:48.872210 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 2 12:58:48.872299 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 12:58:48.872345 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 12:58:48.981936 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 12:58:49.063861 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868) Mar 2 12:58:49.066495 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 12:58:49.066521 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:58:49.076185 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 2 12:58:49.126661 kernel: BTRFS info (device vda6): turning on async discard Mar 2 12:58:49.126752 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 12:58:49.149036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 12:58:49.405958 initrd-setup-root[892]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 12:58:49.485147 initrd-setup-root[899]: cut: /sysroot/etc/group: No such file or directory Mar 2 12:58:49.538757 initrd-setup-root[906]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 12:58:49.644694 initrd-setup-root[913]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 12:58:50.674113 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 2 12:58:50.713794 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 12:58:50.773323 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 12:58:50.861074 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 2 12:58:50.897139 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 12:58:51.096142 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 2 12:58:51.339214 ignition[981]: INFO : Ignition 2.22.0 Mar 2 12:58:51.339214 ignition[981]: INFO : Stage: mount Mar 2 12:58:51.355611 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 12:58:51.355611 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:58:51.434436 ignition[981]: INFO : mount: mount passed Mar 2 12:58:51.434436 ignition[981]: INFO : Ignition finished successfully Mar 2 12:58:51.487359 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 2 12:58:51.549231 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 2 12:58:51.712855 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 12:58:51.792627 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (994) Mar 2 12:58:51.799999 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 12:58:51.800069 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 12:58:51.861325 kernel: BTRFS info (device vda6): turning on async discard Mar 2 12:58:51.861455 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 12:58:51.868357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 2 12:58:52.067483 ignition[1011]: INFO : Ignition 2.22.0 Mar 2 12:58:52.067483 ignition[1011]: INFO : Stage: files Mar 2 12:58:52.067483 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 12:58:52.067483 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:58:52.143023 ignition[1011]: DEBUG : files: compiled without relabeling support, skipping Mar 2 12:58:52.143023 ignition[1011]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 12:58:52.143023 ignition[1011]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 12:58:52.234732 ignition[1011]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 12:58:52.253014 ignition[1011]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 12:58:52.253014 ignition[1011]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 12:58:52.247048 unknown[1011]: wrote ssh authorized keys file for user: core Mar 2 12:58:52.297155 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 12:58:52.297155 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 2 12:58:52.380551 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 2 12:58:52.782021 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 2 12:58:52.782021 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 2 12:58:52.782021 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 2 12:58:53.046101 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 2 12:58:54.049998 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 2 12:58:54.049998 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 2 12:58:54.049998 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 2 12:58:54.049998 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 12:58:54.156752 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 2 12:58:54.342858 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 2 12:58:55.457563 kernel: hrtimer: interrupt took 5781271 ns Mar 2 12:58:57.947334 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 12:58:57.947334 ignition[1011]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 2 12:58:57.989602 ignition[1011]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 12:58:58.039574 ignition[1011]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 12:58:58.039574 ignition[1011]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 2 12:58:58.039574 ignition[1011]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 2 12:58:58.039574 ignition[1011]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 12:58:58.039574 ignition[1011]: INFO : files: op(e): op(f): [finished] writing 
unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 12:58:58.039574 ignition[1011]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 2 12:58:58.039574 ignition[1011]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 2 12:58:58.278478 ignition[1011]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 12:58:58.436587 ignition[1011]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 12:58:58.447764 ignition[1011]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 2 12:58:58.447764 ignition[1011]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 2 12:58:58.447764 ignition[1011]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 2 12:58:58.447764 ignition[1011]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 2 12:58:58.447764 ignition[1011]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 2 12:58:58.447764 ignition[1011]: INFO : files: files passed Mar 2 12:58:58.447764 ignition[1011]: INFO : Ignition finished successfully Mar 2 12:58:58.459033 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 2 12:58:58.474305 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 2 12:58:58.538726 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 2 12:58:58.573205 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 2 12:58:58.585010 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 2 12:58:58.652736 initrd-setup-root-after-ignition[1040]: grep: /sysroot/oem/oem-release: No such file or directory Mar 2 12:58:58.665490 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 12:58:58.665490 initrd-setup-root-after-ignition[1042]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 2 12:58:58.687565 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 12:58:58.692321 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 12:58:58.698038 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 2 12:58:58.735845 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 2 12:58:59.015124 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 2 12:58:59.015372 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 2 12:58:59.098600 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 2 12:58:59.129230 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 2 12:58:59.256301 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 2 12:58:59.370587 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 2 12:58:59.954315 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 12:59:00.081022 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 2 12:59:00.173187 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 2 12:59:00.197724 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 12:59:00.266148 systemd[1]: Stopped target timers.target - Timer Units. 
Mar 2 12:59:00.283179 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 2 12:59:00.283559 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 12:59:00.328694 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 2 12:59:00.342177 systemd[1]: Stopped target basic.target - Basic System. Mar 2 12:59:00.342376 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 2 12:59:00.342598 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 12:59:00.342752 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 2 12:59:00.342991 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 2 12:59:00.343146 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 2 12:59:00.343288 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 12:59:00.343512 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 2 12:59:00.343674 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 2 12:59:00.343820 systemd[1]: Stopped target swap.target - Swaps. Mar 2 12:59:00.344021 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 2 12:59:00.344366 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 2 12:59:00.344770 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 2 12:59:00.505658 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 12:59:00.535336 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 2 12:59:00.540288 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 12:59:00.565824 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 2 12:59:00.566216 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 2 12:59:00.804561 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 2 12:59:00.804851 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 12:59:00.854325 systemd[1]: Stopped target paths.target - Path Units. Mar 2 12:59:00.872863 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 12:59:00.878069 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 12:59:00.934003 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 12:59:00.939181 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 12:59:00.939497 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 12:59:00.939854 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 12:59:00.940754 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 12:59:00.940982 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 12:59:00.941555 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 2 12:59:00.941749 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 12:59:00.943072 systemd[1]: ignition-files.service: Deactivated successfully. Mar 2 12:59:00.943384 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 2 12:59:00.967947 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 2 12:59:01.036198 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 2 12:59:01.036575 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 12:59:01.099276 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 2 12:59:01.129782 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 2 12:59:01.130500 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 2 12:59:01.235688 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 2 12:59:01.235978 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 12:59:01.341716 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 2 12:59:01.343274 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 2 12:59:01.387993 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 2 12:59:01.407805 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 2 12:59:01.409676 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 2 12:59:01.456296 ignition[1066]: INFO : Ignition 2.22.0 Mar 2 12:59:01.456296 ignition[1066]: INFO : Stage: umount Mar 2 12:59:01.466222 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 12:59:01.466222 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 12:59:01.466222 ignition[1066]: INFO : umount: umount passed Mar 2 12:59:01.466222 ignition[1066]: INFO : Ignition finished successfully Mar 2 12:59:01.474128 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 2 12:59:01.475694 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 2 12:59:01.494855 systemd[1]: Stopped target network.target - Network. Mar 2 12:59:01.497405 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 2 12:59:01.497580 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 2 12:59:01.522261 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 2 12:59:01.522393 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 2 12:59:01.527705 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 2 12:59:01.527790 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 2 12:59:01.531807 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 2 12:59:01.531971 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Mar 2 12:59:01.545200 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 2 12:59:01.545348 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 2 12:59:01.560798 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 2 12:59:01.570572 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 2 12:59:01.648781 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 2 12:59:01.649056 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 2 12:59:01.681688 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 2 12:59:01.682141 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 2 12:59:01.682355 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 2 12:59:01.729242 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 2 12:59:01.733352 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 2 12:59:01.758833 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 2 12:59:01.764784 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 2 12:59:01.793489 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 2 12:59:01.803662 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 2 12:59:01.803861 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 12:59:01.823200 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 12:59:01.823309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 12:59:01.849278 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 2 12:59:01.849389 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 2 12:59:01.853758 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Mar 2 12:59:01.853858 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 12:59:01.877401 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 12:59:01.892297 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 2 12:59:01.892475 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 2 12:59:01.942292 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 2 12:59:01.942723 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 12:59:01.955170 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 2 12:59:01.955316 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 2 12:59:01.964109 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 2 12:59:01.964204 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 12:59:02.055478 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 2 12:59:02.055628 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 2 12:59:02.101530 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 2 12:59:02.101657 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 2 12:59:02.141349 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 12:59:02.141633 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 12:59:02.224195 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 2 12:59:02.239813 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 2 12:59:02.240149 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 12:59:02.273466 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Mar 2 12:59:02.273655 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 12:59:02.291748 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 12:59:02.291856 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 12:59:02.324094 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Mar 2 12:59:02.324196 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 2 12:59:02.324267 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 2 12:59:02.325357 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 2 12:59:02.326111 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 2 12:59:02.359802 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 2 12:59:02.361632 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 2 12:59:02.373357 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 2 12:59:02.440581 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 2 12:59:02.542182 systemd[1]: Switching root. Mar 2 12:59:02.638250 systemd-journald[203]: Journal stopped Mar 2 12:59:08.684009 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). 
Mar 2 12:59:08.684113 kernel: SELinux: policy capability network_peer_controls=1 Mar 2 12:59:08.684143 kernel: SELinux: policy capability open_perms=1 Mar 2 12:59:08.684163 kernel: SELinux: policy capability extended_socket_class=1 Mar 2 12:59:08.684180 kernel: SELinux: policy capability always_check_network=0 Mar 2 12:59:08.684203 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 2 12:59:08.684224 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 2 12:59:08.684249 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 2 12:59:08.684264 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 2 12:59:08.684283 kernel: SELinux: policy capability userspace_initial_context=0 Mar 2 12:59:08.684308 kernel: audit: type=1403 audit(1772456343.673:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 2 12:59:08.684330 systemd[1]: Successfully loaded SELinux policy in 362.173ms. Mar 2 12:59:08.684364 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 31.344ms. Mar 2 12:59:08.684385 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 2 12:59:08.684409 systemd[1]: Detected virtualization kvm. Mar 2 12:59:08.684428 systemd[1]: Detected architecture x86-64. Mar 2 12:59:08.684515 systemd[1]: Detected first boot. Mar 2 12:59:08.684538 systemd[1]: Initializing machine ID from VM UUID. Mar 2 12:59:08.684558 zram_generator::config[1110]: No configuration found. 
Mar 2 12:59:08.684578 kernel: Guest personality initialized and is inactive Mar 2 12:59:08.684598 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Mar 2 12:59:08.684616 kernel: Initialized host personality Mar 2 12:59:08.684640 kernel: NET: Registered PF_VSOCK protocol family Mar 2 12:59:08.684658 systemd[1]: Populated /etc with preset unit settings. Mar 2 12:59:08.684680 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 2 12:59:08.684697 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 2 12:59:08.684715 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 2 12:59:08.684738 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 2 12:59:08.684755 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 2 12:59:08.684775 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 2 12:59:08.684791 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 2 12:59:08.684817 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 2 12:59:08.684836 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 2 12:59:08.684857 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 2 12:59:08.684956 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 2 12:59:08.684976 systemd[1]: Created slice user.slice - User and Session Slice. Mar 2 12:59:08.684992 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 12:59:08.685010 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 12:59:08.685026 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Mar 2 12:59:08.685047 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 2 12:59:08.685063 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 2 12:59:08.685080 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 12:59:08.685095 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 2 12:59:08.685113 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 12:59:08.685132 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 12:59:08.685148 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 2 12:59:08.685167 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 2 12:59:08.685188 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 2 12:59:08.685206 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 2 12:59:08.685222 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 12:59:08.685238 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 12:59:08.685254 systemd[1]: Reached target slices.target - Slice Units. Mar 2 12:59:08.685270 systemd[1]: Reached target swap.target - Swaps. Mar 2 12:59:08.685286 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 2 12:59:08.685302 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 2 12:59:08.685318 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 2 12:59:08.685343 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 12:59:08.685361 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 12:59:08.685380 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 2 12:59:08.685400 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 2 12:59:08.685420 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 2 12:59:08.685438 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 2 12:59:08.685505 systemd[1]: Mounting media.mount - External Media Directory... Mar 2 12:59:08.685523 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:59:08.685539 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 2 12:59:08.685561 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 2 12:59:08.685576 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 2 12:59:08.685597 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 2 12:59:08.685616 systemd[1]: Reached target machines.target - Containers. Mar 2 12:59:08.685637 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 2 12:59:08.685655 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 12:59:08.685676 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 12:59:08.685695 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 2 12:59:08.685715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 12:59:08.685740 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 12:59:08.685760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 12:59:08.685779 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Mar 2 12:59:08.685797 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 12:59:08.685815 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 2 12:59:08.685834 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 2 12:59:08.685854 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 2 12:59:08.685937 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 2 12:59:08.685966 systemd[1]: Stopped systemd-fsck-usr.service. Mar 2 12:59:08.685988 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 2 12:59:08.686009 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 12:59:08.686028 kernel: loop: module loaded Mar 2 12:59:08.686046 kernel: ACPI: bus type drm_connector registered Mar 2 12:59:08.686066 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 12:59:08.686085 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 12:59:08.686104 kernel: fuse: init (API version 7.41) Mar 2 12:59:08.686126 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 2 12:59:08.686152 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 2 12:59:08.686172 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 12:59:08.686192 systemd[1]: verity-setup.service: Deactivated successfully. Mar 2 12:59:08.686212 systemd[1]: Stopped verity-setup.service. Mar 2 12:59:08.686276 systemd-journald[1195]: Collecting audit messages is disabled. 
Mar 2 12:59:08.686331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 12:59:08.686354 systemd-journald[1195]: Journal started Mar 2 12:59:08.686387 systemd-journald[1195]: Runtime Journal (/run/log/journal/ef2051a895554f87bcbc9d83113eeb15) is 6M, max 48.3M, 42.2M free. Mar 2 12:59:06.801292 systemd[1]: Queued start job for default target multi-user.target. Mar 2 12:59:06.853224 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 2 12:59:06.858502 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 2 12:59:06.859855 systemd[1]: systemd-journald.service: Consumed 1.663s CPU time. Mar 2 12:59:08.720291 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 12:59:08.729816 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 2 12:59:08.740571 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 2 12:59:08.750666 systemd[1]: Mounted media.mount - External Media Directory. Mar 2 12:59:08.756231 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 2 12:59:08.762645 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 2 12:59:08.768664 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 2 12:59:08.774616 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 2 12:59:08.781622 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 12:59:08.788731 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 2 12:59:08.791230 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 2 12:59:08.800520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 12:59:08.801005 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Mar 2 12:59:08.810768 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 12:59:08.811283 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 12:59:08.823194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 12:59:08.824689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 12:59:08.839719 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 2 12:59:08.841031 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 2 12:59:08.850509 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 12:59:08.851757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 12:59:08.864265 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 12:59:08.884618 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 12:59:08.897041 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 2 12:59:08.911578 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 2 12:59:08.937293 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 12:59:08.964005 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 12:59:08.973277 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 2 12:59:08.997632 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 2 12:59:09.003633 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 2 12:59:09.003729 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 12:59:09.014223 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Mar 2 12:59:09.024194 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 2 12:59:09.040749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 12:59:09.050737 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 2 12:59:09.081653 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 2 12:59:09.125339 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 12:59:09.134960 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 2 12:59:09.148066 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 12:59:09.159335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 12:59:09.172253 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 2 12:59:09.201183 systemd-journald[1195]: Time spent on flushing to /var/log/journal/ef2051a895554f87bcbc9d83113eeb15 is 43.932ms for 979 entries. Mar 2 12:59:09.201183 systemd-journald[1195]: System Journal (/var/log/journal/ef2051a895554f87bcbc9d83113eeb15) is 8M, max 195.6M, 187.6M free. Mar 2 12:59:09.381395 systemd-journald[1195]: Received client request to flush runtime journal. Mar 2 12:59:09.381543 kernel: loop0: detected capacity change from 0 to 217752 Mar 2 12:59:09.381588 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 2 12:59:09.194150 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 2 12:59:09.210067 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 2 12:59:09.224066 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Mar 2 12:59:09.350153 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 2 12:59:09.373172 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 2 12:59:09.392255 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 2 12:59:09.403189 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 2 12:59:09.447306 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 12:59:09.632579 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 2 12:59:09.646065 kernel: loop1: detected capacity change from 0 to 128560 Mar 2 12:59:09.667366 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 2 12:59:09.989641 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 2 12:59:10.136058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 12:59:10.203119 kernel: loop2: detected capacity change from 0 to 110984 Mar 2 12:59:10.959673 kernel: loop3: detected capacity change from 0 to 217752 Mar 2 12:59:11.195851 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Mar 2 12:59:11.195960 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Mar 2 12:59:11.362431 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 12:59:11.396992 kernel: loop4: detected capacity change from 0 to 128560 Mar 2 12:59:11.559325 kernel: loop5: detected capacity change from 0 to 110984 Mar 2 12:59:11.976733 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 2 12:59:11.979764 (sd-merge)[1252]: Merged extensions into '/usr'. Mar 2 12:59:12.002337 systemd[1]: Reload requested from client PID 1230 ('systemd-sysext') (unit systemd-sysext.service)... Mar 2 12:59:12.002361 systemd[1]: Reloading... 
Mar 2 12:59:12.315437 zram_generator::config[1275]: No configuration found. Mar 2 12:59:14.398691 systemd[1]: Reloading finished in 2395 ms. Mar 2 12:59:14.468823 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 2 12:59:14.485638 ldconfig[1225]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 2 12:59:14.490398 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 2 12:59:14.527965 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 2 12:59:14.583820 systemd[1]: Starting ensure-sysext.service... Mar 2 12:59:14.601966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 12:59:14.651762 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 12:59:14.699233 systemd[1]: Reload requested from client PID 1317 ('systemctl') (unit ensure-sysext.service)... Mar 2 12:59:14.699278 systemd[1]: Reloading... Mar 2 12:59:14.726854 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 2 12:59:14.727032 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 2 12:59:14.727599 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 2 12:59:14.728217 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 2 12:59:14.734819 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 2 12:59:14.735525 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Mar 2 12:59:14.735670 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. 
Mar 2 12:59:14.748271 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 12:59:14.748298 systemd-tmpfiles[1318]: Skipping /boot
Mar 2 12:59:14.808785 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 12:59:14.808809 systemd-tmpfiles[1318]: Skipping /boot
Mar 2 12:59:14.834266 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Mar 2 12:59:14.928987 zram_generator::config[1348]: No configuration found.
Mar 2 12:59:15.450012 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 12:59:15.479418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 2 12:59:15.488834 kernel: ACPI: button: Power Button [PWRF]
Mar 2 12:59:15.532380 systemd[1]: Reloading finished in 832 ms.
Mar 2 12:59:15.538542 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 12:59:15.539058 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 12:59:15.549857 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 12:59:15.558626 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 12:59:15.627965 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 12:59:15.690939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 12:59:15.719388 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 2 12:59:15.763526 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 12:59:15.778435 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 12:59:15.793024 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 12:59:15.813072 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 12:59:15.843362 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 12:59:15.850535 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 12:59:15.866852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:59:15.867928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 12:59:15.879031 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 12:59:15.898561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 12:59:15.925652 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 12:59:15.932590 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 12:59:15.932781 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 12:59:15.943325 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 12:59:15.952585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:59:15.963074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 12:59:15.976548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 12:59:15.977096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 12:59:15.988538 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 12:59:16.001378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 12:59:16.002147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 12:59:16.017323 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 12:59:16.018144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 12:59:16.099000 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 12:59:16.146733 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:59:16.148309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 12:59:16.153375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 12:59:16.162079 augenrules[1471]: No rules
Mar 2 12:59:16.173536 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 12:59:16.196033 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 12:59:16.202966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 12:59:16.203506 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 12:59:16.211462 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 12:59:16.226298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:59:16.230325 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:59:16.241998 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 12:59:16.242621 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 2 12:59:16.252960 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 12:59:16.259302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 12:59:16.259853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 12:59:16.266679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 12:59:16.267171 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 12:59:16.274831 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 12:59:16.275268 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 12:59:16.288609 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 12:59:16.344095 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 12:59:16.435218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:59:16.448979 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 2 12:59:16.770797 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 12:59:16.802255 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 12:59:16.810102 systemd-networkd[1439]: lo: Link UP
Mar 2 12:59:16.811435 systemd-networkd[1439]: lo: Gained carrier
Mar 2 12:59:16.812809 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 12:59:16.818956 systemd-networkd[1439]: Enumeration completed
Mar 2 12:59:16.819983 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:59:16.820096 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 12:59:16.821591 systemd-networkd[1439]: eth0: Link UP
Mar 2 12:59:16.823300 systemd-networkd[1439]: eth0: Gained carrier
Mar 2 12:59:16.823333 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:59:16.838346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 12:59:16.861116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 12:59:16.885303 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 12:59:16.886206 systemd-resolved[1440]: Positive Trust Anchors:
Mar 2 12:59:16.886222 systemd-resolved[1440]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 12:59:16.886264 systemd-resolved[1440]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 12:59:16.932696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 12:59:16.932833 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 12:59:16.933020 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 12:59:16.933195 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:59:16.946821 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 12:59:16.951172 systemd-resolved[1440]: Defaulting to hostname 'linux'.
Mar 2 12:59:16.952101 augenrules[1498]: /sbin/augenrules: No change
Mar 2 12:59:16.967539 systemd[1]: Finished ensure-sysext.service.
Mar 2 12:59:16.972606 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 12:59:16.981814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:59:16.993998 augenrules[1521]: No rules
Mar 2 12:59:16.994445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 12:59:16.995249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 12:59:17.000691 kernel: kvm_amd: TSC scaling supported
Mar 2 12:59:17.000797 kernel: kvm_amd: Nested Virtualization enabled
Mar 2 12:59:17.000851 kernel: kvm_amd: Nested Paging enabled
Mar 2 12:59:17.000920 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 2 12:59:17.019839 kernel: kvm_amd: PMU virtualization is disabled
Mar 2 12:59:17.022017 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 12:59:17.022394 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 2 12:59:17.026615 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 12:59:17.027012 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 12:59:17.034404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 12:59:17.034939 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 12:59:17.041634 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 12:59:17.042229 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 12:59:17.063210 systemd[1]: Reached target network.target - Network.
Mar 2 12:59:17.070301 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 12:59:17.086107 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 2 12:59:17.099299 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 12:59:17.104189 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 12:59:17.104335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 12:59:17.113375 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 12:59:17.240261 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 2 12:59:17.616400 kernel: EDAC MC: Ver: 3.0.0
Mar 2 12:59:17.618077 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 2 12:59:17.626161 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 12:59:17.630411 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 12:59:17.638020 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 12:59:18.151412 systemd-timesyncd[1535]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 2 12:59:18.152140 systemd-resolved[1440]: Clock change detected. Flushing caches.
Mar 2 12:59:18.152399 systemd-timesyncd[1535]: Initial clock synchronization to Mon 2026-03-02 12:59:18.151136 UTC.
Mar 2 12:59:18.154859 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 2 12:59:18.159392 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 12:59:18.164772 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 12:59:18.165467 systemd[1]: Reached target paths.target - Path Units.
Mar 2 12:59:18.171194 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 12:59:18.176345 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 12:59:18.196155 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 12:59:18.201812 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 12:59:18.355692 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 12:59:18.371469 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 12:59:18.383491 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 2 12:59:18.393341 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 2 12:59:18.397889 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 2 12:59:18.450678 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 12:59:18.460807 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 2 12:59:18.467824 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 12:59:18.473855 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 12:59:18.479717 systemd[1]: Reached target basic.target - Basic System.
Mar 2 12:59:18.486719 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 12:59:18.486822 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 12:59:18.489646 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 12:59:18.498195 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 12:59:18.522323 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 12:59:18.558755 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 12:59:18.664832 jq[1544]: false
Mar 2 12:59:18.665943 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 12:59:18.683201 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 12:59:18.689353 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 2 12:59:18.735157 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 12:59:18.745055 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Refreshing passwd entry cache
Mar 2 12:59:18.744755 oslogin_cache_refresh[1546]: Refreshing passwd entry cache
Mar 2 12:59:18.749872 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 12:59:18.766234 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 12:59:18.770053 extend-filesystems[1545]: Found /dev/vda6
Mar 2 12:59:18.769108 oslogin_cache_refresh[1546]: Failure getting users, quitting
Mar 2 12:59:18.781654 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Failure getting users, quitting
Mar 2 12:59:18.781654 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 2 12:59:18.781654 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Refreshing group entry cache
Mar 2 12:59:18.769145 oslogin_cache_refresh[1546]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 2 12:59:18.769246 oslogin_cache_refresh[1546]: Refreshing group entry cache
Mar 2 12:59:18.785421 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 12:59:18.790120 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Failure getting groups, quitting
Mar 2 12:59:18.790120 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 2 12:59:18.788277 oslogin_cache_refresh[1546]: Failure getting groups, quitting
Mar 2 12:59:18.788298 oslogin_cache_refresh[1546]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 2 12:59:18.796220 extend-filesystems[1545]: Found /dev/vda9
Mar 2 12:59:18.806076 extend-filesystems[1545]: Checking size of /dev/vda9
Mar 2 12:59:18.807533 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 12:59:18.814979 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 12:59:18.816398 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 12:59:18.832745 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 12:59:18.845674 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 12:59:18.861587 extend-filesystems[1545]: Resized partition /dev/vda9
Mar 2 12:59:18.903273 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 12:59:18.943085 extend-filesystems[1569]: resize2fs 1.47.3 (8-Jul-2025)
Mar 2 12:59:18.999475 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 2 12:59:18.999520 jq[1565]: true
Mar 2 12:59:18.959933 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 12:59:18.960476 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 12:59:18.961152 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 2 12:59:18.961577 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 2 12:59:18.981157 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 12:59:18.981807 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 12:59:18.993143 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 12:59:19.007718 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 12:59:19.041652 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 12:59:19.271376 update_engine[1562]: I20260302 12:59:19.045888 1562 main.cc:92] Flatcar Update Engine starting
Mar 2 12:59:19.154268 systemd-networkd[1439]: eth0: Gained IPv6LL
Mar 2 12:59:19.280112 jq[1576]: true
Mar 2 12:59:19.184303 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 12:59:19.272270 systemd-logind[1559]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 2 12:59:19.272312 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 12:59:19.274515 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 12:59:19.309497 systemd-logind[1559]: New seat seat0.
Mar 2 12:59:19.461780 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 2 12:59:19.334373 dbus-daemon[1542]: [system] SELinux support is enabled
Mar 2 12:59:19.318204 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 2 12:59:19.377184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 12:59:19.464450 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 2 12:59:19.464450 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 2 12:59:19.464450 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 2 12:59:19.522821 update_engine[1562]: I20260302 12:59:19.473310 1562 update_check_scheduler.cc:74] Next update check in 7m55s
Mar 2 12:59:19.522914 bash[1607]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 12:59:19.535270 extend-filesystems[1545]: Resized filesystem in /dev/vda9
Mar 2 12:59:19.472523 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 12:59:19.478940 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 12:59:19.505339 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 12:59:19.510454 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 12:59:19.511222 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 12:59:19.544744 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 12:59:19.581242 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 12:59:19.583114 dbus-daemon[1542]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 2 12:59:19.581554 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 12:59:19.584964 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 12:59:19.589391 tar[1573]: linux-amd64/LICENSE
Mar 2 12:59:19.589391 tar[1573]: linux-amd64/helm
Mar 2 12:59:19.597315 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 12:59:19.597365 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 12:59:19.606638 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 12:59:19.746654 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 12:59:19.763746 sshd_keygen[1572]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 2 12:59:19.779363 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 2 12:59:19.848173 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 2 12:59:19.848773 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 2 12:59:19.855525 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 2 12:59:20.017231 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 2 12:59:20.089535 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 2 12:59:20.265422 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 12:59:20.283266 systemd[1]: issuegen.service: Deactivated successfully.
Mar 2 12:59:20.284064 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 2 12:59:20.315805 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 2 12:59:20.493520 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 2 12:59:20.585334 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 2 12:59:20.599955 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 2 12:59:20.611305 systemd[1]: Reached target getty.target - Login Prompts.
Mar 2 12:59:21.157056 containerd[1577]: time="2026-03-02T12:59:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 2 12:59:21.161354 containerd[1577]: time="2026-03-02T12:59:21.158097077Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 2 12:59:21.305205 containerd[1577]: time="2026-03-02T12:59:21.304144356Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="304.739µs"
Mar 2 12:59:21.305205 containerd[1577]: time="2026-03-02T12:59:21.304364728Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 2 12:59:21.305205 containerd[1577]: time="2026-03-02T12:59:21.304475103Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 2 12:59:21.448187 containerd[1577]: time="2026-03-02T12:59:21.446976868Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 2 12:59:21.452352 containerd[1577]: time="2026-03-02T12:59:21.449980174Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 2 12:59:21.452352 containerd[1577]: time="2026-03-02T12:59:21.450228247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 2 12:59:21.452352 containerd[1577]: time="2026-03-02T12:59:21.450645726Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 2 12:59:21.452352 containerd[1577]: time="2026-03-02T12:59:21.450686903Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 2 12:59:21.452736 containerd[1577]: time="2026-03-02T12:59:21.452667590Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 2 12:59:21.452869 containerd[1577]: time="2026-03-02T12:59:21.452845282Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 2 12:59:21.452947 containerd[1577]: time="2026-03-02T12:59:21.452925752Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 2 12:59:21.453081 containerd[1577]: time="2026-03-02T12:59:21.453059431Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 2 12:59:21.453368 containerd[1577]: time="2026-03-02T12:59:21.453342118Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 2 12:59:21.454126 containerd[1577]: time="2026-03-02T12:59:21.454100534Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 2 12:59:21.454267 containerd[1577]: time="2026-03-02T12:59:21.454239925Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 2 12:59:21.454359 containerd[1577]: time="2026-03-02T12:59:21.454340603Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 2 12:59:21.454822 containerd[1577]: time="2026-03-02T12:59:21.454684704Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 2 12:59:21.455946 containerd[1577]: time="2026-03-02T12:59:21.455914821Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 2 12:59:21.456409 containerd[1577]: time="2026-03-02T12:59:21.456378306Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 12:59:21.488598 containerd[1577]: time="2026-03-02T12:59:21.488522253Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 2 12:59:21.489938 containerd[1577]: time="2026-03-02T12:59:21.489812599Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.489948121Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490076301Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490163905Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490187118Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490206794Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490226361Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490246809Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490303865Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490326027Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490349561Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490775957Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 2 12:59:21.490842 containerd[1577]: time="2026-03-02T12:59:21.490839876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 2 12:59:21.491253 containerd[1577]: time="2026-03-02T12:59:21.490864452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 2 12:59:21.491253 containerd[1577]: time="2026-03-02T12:59:21.490880772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 2 12:59:21.491253 containerd[1577]: time="2026-03-02T12:59:21.490895680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 2 12:59:21.491253 containerd[1577]: time="2026-03-02T12:59:21.490911520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 2 12:59:21.491253 containerd[1577]: time="2026-03-02T12:59:21.490933080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 2 12:59:21.491253 containerd[1577]: time="2026-03-02T12:59:21.490947417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 2 12:59:21.491253 containerd[1577]: time="2026-03-02T12:59:21.490963036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 2 12:59:21.491253 containerd[1577]: time="2026-03-02T12:59:21.490978585Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 2 12:59:21.491253 containerd[1577]: time="2026-03-02T12:59:21.491070196Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 2 12:59:21.491253 containerd[1577]: time="2026-03-02T12:59:21.491245263Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 2 12:59:21.491586 containerd[1577]: time="2026-03-02T12:59:21.491269017Z" level=info msg="Start snapshots syncer"
Mar 2 12:59:21.491586 containerd[1577]: time="2026-03-02T12:59:21.491303932Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 2 12:59:21.493916 containerd[1577]: time="2026-03-02T12:59:21.492336820Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 2 12:59:21.493916 containerd[1577]: time="2026-03-02T12:59:21.492513440Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 2 12:59:21.505745 containerd[1577]: time="2026-03-02T12:59:21.504593082Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 2 12:59:21.506261 containerd[1577]: time="2026-03-02T12:59:21.506194942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 2 12:59:21.506492 containerd[1577]: time="2026-03-02T12:59:21.506407760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 2 12:59:21.506492 containerd[1577]: time="2026-03-02T12:59:21.506454808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 2 12:59:21.508505 containerd[1577]: time="2026-03-02T12:59:21.507744344Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 2 12:59:21.508505 containerd[1577]: time="2026-03-02T12:59:21.507839192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 2 12:59:21.508505 containerd[1577]: time="2026-03-02T12:59:21.507867294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 2 12:59:21.508505 containerd[1577]: time="2026-03-02T12:59:21.507885247Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 2 12:59:21.508505 containerd[1577]: time="2026-03-02T12:59:21.508063099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 2 12:59:21.508505 containerd[1577]: time="2026-03-02T12:59:21.508084299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 2 12:59:21.508505 containerd[1577]: time="2026-03-02T12:59:21.508103104Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 2 12:59:21.508505 containerd[1577]: time="2026-03-02T12:59:21.508266349Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 2 12:59:21.508505 containerd[1577]: time="2026-03-02T12:59:21.508290835Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 2 12:59:21.508505 containerd[1577]: time="2026-03-02T12:59:21.508342831Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 2 12:59:21.508873 containerd[1577]: time="2026-03-02T12:59:21.508514912Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 2 12:59:21.508873 containerd[1577]: time="2026-03-02T12:59:21.508538076Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 2 12:59:21.508873 containerd[1577]: time="2026-03-02T12:59:21.508591396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 2 12:59:21.508873 containerd[1577]: time="2026-03-02T12:59:21.508719965Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 2 12:59:21.508873 containerd[1577]: time="2026-03-02T12:59:21.508833397Z" level=info msg="runtime interface created" Mar 2 12:59:21.508873 containerd[1577]: time="2026-03-02T12:59:21.508846261Z" level=info msg="created NRI interface" Mar 2 12:59:21.509142 containerd[1577]: time="2026-03-02T12:59:21.508933744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 2 12:59:21.509142 containerd[1577]: time="2026-03-02T12:59:21.508963279Z" level=info msg="Connect containerd service" Mar 2 12:59:21.512274 containerd[1577]: time="2026-03-02T12:59:21.512157757Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 12:59:21.586327 containerd[1577]: 
time="2026-03-02T12:59:21.584962940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 12:59:22.564894 containerd[1577]: time="2026-03-02T12:59:22.564155147Z" level=info msg="Start subscribing containerd event" Mar 2 12:59:22.567434 containerd[1577]: time="2026-03-02T12:59:22.565184147Z" level=info msg="Start recovering state" Mar 2 12:59:22.567434 containerd[1577]: time="2026-03-02T12:59:22.566511155Z" level=info msg="Start event monitor" Mar 2 12:59:22.567434 containerd[1577]: time="2026-03-02T12:59:22.567404993Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 12:59:22.567574 containerd[1577]: time="2026-03-02T12:59:22.567492667Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 12:59:22.569048 containerd[1577]: time="2026-03-02T12:59:22.568096948Z" level=info msg="Start cni network conf syncer for default" Mar 2 12:59:22.569048 containerd[1577]: time="2026-03-02T12:59:22.568124820Z" level=info msg="Start streaming server" Mar 2 12:59:22.569048 containerd[1577]: time="2026-03-02T12:59:22.568256485Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 2 12:59:22.569048 containerd[1577]: time="2026-03-02T12:59:22.568271103Z" level=info msg="runtime interface starting up..." Mar 2 12:59:22.569048 containerd[1577]: time="2026-03-02T12:59:22.568280580Z" level=info msg="starting plugins..." Mar 2 12:59:22.569048 containerd[1577]: time="2026-03-02T12:59:22.568307100Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 2 12:59:22.569048 containerd[1577]: time="2026-03-02T12:59:22.568526810Z" level=info msg="containerd successfully booted in 1.445857s" Mar 2 12:59:22.568926 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 2 12:59:22.643726 tar[1573]: linux-amd64/README.md Mar 2 12:59:22.695559 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 12:59:27.806346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:59:27.815593 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 12:59:27.815866 systemd[1]: Startup finished in 9.688s (kernel) + 27.384s (initrd) + 23.991s (userspace) = 1min 1.064s. Mar 2 12:59:27.841944 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:59:28.339041 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 12:59:28.345911 systemd[1]: Started sshd@0-10.0.0.57:22-10.0.0.1:54020.service - OpenSSH per-connection server daemon (10.0.0.1:54020). Mar 2 12:59:29.103238 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 54020 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w Mar 2 12:59:29.112143 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:29.287469 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 12:59:29.297432 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 12:59:29.311340 systemd-logind[1559]: New session 1 of user core. Mar 2 12:59:29.547228 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 12:59:29.559241 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 12:59:29.589551 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 12:59:29.599238 systemd-logind[1559]: New session c1 of user core. Mar 2 12:59:30.218706 systemd[1695]: Queued start job for default target default.target. Mar 2 12:59:30.246399 systemd[1695]: Created slice app.slice - User Application Slice. 
Mar 2 12:59:30.246463 systemd[1695]: Reached target paths.target - Paths. Mar 2 12:59:30.247091 systemd[1695]: Reached target timers.target - Timers. Mar 2 12:59:30.271369 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 12:59:30.380859 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 12:59:30.387699 systemd[1695]: Reached target sockets.target - Sockets. Mar 2 12:59:30.388209 systemd[1695]: Reached target basic.target - Basic System. Mar 2 12:59:30.388325 systemd[1695]: Reached target default.target - Main User Target. Mar 2 12:59:30.388539 systemd[1695]: Startup finished in 693ms. Mar 2 12:59:30.395716 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 12:59:30.487190 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 12:59:30.549135 systemd[1]: Started sshd@1-10.0.0.57:22-10.0.0.1:37756.service - OpenSSH per-connection server daemon (10.0.0.1:37756). Mar 2 12:59:30.804394 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 37756 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w Mar 2 12:59:30.803263 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:30.921340 systemd-logind[1559]: New session 2 of user core. Mar 2 12:59:30.950395 kubelet[1682]: E0302 12:59:30.949944 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:59:30.953455 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 12:59:30.956465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:59:30.956821 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 2 12:59:30.957784 systemd[1]: kubelet.service: Consumed 4.410s CPU time, 257.7M memory peak. Mar 2 12:59:30.987907 sshd[1711]: Connection closed by 10.0.0.1 port 37756 Mar 2 12:59:30.991152 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:31.005294 systemd[1]: sshd@1-10.0.0.57:22-10.0.0.1:37756.service: Deactivated successfully. Mar 2 12:59:31.008341 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 12:59:31.013923 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. Mar 2 12:59:31.015951 systemd[1]: Started sshd@2-10.0.0.57:22-10.0.0.1:37764.service - OpenSSH per-connection server daemon (10.0.0.1:37764). Mar 2 12:59:31.019769 systemd-logind[1559]: Removed session 2. Mar 2 12:59:31.167407 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 37764 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w Mar 2 12:59:31.169577 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:31.187415 systemd-logind[1559]: New session 3 of user core. Mar 2 12:59:31.206388 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 12:59:31.269830 sshd[1721]: Connection closed by 10.0.0.1 port 37764 Mar 2 12:59:31.272128 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:31.352077 systemd[1]: sshd@2-10.0.0.57:22-10.0.0.1:37764.service: Deactivated successfully. Mar 2 12:59:31.356493 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 12:59:31.360400 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. Mar 2 12:59:31.365489 systemd[1]: Started sshd@3-10.0.0.57:22-10.0.0.1:37778.service - OpenSSH per-connection server daemon (10.0.0.1:37778). Mar 2 12:59:31.368855 systemd-logind[1559]: Removed session 3. 
Mar 2 12:59:31.487292 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 37778 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w Mar 2 12:59:31.491114 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:31.658423 systemd-logind[1559]: New session 4 of user core. Mar 2 12:59:31.679859 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 12:59:31.729152 sshd[1730]: Connection closed by 10.0.0.1 port 37778 Mar 2 12:59:31.729850 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:31.747902 systemd[1]: sshd@3-10.0.0.57:22-10.0.0.1:37778.service: Deactivated successfully. Mar 2 12:59:31.753170 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 12:59:31.758057 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. Mar 2 12:59:31.764615 systemd[1]: Started sshd@4-10.0.0.57:22-10.0.0.1:37790.service - OpenSSH per-connection server daemon (10.0.0.1:37790). Mar 2 12:59:31.767287 systemd-logind[1559]: Removed session 4. Mar 2 12:59:31.900285 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 37790 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w Mar 2 12:59:31.911290 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:32.192118 systemd-logind[1559]: New session 5 of user core. Mar 2 12:59:32.203605 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 2 12:59:32.904324 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 12:59:32.919125 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:59:33.000735 sudo[1740]: pam_unix(sudo:session): session closed for user root Mar 2 12:59:33.066882 sshd[1739]: Connection closed by 10.0.0.1 port 37790 Mar 2 12:59:33.068797 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:33.318415 systemd[1]: sshd@4-10.0.0.57:22-10.0.0.1:37790.service: Deactivated successfully. Mar 2 12:59:33.360603 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 12:59:33.365570 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. Mar 2 12:59:33.376192 systemd[1]: Started sshd@5-10.0.0.57:22-10.0.0.1:37796.service - OpenSSH per-connection server daemon (10.0.0.1:37796). Mar 2 12:59:33.377818 systemd-logind[1559]: Removed session 5. Mar 2 12:59:33.595049 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 37796 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w Mar 2 12:59:33.598213 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:33.641093 systemd-logind[1559]: New session 6 of user core. Mar 2 12:59:33.675541 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 2 12:59:33.753683 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 12:59:33.757360 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:59:33.818871 sudo[1751]: pam_unix(sudo:session): session closed for user root Mar 2 12:59:33.851395 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 2 12:59:33.853148 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:59:33.953269 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 2 12:59:34.073713 augenrules[1773]: No rules Mar 2 12:59:34.079263 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 12:59:34.080256 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 2 12:59:34.082849 sudo[1750]: pam_unix(sudo:session): session closed for user root Mar 2 12:59:34.089058 sshd[1749]: Connection closed by 10.0.0.1 port 37796 Mar 2 12:59:34.089972 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:34.109182 systemd[1]: sshd@5-10.0.0.57:22-10.0.0.1:37796.service: Deactivated successfully. Mar 2 12:59:34.112576 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 12:59:34.116835 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Mar 2 12:59:34.120385 systemd[1]: Started sshd@6-10.0.0.57:22-10.0.0.1:37810.service - OpenSSH per-connection server daemon (10.0.0.1:37810). Mar 2 12:59:34.122597 systemd-logind[1559]: Removed session 6. Mar 2 12:59:34.341411 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 37810 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w Mar 2 12:59:34.347791 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:34.375340 systemd-logind[1559]: New session 7 of user core. 
Mar 2 12:59:34.384580 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 12:59:34.435900 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 12:59:34.436516 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:59:38.718249 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 12:59:38.819337 (dockerd)[1807]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 12:59:41.268851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 12:59:41.276981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:59:41.649066 dockerd[1807]: time="2026-03-02T12:59:41.647932155Z" level=info msg="Starting up" Mar 2 12:59:41.653088 dockerd[1807]: time="2026-03-02T12:59:41.652748632Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 2 12:59:41.881055 dockerd[1807]: time="2026-03-02T12:59:41.880070325Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 2 12:59:42.392462 dockerd[1807]: time="2026-03-02T12:59:42.387351779Z" level=info msg="Loading containers: start." Mar 2 12:59:42.559900 kernel: Initializing XFRM netlink socket Mar 2 12:59:43.117912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 12:59:43.171305 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:59:43.573132 kubelet[1842]: E0302 12:59:43.572545 1842 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:59:43.586895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:59:43.587349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:59:43.589591 systemd[1]: kubelet.service: Consumed 1.055s CPU time, 108.8M memory peak. Mar 2 12:59:45.480807 systemd-networkd[1439]: docker0: Link UP Mar 2 12:59:45.552337 dockerd[1807]: time="2026-03-02T12:59:45.550804139Z" level=info msg="Loading containers: done." 
Mar 2 12:59:45.676328 dockerd[1807]: time="2026-03-02T12:59:45.674735453Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 12:59:45.676328 dockerd[1807]: time="2026-03-02T12:59:45.676080644Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 2 12:59:45.676328 dockerd[1807]: time="2026-03-02T12:59:45.676302758Z" level=info msg="Initializing buildkit" Mar 2 12:59:45.871106 dockerd[1807]: time="2026-03-02T12:59:45.870270794Z" level=info msg="Completed buildkit initialization" Mar 2 12:59:45.890332 dockerd[1807]: time="2026-03-02T12:59:45.890214759Z" level=info msg="Daemon has completed initialization" Mar 2 12:59:45.891094 dockerd[1807]: time="2026-03-02T12:59:45.890707304Z" level=info msg="API listen on /run/docker.sock" Mar 2 12:59:45.891763 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 12:59:48.514865 containerd[1577]: time="2026-03-02T12:59:48.513255655Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 2 12:59:49.807261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280308191.mount: Deactivated successfully. Mar 2 12:59:53.701492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 12:59:53.716210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:59:54.460208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 12:59:54.482260 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:59:56.316748 kubelet[2104]: E0302 12:59:56.316162 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:59:56.341155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:59:56.341528 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:59:56.346457 systemd[1]: kubelet.service: Consumed 1.669s CPU time, 109M memory peak. Mar 2 12:59:58.811054 containerd[1577]: time="2026-03-02T12:59:58.805249612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:58.815351 containerd[1577]: time="2026-03-02T12:59:58.812303027Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 2 12:59:58.817141 containerd[1577]: time="2026-03-02T12:59:58.815866440Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:58.843265 containerd[1577]: time="2026-03-02T12:59:58.843118989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:58.846353 containerd[1577]: time="2026-03-02T12:59:58.845873736Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id 
\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 10.332512531s" Mar 2 12:59:58.846353 containerd[1577]: time="2026-03-02T12:59:58.845985250Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 2 12:59:58.852663 containerd[1577]: time="2026-03-02T12:59:58.852443465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 2 13:00:04.756705 update_engine[1562]: I20260302 13:00:04.754729 1562 update_attempter.cc:509] Updating boot flags... Mar 2 13:00:05.655854 containerd[1577]: time="2026-03-02T13:00:05.655690005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:05.657435 containerd[1577]: time="2026-03-02T13:00:05.657392432Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 2 13:00:05.659251 containerd[1577]: time="2026-03-02T13:00:05.659136851Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:05.664880 containerd[1577]: time="2026-03-02T13:00:05.664708300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:05.668413 containerd[1577]: time="2026-03-02T13:00:05.666681049Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id 
\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 6.814163585s" Mar 2 13:00:05.668413 containerd[1577]: time="2026-03-02T13:00:05.666724176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 2 13:00:05.678234 containerd[1577]: time="2026-03-02T13:00:05.678107978Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 2 13:00:06.447431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 2 13:00:06.453766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:00:07.213797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:00:07.269160 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:00:07.669647 kubelet[2142]: E0302 13:00:07.668353 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:00:07.675409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:00:07.675685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:00:07.676508 systemd[1]: kubelet.service: Consumed 809ms CPU time, 110.8M memory peak. 
Mar 2 13:00:09.656847 containerd[1577]: time="2026-03-02T13:00:09.655843056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:09.661382 containerd[1577]: time="2026-03-02T13:00:09.660640681Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 2 13:00:09.664025 containerd[1577]: time="2026-03-02T13:00:09.663794116Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:09.670921 containerd[1577]: time="2026-03-02T13:00:09.670615748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:09.675226 containerd[1577]: time="2026-03-02T13:00:09.674048757Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 3.995855437s" Mar 2 13:00:09.675495 containerd[1577]: time="2026-03-02T13:00:09.675270767Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 2 13:00:09.683671 containerd[1577]: time="2026-03-02T13:00:09.683469413Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 2 13:00:13.713812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount440302709.mount: Deactivated successfully. 
Mar 2 13:00:15.879927 containerd[1577]: time="2026-03-02T13:00:15.878725647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:15.883718 containerd[1577]: time="2026-03-02T13:00:15.880183580Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 2 13:00:15.883718 containerd[1577]: time="2026-03-02T13:00:15.883181799Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:15.889556 containerd[1577]: time="2026-03-02T13:00:15.889347227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:15.891809 containerd[1577]: time="2026-03-02T13:00:15.891302272Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 6.207740913s" Mar 2 13:00:15.891809 containerd[1577]: time="2026-03-02T13:00:15.891357864Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 2 13:00:15.898508 containerd[1577]: time="2026-03-02T13:00:15.897363154Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 2 13:00:18.685152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 2 13:00:18.768240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 2 13:00:18.777092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3045677194.mount: Deactivated successfully. Mar 2 13:00:20.201267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:00:20.258624 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:00:20.505779 kubelet[2186]: E0302 13:00:20.504755 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:00:20.511326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:00:20.511643 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:00:20.512772 systemd[1]: kubelet.service: Consumed 1.080s CPU time, 110.5M memory peak. 
Mar 2 13:00:24.774688 containerd[1577]: time="2026-03-02T13:00:24.774262635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:24.776497 containerd[1577]: time="2026-03-02T13:00:24.776214568Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 2 13:00:24.797449 containerd[1577]: time="2026-03-02T13:00:24.791406297Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:24.871136 containerd[1577]: time="2026-03-02T13:00:24.868052463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:24.871136 containerd[1577]: time="2026-03-02T13:00:24.870709834Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 8.973246826s" Mar 2 13:00:24.871136 containerd[1577]: time="2026-03-02T13:00:24.870817914Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 2 13:00:24.880607 containerd[1577]: time="2026-03-02T13:00:24.880540928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 2 13:00:25.872653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3125331474.mount: Deactivated successfully. 
Mar 2 13:00:25.897426 containerd[1577]: time="2026-03-02T13:00:25.895960156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:25.897426 containerd[1577]: time="2026-03-02T13:00:25.897105029Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 2 13:00:25.898843 containerd[1577]: time="2026-03-02T13:00:25.898624545Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:25.911658 containerd[1577]: time="2026-03-02T13:00:25.911507976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:25.938962 containerd[1577]: time="2026-03-02T13:00:25.918450792Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.037856196s" Mar 2 13:00:25.938962 containerd[1577]: time="2026-03-02T13:00:25.918510973Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 2 13:00:25.981759 containerd[1577]: time="2026-03-02T13:00:25.978809120Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 2 13:00:26.950561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1616330406.mount: Deactivated successfully. Mar 2 13:00:30.691494 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Mar 2 13:00:30.766850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:00:32.699181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:00:32.810932 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:00:33.071770 kubelet[2298]: E0302 13:00:33.071300 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:00:33.078110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:00:33.078463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:00:33.079512 systemd[1]: kubelet.service: Consumed 867ms CPU time, 110.7M memory peak. 
Mar 2 13:00:34.648202 containerd[1577]: time="2026-03-02T13:00:34.647840932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:34.653302 containerd[1577]: time="2026-03-02T13:00:34.652727707Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 2 13:00:34.654984 containerd[1577]: time="2026-03-02T13:00:34.654851316Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:34.664768 containerd[1577]: time="2026-03-02T13:00:34.664622026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:00:34.668597 containerd[1577]: time="2026-03-02T13:00:34.668385131Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 8.68908311s" Mar 2 13:00:34.668597 containerd[1577]: time="2026-03-02T13:00:34.668555426Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 2 13:00:40.078353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:00:40.078668 systemd[1]: kubelet.service: Consumed 867ms CPU time, 110.7M memory peak. Mar 2 13:00:40.100772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:00:40.203973 systemd[1]: Reload requested from client PID 2354 ('systemctl') (unit session-7.scope)... 
Mar 2 13:00:40.204446 systemd[1]: Reloading... Mar 2 13:00:40.422703 zram_generator::config[2397]: No configuration found. Mar 2 13:00:41.054439 systemd[1]: Reloading finished in 848 ms. Mar 2 13:00:41.201410 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 2 13:00:41.201608 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 2 13:00:41.202218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:00:41.202287 systemd[1]: kubelet.service: Consumed 285ms CPU time, 98.2M memory peak. Mar 2 13:00:41.205224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:00:41.744573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:00:41.764950 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:00:42.246201 kubelet[2445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:00:42.659844 kubelet[2445]: I0302 13:00:42.658612 2445 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 2 13:00:42.659844 kubelet[2445]: I0302 13:00:42.658825 2445 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:00:42.659844 kubelet[2445]: I0302 13:00:42.659186 2445 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 13:00:42.659844 kubelet[2445]: I0302 13:00:42.659208 2445 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 2 13:00:42.660949 kubelet[2445]: I0302 13:00:42.660367 2445 server.go:951] "Client rotation is on, will bootstrap in background" Mar 2 13:00:42.879612 kubelet[2445]: E0302 13:00:42.876323 2445 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:00:42.884830 kubelet[2445]: I0302 13:00:42.882234 2445 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:00:43.047836 kubelet[2445]: I0302 13:00:43.046837 2445 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 2 13:00:43.107731 kubelet[2445]: I0302 13:00:43.107528 2445 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 13:00:43.109584 kubelet[2445]: I0302 13:00:43.109482 2445 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:00:43.109950 kubelet[2445]: I0302 13:00:43.109558 2445 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:00:43.110498 kubelet[2445]: I0302 13:00:43.109984 2445 topology_manager.go:143] "Creating topology manager with none policy" Mar 2 13:00:43.110498 
kubelet[2445]: I0302 13:00:43.110123 2445 container_manager_linux.go:308] "Creating device plugin manager" Mar 2 13:00:43.110498 kubelet[2445]: I0302 13:00:43.110410 2445 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 13:00:43.119335 kubelet[2445]: I0302 13:00:43.118517 2445 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 2 13:00:43.119335 kubelet[2445]: I0302 13:00:43.119291 2445 kubelet.go:482] "Attempting to sync node with API server" Mar 2 13:00:43.122786 kubelet[2445]: I0302 13:00:43.119358 2445 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:00:43.122786 kubelet[2445]: I0302 13:00:43.119506 2445 kubelet.go:394] "Adding apiserver pod source" Mar 2 13:00:43.122786 kubelet[2445]: I0302 13:00:43.119529 2445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:00:43.146113 kubelet[2445]: I0302 13:00:43.144546 2445 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 2 13:00:43.150269 kubelet[2445]: I0302 13:00:43.150178 2445 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:00:43.150269 kubelet[2445]: I0302 13:00:43.150254 2445 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 13:00:43.150521 kubelet[2445]: W0302 13:00:43.150464 2445 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 2 13:00:43.160109 kubelet[2445]: I0302 13:00:43.157869 2445 server.go:1257] "Started kubelet" Mar 2 13:00:43.160109 kubelet[2445]: I0302 13:00:43.159544 2445 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:00:43.166144 kubelet[2445]: I0302 13:00:43.164970 2445 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:00:43.173983 kubelet[2445]: I0302 13:00:43.173907 2445 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:00:43.174730 kubelet[2445]: I0302 13:00:43.174315 2445 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 13:00:43.174730 kubelet[2445]: I0302 13:00:43.174638 2445 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:00:43.174930 kubelet[2445]: I0302 13:00:43.174818 2445 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 2 13:00:43.178213 kubelet[2445]: I0302 13:00:43.176272 2445 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:00:43.186623 kubelet[2445]: I0302 13:00:43.186513 2445 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 2 13:00:43.186790 kubelet[2445]: E0302 13:00:43.186732 2445 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:00:43.187487 kubelet[2445]: I0302 13:00:43.187457 2445 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 13:00:43.187764 kubelet[2445]: I0302 13:00:43.187633 2445 reconciler.go:29] "Reconciler: start to sync state" Mar 2 13:00:43.199565 kubelet[2445]: I0302 13:00:43.199481 2445 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:00:43.200865 
kubelet[2445]: E0302 13:00:43.200481 2445 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="200ms" Mar 2 13:00:43.204589 kubelet[2445]: E0302 13:00:43.204484 2445 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:00:43.208671 kubelet[2445]: E0302 13:00:43.194506 2445 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189907be824579a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:00:43.157772709 +0000 UTC m=+1.365113421,LastTimestamp:2026-03-02 13:00:43.157772709 +0000 UTC m=+1.365113421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:00:43.226131 kubelet[2445]: I0302 13:00:43.225598 2445 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:00:43.226131 kubelet[2445]: I0302 13:00:43.226143 2445 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:00:43.290724 kubelet[2445]: E0302 13:00:43.289515 2445 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:00:43.378928 kubelet[2445]: I0302 13:00:43.378744 2445 cpu_manager.go:225] "Starting" policy="none" Mar 2 13:00:43.381114 kubelet[2445]: I0302 
13:00:43.380762 2445 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 2 13:00:43.381114 kubelet[2445]: I0302 13:00:43.380881 2445 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 2 13:00:43.395218 kubelet[2445]: E0302 13:00:43.390889 2445 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:00:43.396119 kubelet[2445]: I0302 13:00:43.395423 2445 policy_none.go:50] "Start" Mar 2 13:00:43.396119 kubelet[2445]: I0302 13:00:43.395502 2445 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 13:00:43.396119 kubelet[2445]: I0302 13:00:43.395528 2445 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 13:00:43.401479 kubelet[2445]: I0302 13:00:43.401281 2445 policy_none.go:44] "Start" Mar 2 13:00:43.408152 kubelet[2445]: E0302 13:00:43.403608 2445 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="400ms" Mar 2 13:00:43.424745 kubelet[2445]: I0302 13:00:43.424334 2445 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 13:00:43.435324 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 2 13:00:43.437641 kubelet[2445]: I0302 13:00:43.437221 2445 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 2 13:00:43.438380 kubelet[2445]: I0302 13:00:43.438316 2445 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 2 13:00:43.438452 kubelet[2445]: I0302 13:00:43.438395 2445 kubelet.go:2501] "Starting kubelet main sync loop" Mar 2 13:00:43.438668 kubelet[2445]: E0302 13:00:43.438526 2445 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:00:43.468521 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 2 13:00:43.483331 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 2 13:00:43.491388 kubelet[2445]: E0302 13:00:43.491167 2445 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:00:43.510804 kubelet[2445]: E0302 13:00:43.509359 2445 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:00:43.510804 kubelet[2445]: I0302 13:00:43.509769 2445 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 2 13:00:43.510804 kubelet[2445]: I0302 13:00:43.509813 2445 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:00:43.510804 kubelet[2445]: I0302 13:00:43.510499 2445 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 2 13:00:43.518500 kubelet[2445]: E0302 13:00:43.518299 2445 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 13:00:43.518500 kubelet[2445]: E0302 13:00:43.518352 2445 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:00:43.593508 kubelet[2445]: I0302 13:00:43.591379 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7d7b527356c313b89805fd4de98a769-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7d7b527356c313b89805fd4de98a769\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:00:43.593508 kubelet[2445]: I0302 13:00:43.591636 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7d7b527356c313b89805fd4de98a769-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7d7b527356c313b89805fd4de98a769\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:00:43.593508 kubelet[2445]: I0302 13:00:43.591665 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7d7b527356c313b89805fd4de98a769-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e7d7b527356c313b89805fd4de98a769\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:00:43.620147 kubelet[2445]: I0302 13:00:43.620110 2445 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:00:43.630813 kubelet[2445]: E0302 13:00:43.628250 2445 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Mar 2 13:00:43.651743 systemd[1]: Created slice kubepods-burstable-pode7d7b527356c313b89805fd4de98a769.slice - libcontainer container 
kubepods-burstable-pode7d7b527356c313b89805fd4de98a769.slice. Mar 2 13:00:43.693417 kubelet[2445]: E0302 13:00:43.692774 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:00:43.696511 kubelet[2445]: I0302 13:00:43.693839 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:00:43.696511 kubelet[2445]: I0302 13:00:43.693879 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:00:43.696511 kubelet[2445]: I0302 13:00:43.693909 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:00:43.696511 kubelet[2445]: I0302 13:00:43.694165 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:00:43.696511 kubelet[2445]: I0302 13:00:43.694200 2445 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:00:43.697165 kubelet[2445]: I0302 13:00:43.694238 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:00:43.701596 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 2 13:00:43.727107 kubelet[2445]: E0302 13:00:43.725754 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:00:43.742130 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 2 13:00:43.756825 kubelet[2445]: E0302 13:00:43.755785 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:00:43.952575 kubelet[2445]: E0302 13:00:43.951542 2445 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="800ms" Mar 2 13:00:44.001268 kubelet[2445]: I0302 13:00:43.999980 2445 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:00:44.007331 kubelet[2445]: E0302 13:00:44.005282 2445 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Mar 2 13:00:44.019371 containerd[1577]: time="2026-03-02T13:00:44.018341789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e7d7b527356c313b89805fd4de98a769,Namespace:kube-system,Attempt:0,}" Mar 2 13:00:44.040575 containerd[1577]: time="2026-03-02T13:00:44.039944811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 2 13:00:44.075147 containerd[1577]: time="2026-03-02T13:00:44.074435866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 2 13:00:44.420795 kubelet[2445]: I0302 13:00:44.420536 2445 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 13:00:44.435833 kubelet[2445]: E0302 13:00:44.430611 2445 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: 
connection refused" node="localhost"
Mar 2 13:00:44.870440 kubelet[2445]: E0302 13:00:44.861118 2445 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="1.6s"
Mar 2 13:00:45.079486 kubelet[2445]: E0302 13:00:45.077213 2445 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:00:45.113790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128430272.mount: Deactivated successfully.
Mar 2 13:00:45.150693 containerd[1577]: time="2026-03-02T13:00:45.149578893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:00:45.162641 containerd[1577]: time="2026-03-02T13:00:45.162436763Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:00:45.168313 containerd[1577]: time="2026-03-02T13:00:45.167932680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 2 13:00:45.173513 containerd[1577]: time="2026-03-02T13:00:45.173134367Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 2 13:00:45.179244 containerd[1577]: time="2026-03-02T13:00:45.179128205Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:00:45.191168 containerd[1577]: time="2026-03-02T13:00:45.190934523Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:00:45.290379 containerd[1577]: time="2026-03-02T13:00:45.259427851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 2 13:00:45.304438 containerd[1577]: time="2026-03-02T13:00:45.304139961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:00:45.312170 kubelet[2445]: I0302 13:00:45.310292 2445 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 2 13:00:45.312170 kubelet[2445]: E0302 13:00:45.311094 2445 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
Mar 2 13:00:45.312514 containerd[1577]: time="2026-03-02T13:00:45.311563384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.275829327s"
Mar 2 13:00:45.315274 containerd[1577]: time="2026-03-02T13:00:45.315174066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.234378517s"
Mar 2 13:00:45.318356 containerd[1577]: time="2026-03-02T13:00:45.318284582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.267991567s"
Mar 2 13:00:45.655901 containerd[1577]: time="2026-03-02T13:00:45.653370930Z" level=info msg="connecting to shim 9321d356c3a1e81c633cce62e9e904a3ddb6a8506809c1cc645f0c2c93d2532d" address="unix:///run/containerd/s/e34783f445b25e190dba99d34a34912bf4a0eac6b2aa5ae46aa438d1b3817f76" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:00:45.753656 containerd[1577]: time="2026-03-02T13:00:45.725299852Z" level=info msg="connecting to shim 241e37a3484fd717f7ace17a81cec0f5928c4af6afe4184c696e7f20c18b363e" address="unix:///run/containerd/s/b597e15f98e9eb682983bc0d04de46e143550f2b775fa79eacc7b998fb5d0272" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:00:45.821474 containerd[1577]: time="2026-03-02T13:00:45.821290716Z" level=info msg="connecting to shim 33379e96e56f0396c6f5b122c94fb1f99b9aa9b0765e0ab3a846fbcca3b9b499" address="unix:///run/containerd/s/3a44ec299b49f741fe318b85cee07ad3d58a0e84e833c57f57a5977bb3ff80ac" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:00:45.980878 systemd[1]: Started cri-containerd-33379e96e56f0396c6f5b122c94fb1f99b9aa9b0765e0ab3a846fbcca3b9b499.scope - libcontainer container 33379e96e56f0396c6f5b122c94fb1f99b9aa9b0765e0ab3a846fbcca3b9b499.
Mar 2 13:00:45.989433 systemd[1]: Started cri-containerd-241e37a3484fd717f7ace17a81cec0f5928c4af6afe4184c696e7f20c18b363e.scope - libcontainer container 241e37a3484fd717f7ace17a81cec0f5928c4af6afe4184c696e7f20c18b363e.
Mar 2 13:00:46.057509 systemd[1]: Started cri-containerd-9321d356c3a1e81c633cce62e9e904a3ddb6a8506809c1cc645f0c2c93d2532d.scope - libcontainer container 9321d356c3a1e81c633cce62e9e904a3ddb6a8506809c1cc645f0c2c93d2532d.
Mar 2 13:00:46.793966 kubelet[2445]: E0302 13:00:46.792904 2445 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="3.2s"
Mar 2 13:00:47.083812 kubelet[2445]: I0302 13:00:47.082788 2445 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 2 13:00:47.088287 kubelet[2445]: E0302 13:00:47.088153 2445 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
Mar 2 13:00:47.253557 containerd[1577]: time="2026-03-02T13:00:47.250890070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"241e37a3484fd717f7ace17a81cec0f5928c4af6afe4184c696e7f20c18b363e\""
Mar 2 13:00:47.298529 containerd[1577]: time="2026-03-02T13:00:47.298471276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"33379e96e56f0396c6f5b122c94fb1f99b9aa9b0765e0ab3a846fbcca3b9b499\""
Mar 2 13:00:47.299200 containerd[1577]: time="2026-03-02T13:00:47.298911628Z" level=info msg="CreateContainer within sandbox \"241e37a3484fd717f7ace17a81cec0f5928c4af6afe4184c696e7f20c18b363e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 2 13:00:47.305863 containerd[1577]: time="2026-03-02T13:00:47.305692909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e7d7b527356c313b89805fd4de98a769,Namespace:kube-system,Attempt:0,} returns sandbox id \"9321d356c3a1e81c633cce62e9e904a3ddb6a8506809c1cc645f0c2c93d2532d\""
Mar 2 13:00:47.332176 containerd[1577]: time="2026-03-02T13:00:47.320431347Z" level=info msg="CreateContainer within sandbox \"33379e96e56f0396c6f5b122c94fb1f99b9aa9b0765e0ab3a846fbcca3b9b499\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 2 13:00:47.451698 containerd[1577]: time="2026-03-02T13:00:47.451636713Z" level=info msg="CreateContainer within sandbox \"9321d356c3a1e81c633cce62e9e904a3ddb6a8506809c1cc645f0c2c93d2532d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 2 13:00:47.973420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323122643.mount: Deactivated successfully.
Mar 2 13:00:48.214544 containerd[1577]: time="2026-03-02T13:00:48.212700537Z" level=info msg="Container 5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:00:48.580579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055693194.mount: Deactivated successfully.
Mar 2 13:00:48.986868 containerd[1577]: time="2026-03-02T13:00:48.964243128Z" level=info msg="Container e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:00:49.028378 containerd[1577]: time="2026-03-02T13:00:49.024713222Z" level=info msg="Container c9eca24daf5731256c4bd816108578ba3d9cb6fa49fdf6bdcbc6cdaab4b3e225: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:00:49.090514 containerd[1577]: time="2026-03-02T13:00:49.087365319Z" level=info msg="CreateContainer within sandbox \"241e37a3484fd717f7ace17a81cec0f5928c4af6afe4184c696e7f20c18b363e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155\""
Mar 2 13:00:49.108171 containerd[1577]: time="2026-03-02T13:00:49.103517156Z" level=info msg="StartContainer for \"5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155\""
Mar 2 13:00:49.118713 containerd[1577]: time="2026-03-02T13:00:49.118498733Z" level=info msg="connecting to shim 5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155" address="unix:///run/containerd/s/b597e15f98e9eb682983bc0d04de46e143550f2b775fa79eacc7b998fb5d0272" protocol=ttrpc version=3
Mar 2 13:00:49.161815 containerd[1577]: time="2026-03-02T13:00:49.153872265Z" level=info msg="CreateContainer within sandbox \"9321d356c3a1e81c633cce62e9e904a3ddb6a8506809c1cc645f0c2c93d2532d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c9eca24daf5731256c4bd816108578ba3d9cb6fa49fdf6bdcbc6cdaab4b3e225\""
Mar 2 13:00:49.166137 kubelet[2445]: E0302 13:00:49.157664 2445 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189907be824579a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:00:43.157772709 +0000 UTC m=+1.365113421,LastTimestamp:2026-03-02 13:00:43.157772709 +0000 UTC m=+1.365113421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 2 13:00:49.177110 containerd[1577]: time="2026-03-02T13:00:49.176951822Z" level=info msg="StartContainer for \"c9eca24daf5731256c4bd816108578ba3d9cb6fa49fdf6bdcbc6cdaab4b3e225\""
Mar 2 13:00:49.285508 containerd[1577]: time="2026-03-02T13:00:49.265698743Z" level=info msg="connecting to shim c9eca24daf5731256c4bd816108578ba3d9cb6fa49fdf6bdcbc6cdaab4b3e225" address="unix:///run/containerd/s/e34783f445b25e190dba99d34a34912bf4a0eac6b2aa5ae46aa438d1b3817f76" protocol=ttrpc version=3
Mar 2 13:00:49.285508 containerd[1577]: time="2026-03-02T13:00:49.278761137Z" level=info msg="CreateContainer within sandbox \"33379e96e56f0396c6f5b122c94fb1f99b9aa9b0765e0ab3a846fbcca3b9b499\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e\""
Mar 2 13:00:49.446429 containerd[1577]: time="2026-03-02T13:00:49.406731608Z" level=info msg="StartContainer for \"e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e\""
Mar 2 13:00:50.060892 kubelet[2445]: E0302 13:00:50.059928 2445 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="6.4s"
Mar 2 13:00:50.086125 kubelet[2445]: E0302 13:00:50.065719 2445 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:00:50.086662 containerd[1577]: time="2026-03-02T13:00:50.060540979Z" level=info msg="connecting to shim e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e" address="unix:///run/containerd/s/3a44ec299b49f741fe318b85cee07ad3d58a0e84e833c57f57a5977bb3ff80ac" protocol=ttrpc version=3
Mar 2 13:00:50.299150 systemd[1]: Started cri-containerd-e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e.scope - libcontainer container e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e.
Mar 2 13:00:50.499926 kubelet[2445]: I0302 13:00:50.499586 2445 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 2 13:00:50.517665 kubelet[2445]: E0302 13:00:50.515634 2445 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
Mar 2 13:00:50.513119 systemd[1]: Started cri-containerd-c9eca24daf5731256c4bd816108578ba3d9cb6fa49fdf6bdcbc6cdaab4b3e225.scope - libcontainer container c9eca24daf5731256c4bd816108578ba3d9cb6fa49fdf6bdcbc6cdaab4b3e225.
Mar 2 13:00:50.591521 systemd[1]: Started cri-containerd-5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155.scope - libcontainer container 5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155.
Mar 2 13:00:50.808874 containerd[1577]: time="2026-03-02T13:00:50.805962261Z" level=info msg="StartContainer for \"c9eca24daf5731256c4bd816108578ba3d9cb6fa49fdf6bdcbc6cdaab4b3e225\" returns successfully"
Mar 2 13:00:50.934095 containerd[1577]: time="2026-03-02T13:00:50.932173000Z" level=info msg="StartContainer for \"5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155\" returns successfully"
Mar 2 13:00:50.940118 containerd[1577]: time="2026-03-02T13:00:50.935810775Z" level=info msg="StartContainer for \"e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e\" returns successfully"
Mar 2 13:00:51.799732 kubelet[2445]: E0302 13:00:51.799460 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:51.808101 kubelet[2445]: E0302 13:00:51.807421 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:51.811616 kubelet[2445]: E0302 13:00:51.811137 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:53.579689 kubelet[2445]: E0302 13:00:53.577897 2445 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 13:00:53.680728 kubelet[2445]: E0302 13:00:53.581766 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:53.680728 kubelet[2445]: E0302 13:00:53.582987 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:53.680728 kubelet[2445]: E0302 13:00:53.583427 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:54.484480 kubelet[2445]: E0302 13:00:54.483311 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:54.489363 kubelet[2445]: E0302 13:00:54.489283 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:54.489850 kubelet[2445]: E0302 13:00:54.489591 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:55.518287 kubelet[2445]: E0302 13:00:55.517732 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:57.063967 kubelet[2445]: I0302 13:00:57.061807 2445 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 2 13:00:58.066876 kubelet[2445]: E0302 13:00:58.062258 2445 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:00:59.074256 kubelet[2445]: I0302 13:00:59.055547 2445 apiserver.go:52] "Watching apiserver"
Mar 2 13:00:59.074256 kubelet[2445]: E0302 13:00:59.079696 2445 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 2 13:01:01.715214 kubelet[2445]: I0302 13:01:01.705943 2445 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 2 13:01:01.760751 kubelet[2445]: E0302 13:01:01.759288 2445 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 2 13:01:01.789066 kubelet[2445]: I0302 13:01:01.788904 2445 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 2 13:01:01.893070 kubelet[2445]: E0302 13:01:01.892365 2445 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189907be824579a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:00:43.157772709 +0000 UTC m=+1.365113421,LastTimestamp:2026-03-02 13:00:43.157772709 +0000 UTC m=+1.365113421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 2 13:01:01.994316 kubelet[2445]: I0302 13:01:01.988851 2445 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 2 13:01:02.001846 kubelet[2445]: I0302 13:01:02.001601 2445 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:01:02.053081 kubelet[2445]: I0302 13:01:02.052942 2445 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 2 13:01:04.168285 kubelet[2445]: I0302 13:01:04.167748 2445 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.167730482 podStartE2EDuration="3.167730482s" podCreationTimestamp="2026-03-02 13:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:01:04.167170886 +0000 UTC m=+22.374511608" watchObservedRunningTime="2026-03-02 13:01:04.167730482 +0000 UTC m=+22.375071184"
Mar 2 13:01:04.168285 kubelet[2445]: I0302 13:01:04.168072 2445 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.168062965 podStartE2EDuration="2.168062965s" podCreationTimestamp="2026-03-02 13:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:01:04.078806005 +0000 UTC m=+22.286146778" watchObservedRunningTime="2026-03-02 13:01:04.168062965 +0000 UTC m=+22.375403658"
Mar 2 13:01:04.302971 kubelet[2445]: I0302 13:01:04.302880 2445 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.302856796 podStartE2EDuration="2.302856796s" podCreationTimestamp="2026-03-02 13:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:01:04.244306914 +0000 UTC m=+22.451647636" watchObservedRunningTime="2026-03-02 13:01:04.302856796 +0000 UTC m=+22.510197509"
Mar 2 13:01:08.181805 systemd[1]: Reload requested from client PID 2736 ('systemctl') (unit session-7.scope)...
Mar 2 13:01:08.181838 systemd[1]: Reloading...
Mar 2 13:01:09.050653 zram_generator::config[2780]: No configuration found.
Mar 2 13:01:10.975407 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1094739871 wd_nsec: 1094739363
Mar 2 13:01:11.217486 systemd[1]: Reloading finished in 3034 ms.
Mar 2 13:01:11.568575 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:01:11.650214 systemd[1]: kubelet.service: Deactivated successfully.
Mar 2 13:01:11.650682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:01:11.651068 systemd[1]: kubelet.service: Consumed 5.936s CPU time, 129M memory peak.
Mar 2 13:01:11.657829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:01:12.492588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:01:12.514184 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:01:12.803083 kubelet[2824]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:01:12.880805 kubelet[2824]: I0302 13:01:12.879426 2824 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 2 13:01:12.880805 kubelet[2824]: I0302 13:01:12.879976 2824 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 13:01:12.880805 kubelet[2824]: I0302 13:01:12.880131 2824 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 2 13:01:12.880805 kubelet[2824]: I0302 13:01:12.880143 2824 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:01:12.880805 kubelet[2824]: I0302 13:01:12.881187 2824 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 2 13:01:12.887083 kubelet[2824]: I0302 13:01:12.884351 2824 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 2 13:01:12.908713 kubelet[2824]: I0302 13:01:12.906149 2824 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:01:13.264362 kubelet[2824]: I0302 13:01:13.263445 2824 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 2 13:01:13.304841 kubelet[2824]: I0302 13:01:13.304477 2824 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 2 13:01:13.305656 kubelet[2824]: I0302 13:01:13.305330 2824 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:01:13.305847 kubelet[2824]: I0302 13:01:13.305403 2824 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 13:01:13.306193 kubelet[2824]: I0302 13:01:13.305860 2824 topology_manager.go:143] "Creating topology manager with none policy"
Mar 2 13:01:13.306193 kubelet[2824]: I0302 13:01:13.305877 2824 container_manager_linux.go:308] "Creating device plugin manager"
Mar 2 13:01:13.306193 kubelet[2824]: I0302 13:01:13.305919 2824 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 2 13:01:13.309253 kubelet[2824]: I0302 13:01:13.309154 2824 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 2 13:01:13.309975 kubelet[2824]: I0302 13:01:13.309915 2824 kubelet.go:482] "Attempting to sync node with API server"
Mar 2 13:01:13.310095 kubelet[2824]: I0302 13:01:13.309978 2824 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:01:13.310095 kubelet[2824]: I0302 13:01:13.310060 2824 kubelet.go:394] "Adding apiserver pod source"
Mar 2 13:01:13.310095 kubelet[2824]: I0302 13:01:13.310077 2824 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:01:13.343893 kubelet[2824]: I0302 13:01:13.343669 2824 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 2 13:01:13.347155 kubelet[2824]: I0302 13:01:13.347124 2824 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:01:13.349172 kubelet[2824]: I0302 13:01:13.347440 2824 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 2 13:01:13.366407 kubelet[2824]: I0302 13:01:13.365908 2824 server.go:1257] "Started kubelet"
Mar 2 13:01:13.511732 kubelet[2824]: I0302 13:01:13.511465 2824 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:01:13.550936 kubelet[2824]: I0302 13:01:13.515806 2824 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:01:13.557133 kubelet[2824]: I0302 13:01:13.555895 2824 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 2 13:01:13.563884 kubelet[2824]: I0302 13:01:13.563854 2824 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 2 13:01:13.566685 kubelet[2824]: I0302 13:01:13.564700 2824 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:01:13.566685 kubelet[2824]: I0302 13:01:13.564917 2824 server.go:317] "Adding debug handlers to kubelet server"
Mar 2 13:01:13.566685 kubelet[2824]: E0302 13:01:13.565148 2824 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:01:13.566685 kubelet[2824]: I0302 13:01:13.565258 2824 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 2 13:01:13.571657 kubelet[2824]: I0302 13:01:13.570203 2824 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:01:13.573436 kubelet[2824]: I0302 13:01:13.573382 2824 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 2 13:01:13.573967 kubelet[2824]: I0302 13:01:13.573628 2824 reconciler.go:29] "Reconciler: start to sync state"
Mar 2 13:01:13.576328 sudo[2843]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 2 13:01:13.577529 sudo[2843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 2 13:01:13.613073 kubelet[2824]: I0302 13:01:13.611732 2824 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:01:13.614940 kubelet[2824]: I0302 13:01:13.614296 2824 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:01:13.637093 kubelet[2824]: I0302 13:01:13.637049 2824 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:01:14.353242 kubelet[2824]: I0302 13:01:14.333631 2824 apiserver.go:52] "Watching apiserver"
Mar 2 13:01:14.460204 kubelet[2824]: I0302 13:01:14.460101 2824 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:01:14.464298 kubelet[2824]: I0302 13:01:14.464099 2824 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:01:14.468147 kubelet[2824]: I0302 13:01:14.468083 2824 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 2 13:01:14.472459 kubelet[2824]: I0302 13:01:14.469159 2824 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 2 13:01:14.472459 kubelet[2824]: E0302 13:01:14.471511 2824 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 13:01:14.665626 kubelet[2824]: E0302 13:01:14.656237 2824 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 2 13:01:14.890873 kubelet[2824]: E0302 13:01:14.890094 2824 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 2 13:01:14.978347 kubelet[2824]: I0302 13:01:14.978265 2824 cpu_manager.go:225] "Starting" policy="none"
Mar 2 13:01:14.978347 kubelet[2824]: I0302 13:01:14.978299 2824 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 2 13:01:14.978347 kubelet[2824]: I0302 13:01:14.978334 2824 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 2 13:01:14.978872 kubelet[2824]: I0302 13:01:14.978608 2824 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 2 13:01:14.978872 kubelet[2824]: I0302 13:01:14.978662 2824 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 2 13:01:14.978872 kubelet[2824]: I0302 13:01:14.978724 2824 policy_none.go:50] "Start"
Mar 2 13:01:14.978872 kubelet[2824]: I0302 13:01:14.978745 2824 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 2 13:01:14.978872 kubelet[2824]: I0302 13:01:14.978763 2824 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 2 13:01:14.980467 kubelet[2824]: I0302 13:01:14.980439 2824 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 2 13:01:14.980467 kubelet[2824]: I0302 13:01:14.980464 2824 policy_none.go:44] "Start"
Mar 2 13:01:15.000435 kubelet[2824]: E0302 13:01:15.000325 2824 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 13:01:15.000791 kubelet[2824]: I0302 13:01:15.000700 2824 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 2 13:01:15.000791 kubelet[2824]: I0302 13:01:15.000748 2824 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 13:01:15.002540 kubelet[2824]: I0302 13:01:15.001729 2824 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 2 13:01:15.375430 kubelet[2824]: E0302 13:01:15.374459 2824 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 13:01:15.380317 kubelet[2824]: I0302 13:01:15.380136 2824 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 2 13:01:15.380765 kubelet[2824]: I0302 13:01:15.380649 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:01:15.380891 kubelet[2824]: I0302 13:01:15.380800 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:01:15.380945 kubelet[2824]: I0302 13:01:15.380889 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:01:15.380945 kubelet[2824]: I0302 13:01:15.380928 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/236c576a-a9c3-4e3e-aa90-f266498b8e50-lib-modules\") pod \"kube-proxy-4l75j\" (UID: \"236c576a-a9c3-4e3e-aa90-f266498b8e50\") " pod="kube-system/kube-proxy-4l75j"
Mar 2 13:01:15.381096 kubelet[2824]: I0302 13:01:15.380986 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7d7b527356c313b89805fd4de98a769-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7d7b527356c313b89805fd4de98a769\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:01:15.381096 kubelet[2824]: I0302 13:01:15.381083 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:01:15.381184 kubelet[2824]: I0302 13:01:15.381107 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 2 13:01:15.381239 kubelet[2824]: I0302 13:01:15.381129 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/236c576a-a9c3-4e3e-aa90-f266498b8e50-kube-proxy\") pod \"kube-proxy-4l75j\" (UID: \"236c576a-a9c3-4e3e-aa90-f266498b8e50\") " pod="kube-system/kube-proxy-4l75j"
Mar 2 13:01:15.381649 kubelet[2824]: I0302 13:01:15.381351 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/236c576a-a9c3-4e3e-aa90-f266498b8e50-xtables-lock\") pod \"kube-proxy-4l75j\" (UID: \"236c576a-a9c3-4e3e-aa90-f266498b8e50\") " pod="kube-system/kube-proxy-4l75j"
Mar 2 13:01:15.381649 kubelet[2824]: I0302 13:01:15.381422 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh277\" (UniqueName: \"kubernetes.io/projected/236c576a-a9c3-4e3e-aa90-f266498b8e50-kube-api-access-lh277\") pod \"kube-proxy-4l75j\" (UID: \"236c576a-a9c3-4e3e-aa90-f266498b8e50\") " pod="kube-system/kube-proxy-4l75j"
Mar 2 13:01:15.381649 kubelet[2824]: I0302 13:01:15.381450 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7d7b527356c313b89805fd4de98a769-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e7d7b527356c313b89805fd4de98a769\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:01:15.381649 kubelet[2824]: I0302 13:01:15.381530 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7d7b527356c313b89805fd4de98a769-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e7d7b527356c313b89805fd4de98a769\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:01:15.381649 kubelet[2824]: I0302 13:01:15.381558 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:01:15.389227 kubelet[2824]: I0302 13:01:15.388883 2824 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 2 13:01:15.396458 containerd[1577]: time="2026-03-02T13:01:15.396094228Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 2 13:01:15.402350 kubelet[2824]: I0302 13:01:15.402224 2824 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 2 13:01:15.462809 systemd[1]: Created slice kubepods-besteffort-pod236c576a_a9c3_4e3e_aa90_f266498b8e50.slice - libcontainer container kubepods-besteffort-pod236c576a_a9c3_4e3e_aa90_f266498b8e50.slice.
Mar 2 13:01:15.892656 kubelet[2824]: I0302 13:01:15.892459 2824 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 2 13:01:15.958696 kubelet[2824]: I0302 13:01:15.958597 2824 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 2 13:01:15.958917 kubelet[2824]: I0302 13:01:15.958733 2824 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 2 13:01:16.098763 containerd[1577]: time="2026-03-02T13:01:16.097786015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4l75j,Uid:236c576a-a9c3-4e3e-aa90-f266498b8e50,Namespace:kube-system,Attempt:0,}"
Mar 2 13:01:16.547158 containerd[1577]: time="2026-03-02T13:01:16.546529729Z" level=info msg="connecting to shim 592f962121e566ebfb073c662bc9b7fa2fdac8c3b5994dfaceba7f2ab66ca4be" address="unix:///run/containerd/s/596514fc8f70fe953047021b796cf87afe37a3b750298eba3ef456acdce94a67" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:01:16.866742 systemd[1]: Started cri-containerd-592f962121e566ebfb073c662bc9b7fa2fdac8c3b5994dfaceba7f2ab66ca4be.scope - libcontainer container 592f962121e566ebfb073c662bc9b7fa2fdac8c3b5994dfaceba7f2ab66ca4be.
Mar 2 13:01:17.288429 containerd[1577]: time="2026-03-02T13:01:17.288283107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4l75j,Uid:236c576a-a9c3-4e3e-aa90-f266498b8e50,Namespace:kube-system,Attempt:0,} returns sandbox id \"592f962121e566ebfb073c662bc9b7fa2fdac8c3b5994dfaceba7f2ab66ca4be\"" Mar 2 13:01:17.318054 containerd[1577]: time="2026-03-02T13:01:17.317943707Z" level=info msg="CreateContainer within sandbox \"592f962121e566ebfb073c662bc9b7fa2fdac8c3b5994dfaceba7f2ab66ca4be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 13:01:17.463614 containerd[1577]: time="2026-03-02T13:01:17.454733258Z" level=info msg="Container ce6d1655e490331854408efcfd0b73c74269187d8ac6b9ce195a29a39230ceaf: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:01:17.473404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1644178267.mount: Deactivated successfully. Mar 2 13:01:17.606476 sudo[2843]: pam_unix(sudo:session): session closed for user root Mar 2 13:01:17.761919 containerd[1577]: time="2026-03-02T13:01:17.760190544Z" level=info msg="CreateContainer within sandbox \"592f962121e566ebfb073c662bc9b7fa2fdac8c3b5994dfaceba7f2ab66ca4be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ce6d1655e490331854408efcfd0b73c74269187d8ac6b9ce195a29a39230ceaf\"" Mar 2 13:01:17.769401 containerd[1577]: time="2026-03-02T13:01:17.768231417Z" level=info msg="StartContainer for \"ce6d1655e490331854408efcfd0b73c74269187d8ac6b9ce195a29a39230ceaf\"" Mar 2 13:01:17.772647 containerd[1577]: time="2026-03-02T13:01:17.772608982Z" level=info msg="connecting to shim ce6d1655e490331854408efcfd0b73c74269187d8ac6b9ce195a29a39230ceaf" address="unix:///run/containerd/s/596514fc8f70fe953047021b796cf87afe37a3b750298eba3ef456acdce94a67" protocol=ttrpc version=3 Mar 2 13:01:17.899081 systemd[1]: Started cri-containerd-ce6d1655e490331854408efcfd0b73c74269187d8ac6b9ce195a29a39230ceaf.scope - libcontainer container 
ce6d1655e490331854408efcfd0b73c74269187d8ac6b9ce195a29a39230ceaf. Mar 2 13:01:19.030363 containerd[1577]: time="2026-03-02T13:01:19.005660954Z" level=info msg="StartContainer for \"ce6d1655e490331854408efcfd0b73c74269187d8ac6b9ce195a29a39230ceaf\" returns successfully" Mar 2 13:01:21.091332 kubelet[2824]: I0302 13:01:21.090646 2824 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-4l75j" podStartSLOduration=7.090529542 podStartE2EDuration="7.090529542s" podCreationTimestamp="2026-03-02 13:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:01:19.490388735 +0000 UTC m=+6.933451258" watchObservedRunningTime="2026-03-02 13:01:21.090529542 +0000 UTC m=+8.533592065" Mar 2 13:01:21.188355 systemd[1]: Created slice kubepods-burstable-pod01be2fb8_8c27_4bff_8654_2186ba08db93.slice - libcontainer container kubepods-burstable-pod01be2fb8_8c27_4bff_8654_2186ba08db93.slice. Mar 2 13:01:21.203895 systemd[1]: Created slice kubepods-besteffort-poddaeafd74_e1e4_481a_801a_04856244d09d.slice - libcontainer container kubepods-besteffort-poddaeafd74_e1e4_481a_801a_04856244d09d.slice. 
Mar 2 13:01:21.276809 kubelet[2824]: I0302 13:01:21.275289 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-host-proc-sys-net\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.276809 kubelet[2824]: I0302 13:01:21.275613 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01be2fb8-8c27-4bff-8654-2186ba08db93-hubble-tls\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.276809 kubelet[2824]: I0302 13:01:21.275690 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-config-path\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.276809 kubelet[2824]: I0302 13:01:21.275723 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-run\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.276809 kubelet[2824]: I0302 13:01:21.275747 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-etc-cni-netd\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.276809 kubelet[2824]: I0302 13:01:21.275768 2824 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-lib-modules\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.290968 kubelet[2824]: I0302 13:01:21.276219 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-host-proc-sys-kernel\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.290968 kubelet[2824]: I0302 13:01:21.277070 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-bpf-maps\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.290968 kubelet[2824]: I0302 13:01:21.277122 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-cgroup\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.290968 kubelet[2824]: I0302 13:01:21.277245 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cni-path\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.290968 kubelet[2824]: I0302 13:01:21.277317 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-xtables-lock\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.290968 kubelet[2824]: I0302 13:01:21.277361 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7s6\" (UniqueName: \"kubernetes.io/projected/01be2fb8-8c27-4bff-8654-2186ba08db93-kube-api-access-sn7s6\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.310739 kubelet[2824]: I0302 13:01:21.277394 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/daeafd74-e1e4-481a-801a-04856244d09d-cilium-config-path\") pod \"cilium-operator-78cf5644cb-69n42\" (UID: \"daeafd74-e1e4-481a-801a-04856244d09d\") " pod="kube-system/cilium-operator-78cf5644cb-69n42" Mar 2 13:01:21.310739 kubelet[2824]: I0302 13:01:21.277420 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpmfw\" (UniqueName: \"kubernetes.io/projected/daeafd74-e1e4-481a-801a-04856244d09d-kube-api-access-hpmfw\") pod \"cilium-operator-78cf5644cb-69n42\" (UID: \"daeafd74-e1e4-481a-801a-04856244d09d\") " pod="kube-system/cilium-operator-78cf5644cb-69n42" Mar 2 13:01:21.310739 kubelet[2824]: I0302 13:01:21.277765 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-hostproc\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.310739 kubelet[2824]: I0302 13:01:21.277791 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/01be2fb8-8c27-4bff-8654-2186ba08db93-clustermesh-secrets\") pod \"cilium-ptn7p\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " pod="kube-system/cilium-ptn7p" Mar 2 13:01:21.598801 containerd[1577]: time="2026-03-02T13:01:21.597620142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-69n42,Uid:daeafd74-e1e4-481a-801a-04856244d09d,Namespace:kube-system,Attempt:0,}" Mar 2 13:01:21.604769 containerd[1577]: time="2026-03-02T13:01:21.603149645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ptn7p,Uid:01be2fb8-8c27-4bff-8654-2186ba08db93,Namespace:kube-system,Attempt:0,}" Mar 2 13:01:21.748464 containerd[1577]: time="2026-03-02T13:01:21.748266025Z" level=info msg="connecting to shim 63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4" address="unix:///run/containerd/s/b773a25e18dd4ae5f40d833493883e5190ff3a3d8e4432ada3957f87d96218e7" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:01:21.753466 containerd[1577]: time="2026-03-02T13:01:21.753354852Z" level=info msg="connecting to shim e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b" address="unix:///run/containerd/s/b75ca29e99acc2553d24658b24f80e9914a978c2da0ce5f785caa5af2682541c" namespace=k8s.io protocol=ttrpc version=3 Mar 2 13:01:21.976582 systemd[1]: Started cri-containerd-e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b.scope - libcontainer container e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b. Mar 2 13:01:21.986836 systemd[1]: Started cri-containerd-63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4.scope - libcontainer container 63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4. 
Mar 2 13:01:22.254520 containerd[1577]: time="2026-03-02T13:01:22.248546635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ptn7p,Uid:01be2fb8-8c27-4bff-8654-2186ba08db93,Namespace:kube-system,Attempt:0,} returns sandbox id \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\"" Mar 2 13:01:22.264701 containerd[1577]: time="2026-03-02T13:01:22.264543084Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 2 13:01:22.270547 containerd[1577]: time="2026-03-02T13:01:22.270142646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-69n42,Uid:daeafd74-e1e4-481a-801a-04856244d09d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\"" Mar 2 13:01:43.036092 kubelet[2824]: E0302 13:01:43.033076 2824 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.415s" Mar 2 13:02:00.811047 kubelet[2824]: E0302 13:02:00.810592 2824 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.797s" Mar 2 13:02:08.182530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2633672454.mount: Deactivated successfully. 
Mar 2 13:02:18.916534 containerd[1577]: time="2026-03-02T13:02:18.915818756Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:18.930550 containerd[1577]: time="2026-03-02T13:02:18.928445279Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 2 13:02:18.950949 containerd[1577]: time="2026-03-02T13:02:18.946683494Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:18.954606 containerd[1577]: time="2026-03-02T13:02:18.954551014Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 56.689889887s" Mar 2 13:02:18.954799 containerd[1577]: time="2026-03-02T13:02:18.954645571Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 2 13:02:18.959949 containerd[1577]: time="2026-03-02T13:02:18.959435372Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 2 13:02:18.978097 containerd[1577]: time="2026-03-02T13:02:18.975121240Z" level=info msg="CreateContainer within sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 2 13:02:19.021188 containerd[1577]: time="2026-03-02T13:02:19.020921088Z" level=info msg="Container d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:02:19.173562 containerd[1577]: time="2026-03-02T13:02:19.172152214Z" level=info msg="CreateContainer within sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\"" Mar 2 13:02:19.187458 containerd[1577]: time="2026-03-02T13:02:19.187267087Z" level=info msg="StartContainer for \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\"" Mar 2 13:02:19.195323 containerd[1577]: time="2026-03-02T13:02:19.194100960Z" level=info msg="connecting to shim d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01" address="unix:///run/containerd/s/b773a25e18dd4ae5f40d833493883e5190ff3a3d8e4432ada3957f87d96218e7" protocol=ttrpc version=3 Mar 2 13:02:19.317397 systemd[1]: Started cri-containerd-d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01.scope - libcontainer container d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01. Mar 2 13:02:19.487542 containerd[1577]: time="2026-03-02T13:02:19.487287210Z" level=info msg="StartContainer for \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\" returns successfully" Mar 2 13:02:19.594491 systemd[1]: cri-containerd-d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01.scope: Deactivated successfully. 
Mar 2 13:02:19.628804 containerd[1577]: time="2026-03-02T13:02:19.622262514Z" level=info msg="received container exit event container_id:\"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\" id:\"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\" pid:3233 exited_at:{seconds:1772456539 nanos:613290300}" Mar 2 13:02:19.862717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01-rootfs.mount: Deactivated successfully. Mar 2 13:02:20.209618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3557689447.mount: Deactivated successfully. Mar 2 13:02:20.841323 containerd[1577]: time="2026-03-02T13:02:20.840551367Z" level=info msg="CreateContainer within sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 2 13:02:20.907273 containerd[1577]: time="2026-03-02T13:02:20.904270911Z" level=info msg="Container cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:02:20.937800 containerd[1577]: time="2026-03-02T13:02:20.934919348Z" level=info msg="CreateContainer within sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\"" Mar 2 13:02:20.944460 containerd[1577]: time="2026-03-02T13:02:20.944351390Z" level=info msg="StartContainer for \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\"" Mar 2 13:02:20.951242 containerd[1577]: time="2026-03-02T13:02:20.951196322Z" level=info msg="connecting to shim cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b" address="unix:///run/containerd/s/b773a25e18dd4ae5f40d833493883e5190ff3a3d8e4432ada3957f87d96218e7" protocol=ttrpc version=3 Mar 2 13:02:21.005698 
systemd[1]: Started cri-containerd-cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b.scope - libcontainer container cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b. Mar 2 13:02:21.171507 containerd[1577]: time="2026-03-02T13:02:21.169670420Z" level=info msg="StartContainer for \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\" returns successfully" Mar 2 13:02:21.220866 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:02:21.221714 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:02:21.233226 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:02:21.237568 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:02:21.242519 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 2 13:02:21.248644 systemd[1]: cri-containerd-cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b.scope: Deactivated successfully. Mar 2 13:02:21.255607 containerd[1577]: time="2026-03-02T13:02:21.248153193Z" level=info msg="received container exit event container_id:\"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\" id:\"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\" pid:3291 exited_at:{seconds:1772456541 nanos:247471781}" Mar 2 13:02:21.249396 systemd[1]: cri-containerd-cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b.scope: Consumed 76ms CPU time, 5.6M memory peak, 88K read from disk, 2.2M written to disk. Mar 2 13:02:21.407447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:02:21.456954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b-rootfs.mount: Deactivated successfully. 
Mar 2 13:02:21.870099 containerd[1577]: time="2026-03-02T13:02:21.869277991Z" level=info msg="CreateContainer within sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 2 13:02:21.952168 containerd[1577]: time="2026-03-02T13:02:21.951297185Z" level=info msg="Container 48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:02:21.975889 containerd[1577]: time="2026-03-02T13:02:21.975775618Z" level=info msg="CreateContainer within sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\"" Mar 2 13:02:21.982300 containerd[1577]: time="2026-03-02T13:02:21.980669923Z" level=info msg="StartContainer for \"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\"" Mar 2 13:02:21.983265 containerd[1577]: time="2026-03-02T13:02:21.983218931Z" level=info msg="connecting to shim 48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3" address="unix:///run/containerd/s/b773a25e18dd4ae5f40d833493883e5190ff3a3d8e4432ada3957f87d96218e7" protocol=ttrpc version=3 Mar 2 13:02:22.061179 systemd[1]: Started cri-containerd-48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3.scope - libcontainer container 48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3. Mar 2 13:02:22.254371 containerd[1577]: time="2026-03-02T13:02:22.254296918Z" level=info msg="StartContainer for \"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\" returns successfully" Mar 2 13:02:22.254740 systemd[1]: cri-containerd-48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3.scope: Deactivated successfully. 
Mar 2 13:02:22.264918 containerd[1577]: time="2026-03-02T13:02:22.264571724Z" level=info msg="received container exit event container_id:\"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\" id:\"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\" pid:3339 exited_at:{seconds:1772456542 nanos:260310781}" Mar 2 13:02:22.331379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3-rootfs.mount: Deactivated successfully. Mar 2 13:02:22.482940 containerd[1577]: time="2026-03-02T13:02:22.482803201Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:22.487911 containerd[1577]: time="2026-03-02T13:02:22.487836372Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 2 13:02:22.490888 containerd[1577]: time="2026-03-02T13:02:22.490823181Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:22.493754 containerd[1577]: time="2026-03-02T13:02:22.493500941Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.533552587s" Mar 2 13:02:22.493754 containerd[1577]: time="2026-03-02T13:02:22.493576032Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 2 13:02:22.504796 containerd[1577]: time="2026-03-02T13:02:22.504568292Z" level=info msg="CreateContainer within sandbox \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 2 13:02:22.548229 containerd[1577]: time="2026-03-02T13:02:22.546956278Z" level=info msg="Container 193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:02:22.579591 containerd[1577]: time="2026-03-02T13:02:22.579458524Z" level=info msg="CreateContainer within sandbox \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\"" Mar 2 13:02:22.582175 containerd[1577]: time="2026-03-02T13:02:22.582102440Z" level=info msg="StartContainer for \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\"" Mar 2 13:02:22.589298 containerd[1577]: time="2026-03-02T13:02:22.589151152Z" level=info msg="connecting to shim 193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63" address="unix:///run/containerd/s/b75ca29e99acc2553d24658b24f80e9914a978c2da0ce5f785caa5af2682541c" protocol=ttrpc version=3 Mar 2 13:02:22.638372 systemd[1]: Started cri-containerd-193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63.scope - libcontainer container 193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63. 
Mar 2 13:02:22.718839 containerd[1577]: time="2026-03-02T13:02:22.718714898Z" level=info msg="StartContainer for \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\" returns successfully" Mar 2 13:02:22.860425 containerd[1577]: time="2026-03-02T13:02:22.853438579Z" level=info msg="CreateContainer within sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 2 13:02:22.894964 kubelet[2824]: I0302 13:02:22.894318 2824 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-69n42" podStartSLOduration=2.668429229 podStartE2EDuration="1m2.890962282s" podCreationTimestamp="2026-03-02 13:01:20 +0000 UTC" firstStartedPulling="2026-03-02 13:01:22.273947493 +0000 UTC m=+9.717010016" lastFinishedPulling="2026-03-02 13:02:22.496480546 +0000 UTC m=+69.939543069" observedRunningTime="2026-03-02 13:02:22.883278576 +0000 UTC m=+70.326341168" watchObservedRunningTime="2026-03-02 13:02:22.890962282 +0000 UTC m=+70.334024815" Mar 2 13:02:22.911740 containerd[1577]: time="2026-03-02T13:02:22.909229010Z" level=info msg="Container a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813: CDI devices from CRI Config.CDIDevices: []" Mar 2 13:02:22.948191 containerd[1577]: time="2026-03-02T13:02:22.948122735Z" level=info msg="CreateContainer within sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\"" Mar 2 13:02:22.954253 containerd[1577]: time="2026-03-02T13:02:22.953834312Z" level=info msg="StartContainer for \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\"" Mar 2 13:02:22.960072 containerd[1577]: time="2026-03-02T13:02:22.959196724Z" level=info msg="connecting to shim 
a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813" address="unix:///run/containerd/s/b773a25e18dd4ae5f40d833493883e5190ff3a3d8e4432ada3957f87d96218e7" protocol=ttrpc version=3 Mar 2 13:02:23.060470 systemd[1]: Started cri-containerd-a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813.scope - libcontainer container a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813. Mar 2 13:02:23.216223 systemd[1]: cri-containerd-a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813.scope: Deactivated successfully. Mar 2 13:02:23.250103 containerd[1577]: time="2026-03-02T13:02:23.248394331Z" level=info msg="received container exit event container_id:\"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\" id:\"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\" pid:3416 exited_at:{seconds:1772456543 nanos:247476011}" Mar 2 13:02:23.250103 containerd[1577]: time="2026-03-02T13:02:23.248531551Z" level=info msg="StartContainer for \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\" returns successfully" Mar 2 13:02:23.346208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813-rootfs.mount: Deactivated successfully. Mar 2 13:02:23.905200 containerd[1577]: time="2026-03-02T13:02:23.904605677Z" level=info msg="CreateContainer within sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 2 13:02:24.032838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151140217.mount: Deactivated successfully. 
Mar 2 13:02:24.068117 containerd[1577]: time="2026-03-02T13:02:24.063326907Z" level=info msg="Container e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:02:24.123485 containerd[1577]: time="2026-03-02T13:02:24.123218968Z" level=info msg="CreateContainer within sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\""
Mar 2 13:02:24.148102 containerd[1577]: time="2026-03-02T13:02:24.130939340Z" level=info msg="StartContainer for \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\""
Mar 2 13:02:24.162205 containerd[1577]: time="2026-03-02T13:02:24.161858232Z" level=info msg="connecting to shim e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee" address="unix:///run/containerd/s/b773a25e18dd4ae5f40d833493883e5190ff3a3d8e4432ada3957f87d96218e7" protocol=ttrpc version=3
Mar 2 13:02:24.222555 systemd[1]: Started cri-containerd-e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee.scope - libcontainer container e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee.
Mar 2 13:02:24.462502 containerd[1577]: time="2026-03-02T13:02:24.462417150Z" level=info msg="StartContainer for \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\" returns successfully"
Mar 2 13:02:24.728221 kubelet[2824]: I0302 13:02:24.726839 2824 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 2 13:02:24.858974 systemd[1]: Created slice kubepods-burstable-pod75126270_5e34_4c1c_8d5d_41652830de15.slice - libcontainer container kubepods-burstable-pod75126270_5e34_4c1c_8d5d_41652830de15.slice.
Mar 2 13:02:24.893678 systemd[1]: Created slice kubepods-burstable-pod3d06fa33_ded3_4118_8010_a710eb65220c.slice - libcontainer container kubepods-burstable-pod3d06fa33_ded3_4118_8010_a710eb65220c.slice.
Mar 2 13:02:24.897179 kubelet[2824]: I0302 13:02:24.896662 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvvc\" (UniqueName: \"kubernetes.io/projected/3d06fa33-ded3-4118-8010-a710eb65220c-kube-api-access-4pvvc\") pod \"coredns-7d764666f9-nmbxm\" (UID: \"3d06fa33-ded3-4118-8010-a710eb65220c\") " pod="kube-system/coredns-7d764666f9-nmbxm"
Mar 2 13:02:24.898097 kubelet[2824]: I0302 13:02:24.897725 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgrm4\" (UniqueName: \"kubernetes.io/projected/75126270-5e34-4c1c-8d5d-41652830de15-kube-api-access-sgrm4\") pod \"coredns-7d764666f9-plfcs\" (UID: \"75126270-5e34-4c1c-8d5d-41652830de15\") " pod="kube-system/coredns-7d764666f9-plfcs"
Mar 2 13:02:24.900234 kubelet[2824]: I0302 13:02:24.899307 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d06fa33-ded3-4118-8010-a710eb65220c-config-volume\") pod \"coredns-7d764666f9-nmbxm\" (UID: \"3d06fa33-ded3-4118-8010-a710eb65220c\") " pod="kube-system/coredns-7d764666f9-nmbxm"
Mar 2 13:02:24.900234 kubelet[2824]: I0302 13:02:24.899351 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75126270-5e34-4c1c-8d5d-41652830de15-config-volume\") pod \"coredns-7d764666f9-plfcs\" (UID: \"75126270-5e34-4c1c-8d5d-41652830de15\") " pod="kube-system/coredns-7d764666f9-plfcs"
Mar 2 13:02:25.176168 containerd[1577]: time="2026-03-02T13:02:25.174586524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-plfcs,Uid:75126270-5e34-4c1c-8d5d-41652830de15,Namespace:kube-system,Attempt:0,}"
Mar 2 13:02:25.230953 containerd[1577]: time="2026-03-02T13:02:25.230839336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nmbxm,Uid:3d06fa33-ded3-4118-8010-a710eb65220c,Namespace:kube-system,Attempt:0,}"
Mar 2 13:02:28.241333 systemd-networkd[1439]: cilium_host: Link UP
Mar 2 13:02:28.243916 systemd-networkd[1439]: cilium_net: Link UP
Mar 2 13:02:28.244381 systemd-networkd[1439]: cilium_net: Gained carrier
Mar 2 13:02:28.244738 systemd-networkd[1439]: cilium_host: Gained carrier
Mar 2 13:02:28.466609 systemd-networkd[1439]: cilium_net: Gained IPv6LL
Mar 2 13:02:28.632441 systemd-networkd[1439]: cilium_vxlan: Link UP
Mar 2 13:02:28.632456 systemd-networkd[1439]: cilium_vxlan: Gained carrier
Mar 2 13:02:28.870671 systemd-networkd[1439]: cilium_host: Gained IPv6LL
Mar 2 13:02:29.332439 kernel: NET: Registered PF_ALG protocol family
Mar 2 13:02:30.470192 systemd-networkd[1439]: cilium_vxlan: Gained IPv6LL
Mar 2 13:02:31.213917 systemd-networkd[1439]: lxc_health: Link UP
Mar 2 13:02:31.217049 systemd-networkd[1439]: lxc_health: Gained carrier
Mar 2 13:02:31.686790 kubelet[2824]: I0302 13:02:31.686459 2824 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-ptn7p" podStartSLOduration=10.094707101 podStartE2EDuration="1m11.686435962s" podCreationTimestamp="2026-03-02 13:01:20 +0000 UTC" firstStartedPulling="2026-03-02 13:01:22.264116718 +0000 UTC m=+9.707179252" lastFinishedPulling="2026-03-02 13:02:23.85584556 +0000 UTC m=+71.298908113" observedRunningTime="2026-03-02 13:02:25.007789512 +0000 UTC m=+72.450852075" watchObservedRunningTime="2026-03-02 13:02:31.686435962 +0000 UTC m=+79.129498485"
Mar 2 13:02:31.849804 kernel: eth0: renamed from tmp67ebf
Mar 2 13:02:31.844889 systemd-networkd[1439]: lxc2be4766c2657: Link UP
Mar 2 13:02:31.856595 systemd-networkd[1439]: lxc2be4766c2657: Gained carrier
Mar 2 13:02:31.984628 systemd-networkd[1439]: lxc7bbb2f628b4a: Link UP
Mar 2 13:02:31.993251 kernel: eth0: renamed from tmp95ca8
Mar 2 13:02:32.000619 systemd-networkd[1439]: lxc7bbb2f628b4a: Gained carrier
Mar 2 13:02:33.247938 systemd-networkd[1439]: lxc_health: Gained IPv6LL
Mar 2 13:02:33.612134 systemd-networkd[1439]: lxc2be4766c2657: Gained IPv6LL
Mar 2 13:02:34.004643 systemd-networkd[1439]: lxc7bbb2f628b4a: Gained IPv6LL
Mar 2 13:02:35.856511 kubelet[2824]: E0302 13:02:35.850669 2824 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.068s"
Mar 2 13:02:41.627842 sudo[1786]: pam_unix(sudo:session): session closed for user root
Mar 2 13:02:41.640887 sshd[1785]: Connection closed by 10.0.0.1 port 37810
Mar 2 13:02:41.641802 sshd-session[1782]: pam_unix(sshd:session): session closed for user core
Mar 2 13:02:41.656072 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit.
Mar 2 13:02:41.656478 systemd[1]: sshd@6-10.0.0.57:22-10.0.0.1:37810.service: Deactivated successfully.
Mar 2 13:02:41.661421 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 13:02:41.661884 systemd[1]: session-7.scope: Consumed 15.047s CPU time, 238.8M memory peak.
Mar 2 13:02:41.666218 systemd-logind[1559]: Removed session 7.
Mar 2 13:02:49.963557 kubelet[2824]: E0302 13:02:49.963183 2824 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.387s"
Mar 2 13:02:51.802803 containerd[1577]: time="2026-03-02T13:02:51.802508624Z" level=info msg="connecting to shim 67ebfd7b4f2fb6b4ac50130cbf9c381431e0e267d6faeb0cd219346f234ba1f2" address="unix:///run/containerd/s/7a55b5d6a630098dbeaf6852c41ea3f00f6b5e20d64c3e02e86252351b4f487f" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:02:51.810475 containerd[1577]: time="2026-03-02T13:02:51.809937247Z" level=info msg="connecting to shim 95ca8779b1f6380ef64b7e32a34b52daa0c6b38f04d9064c81a9cd08c11dc9d9" address="unix:///run/containerd/s/38a733678253224cac973413da9cf82c22c3322d45d4d4ee701f9d8a2f718b4e" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:02:52.966054 systemd[1]: Started cri-containerd-95ca8779b1f6380ef64b7e32a34b52daa0c6b38f04d9064c81a9cd08c11dc9d9.scope - libcontainer container 95ca8779b1f6380ef64b7e32a34b52daa0c6b38f04d9064c81a9cd08c11dc9d9.
Mar 2 13:02:53.134192 systemd[1]: Started cri-containerd-67ebfd7b4f2fb6b4ac50130cbf9c381431e0e267d6faeb0cd219346f234ba1f2.scope - libcontainer container 67ebfd7b4f2fb6b4ac50130cbf9c381431e0e267d6faeb0cd219346f234ba1f2.
Mar 2 13:02:53.709782 systemd-resolved[1440]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 13:02:54.057678 systemd-resolved[1440]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 13:02:54.580729 containerd[1577]: time="2026-03-02T13:02:54.579449227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-plfcs,Uid:75126270-5e34-4c1c-8d5d-41652830de15,Namespace:kube-system,Attempt:0,} returns sandbox id \"67ebfd7b4f2fb6b4ac50130cbf9c381431e0e267d6faeb0cd219346f234ba1f2\""
Mar 2 13:02:54.607928 containerd[1577]: time="2026-03-02T13:02:54.607867394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-nmbxm,Uid:3d06fa33-ded3-4118-8010-a710eb65220c,Namespace:kube-system,Attempt:0,} returns sandbox id \"95ca8779b1f6380ef64b7e32a34b52daa0c6b38f04d9064c81a9cd08c11dc9d9\""
Mar 2 13:02:54.614546 containerd[1577]: time="2026-03-02T13:02:54.612508322Z" level=info msg="CreateContainer within sandbox \"67ebfd7b4f2fb6b4ac50130cbf9c381431e0e267d6faeb0cd219346f234ba1f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 2 13:02:54.650921 containerd[1577]: time="2026-03-02T13:02:54.650577609Z" level=info msg="CreateContainer within sandbox \"95ca8779b1f6380ef64b7e32a34b52daa0c6b38f04d9064c81a9cd08c11dc9d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 2 13:02:54.908637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558593206.mount: Deactivated successfully.
Mar 2 13:02:54.928772 containerd[1577]: time="2026-03-02T13:02:54.926816943Z" level=info msg="Container 32c49f06a4ab814b1df0d1940c7966e7bdf4e742269a4fcaa962e318b4313128: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:02:54.933200 containerd[1577]: time="2026-03-02T13:02:54.932697127Z" level=info msg="Container 671dc4ba8e6a37eb9f3a47b6d1f2623a9940c22e3cf97d35ad1fa047bb32be5a: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:02:54.974532 containerd[1577]: time="2026-03-02T13:02:54.972910730Z" level=info msg="CreateContainer within sandbox \"95ca8779b1f6380ef64b7e32a34b52daa0c6b38f04d9064c81a9cd08c11dc9d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32c49f06a4ab814b1df0d1940c7966e7bdf4e742269a4fcaa962e318b4313128\""
Mar 2 13:02:55.010671 containerd[1577]: time="2026-03-02T13:02:55.008449283Z" level=info msg="StartContainer for \"32c49f06a4ab814b1df0d1940c7966e7bdf4e742269a4fcaa962e318b4313128\""
Mar 2 13:02:55.017855 containerd[1577]: time="2026-03-02T13:02:55.016574155Z" level=info msg="CreateContainer within sandbox \"67ebfd7b4f2fb6b4ac50130cbf9c381431e0e267d6faeb0cd219346f234ba1f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"671dc4ba8e6a37eb9f3a47b6d1f2623a9940c22e3cf97d35ad1fa047bb32be5a\""
Mar 2 13:02:55.021747 containerd[1577]: time="2026-03-02T13:02:55.021081631Z" level=info msg="connecting to shim 32c49f06a4ab814b1df0d1940c7966e7bdf4e742269a4fcaa962e318b4313128" address="unix:///run/containerd/s/38a733678253224cac973413da9cf82c22c3322d45d4d4ee701f9d8a2f718b4e" protocol=ttrpc version=3
Mar 2 13:02:55.029231 containerd[1577]: time="2026-03-02T13:02:55.028965837Z" level=info msg="StartContainer for \"671dc4ba8e6a37eb9f3a47b6d1f2623a9940c22e3cf97d35ad1fa047bb32be5a\""
Mar 2 13:02:55.363270 containerd[1577]: time="2026-03-02T13:02:55.361815109Z" level=info msg="connecting to shim 671dc4ba8e6a37eb9f3a47b6d1f2623a9940c22e3cf97d35ad1fa047bb32be5a" address="unix:///run/containerd/s/7a55b5d6a630098dbeaf6852c41ea3f00f6b5e20d64c3e02e86252351b4f487f" protocol=ttrpc version=3
Mar 2 13:02:56.756163 systemd[1]: Started cri-containerd-32c49f06a4ab814b1df0d1940c7966e7bdf4e742269a4fcaa962e318b4313128.scope - libcontainer container 32c49f06a4ab814b1df0d1940c7966e7bdf4e742269a4fcaa962e318b4313128.
Mar 2 13:02:56.817474 systemd[1]: Started cri-containerd-671dc4ba8e6a37eb9f3a47b6d1f2623a9940c22e3cf97d35ad1fa047bb32be5a.scope - libcontainer container 671dc4ba8e6a37eb9f3a47b6d1f2623a9940c22e3cf97d35ad1fa047bb32be5a.
Mar 2 13:02:58.184296 containerd[1577]: time="2026-03-02T13:02:58.181342013Z" level=info msg="StartContainer for \"32c49f06a4ab814b1df0d1940c7966e7bdf4e742269a4fcaa962e318b4313128\" returns successfully"
Mar 2 13:02:58.477354 containerd[1577]: time="2026-03-02T13:02:58.477278880Z" level=info msg="StartContainer for \"671dc4ba8e6a37eb9f3a47b6d1f2623a9940c22e3cf97d35ad1fa047bb32be5a\" returns successfully"
Mar 2 13:03:00.918783 kubelet[2824]: I0302 13:03:00.916740 2824 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-plfcs" podStartSLOduration=106.916676908 podStartE2EDuration="1m46.916676908s" podCreationTimestamp="2026-03-02 13:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:03:00.405775766 +0000 UTC m=+107.848838400" watchObservedRunningTime="2026-03-02 13:03:00.916676908 +0000 UTC m=+108.359739430"
Mar 2 13:03:00.918783 kubelet[2824]: I0302 13:03:00.917871 2824 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-nmbxm" podStartSLOduration=106.91691333 podStartE2EDuration="1m46.91691333s" podCreationTimestamp="2026-03-02 13:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:03:00.894281578 +0000 UTC m=+108.337344111" watchObservedRunningTime="2026-03-02 13:03:00.91691333 +0000 UTC m=+108.359975873"
Mar 2 13:03:24.112445 systemd[1]: cri-containerd-193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63.scope: Deactivated successfully.
Mar 2 13:03:24.113421 systemd[1]: cri-containerd-193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63.scope: Consumed 1.673s CPU time, 26.1M memory peak, 4K written to disk.
Mar 2 13:03:24.183191 systemd[1]: cri-containerd-e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e.scope: Deactivated successfully.
Mar 2 13:03:24.183942 systemd[1]: cri-containerd-e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e.scope: Consumed 16.440s CPU time, 52.2M memory peak, 196K read from disk.
Mar 2 13:03:24.212958 containerd[1577]: time="2026-03-02T13:03:24.212609054Z" level=info msg="received container exit event container_id:\"e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e\" id:\"e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e\" pid:2665 exit_status:1 exited_at:{seconds:1772456604 nanos:191830322}"
Mar 2 13:03:24.256503 containerd[1577]: time="2026-03-02T13:03:24.256437452Z" level=info msg="received container exit event container_id:\"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\" id:\"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\" pid:3383 exit_status:1 exited_at:{seconds:1772456604 nanos:255845448}"
Mar 2 13:03:24.357845 systemd[1]: cri-containerd-5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155.scope: Deactivated successfully.
Mar 2 13:03:24.359129 systemd[1]: cri-containerd-5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155.scope: Consumed 8.896s CPU time, 21.4M memory peak, 128K read from disk.
Mar 2 13:03:24.374284 containerd[1577]: time="2026-03-02T13:03:24.374045025Z" level=info msg="received container exit event container_id:\"5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155\" id:\"5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155\" pid:2680 exit_status:1 exited_at:{seconds:1772456604 nanos:372592785}"
Mar 2 13:03:24.378065 kubelet[2824]: E0302 13:03:24.375625 2824 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.89s"
Mar 2 13:03:24.506592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63-rootfs.mount: Deactivated successfully.
Mar 2 13:03:24.541412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e-rootfs.mount: Deactivated successfully.
Mar 2 13:03:24.547298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155-rootfs.mount: Deactivated successfully.
Mar 2 13:03:25.369469 kubelet[2824]: I0302 13:03:25.360545 2824 scope.go:122] "RemoveContainer" containerID="e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e"
Mar 2 13:03:25.381414 kubelet[2824]: I0302 13:03:25.379194 2824 scope.go:122] "RemoveContainer" containerID="5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155"
Mar 2 13:03:25.395338 containerd[1577]: time="2026-03-02T13:03:25.393537727Z" level=info msg="CreateContainer within sandbox \"241e37a3484fd717f7ace17a81cec0f5928c4af6afe4184c696e7f20c18b363e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 2 13:03:25.396592 containerd[1577]: time="2026-03-02T13:03:25.396516109Z" level=info msg="CreateContainer within sandbox \"33379e96e56f0396c6f5b122c94fb1f99b9aa9b0765e0ab3a846fbcca3b9b499\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 2 13:03:25.401332 kubelet[2824]: I0302 13:03:25.401256 2824 scope.go:122] "RemoveContainer" containerID="193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63"
Mar 2 13:03:25.415467 containerd[1577]: time="2026-03-02T13:03:25.415380400Z" level=info msg="CreateContainer within sandbox \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Mar 2 13:03:25.509474 containerd[1577]: time="2026-03-02T13:03:25.509305178Z" level=info msg="Container d51dc43e76b7726f8025b61076e4e2f8b3b49fe6bda6ef1c684e86203416fa45: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:03:25.532201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount172738086.mount: Deactivated successfully.
Mar 2 13:03:25.537927 containerd[1577]: time="2026-03-02T13:03:25.537150771Z" level=info msg="Container d62aa1335f70d0c0a63102a796ab2ffa29ea08504a26199bdefb9dd64e877083: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:03:25.539542 containerd[1577]: time="2026-03-02T13:03:25.539410356Z" level=info msg="Container ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:03:25.542206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547536787.mount: Deactivated successfully.
Mar 2 13:03:25.550702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount944413687.mount: Deactivated successfully.
Mar 2 13:03:25.557384 containerd[1577]: time="2026-03-02T13:03:25.557311404Z" level=info msg="CreateContainer within sandbox \"241e37a3484fd717f7ace17a81cec0f5928c4af6afe4184c696e7f20c18b363e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d51dc43e76b7726f8025b61076e4e2f8b3b49fe6bda6ef1c684e86203416fa45\""
Mar 2 13:03:25.562505 containerd[1577]: time="2026-03-02T13:03:25.562326994Z" level=info msg="StartContainer for \"d51dc43e76b7726f8025b61076e4e2f8b3b49fe6bda6ef1c684e86203416fa45\""
Mar 2 13:03:25.566620 containerd[1577]: time="2026-03-02T13:03:25.566134377Z" level=info msg="connecting to shim d51dc43e76b7726f8025b61076e4e2f8b3b49fe6bda6ef1c684e86203416fa45" address="unix:///run/containerd/s/b597e15f98e9eb682983bc0d04de46e143550f2b775fa79eacc7b998fb5d0272" protocol=ttrpc version=3
Mar 2 13:03:25.607578 containerd[1577]: time="2026-03-02T13:03:25.607166981Z" level=info msg="CreateContainer within sandbox \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\""
Mar 2 13:03:25.612050 containerd[1577]: time="2026-03-02T13:03:25.611477898Z" level=info msg="StartContainer for \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\""
Mar 2 13:03:25.626163 containerd[1577]: time="2026-03-02T13:03:25.623213868Z" level=info msg="connecting to shim ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4" address="unix:///run/containerd/s/b75ca29e99acc2553d24658b24f80e9914a978c2da0ce5f785caa5af2682541c" protocol=ttrpc version=3
Mar 2 13:03:25.635166 containerd[1577]: time="2026-03-02T13:03:25.634973033Z" level=info msg="CreateContainer within sandbox \"33379e96e56f0396c6f5b122c94fb1f99b9aa9b0765e0ab3a846fbcca3b9b499\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d62aa1335f70d0c0a63102a796ab2ffa29ea08504a26199bdefb9dd64e877083\""
Mar 2 13:03:25.645518 containerd[1577]: time="2026-03-02T13:03:25.645312231Z" level=info msg="StartContainer for \"d62aa1335f70d0c0a63102a796ab2ffa29ea08504a26199bdefb9dd64e877083\""
Mar 2 13:03:25.669080 containerd[1577]: time="2026-03-02T13:03:25.668951020Z" level=info msg="connecting to shim d62aa1335f70d0c0a63102a796ab2ffa29ea08504a26199bdefb9dd64e877083" address="unix:///run/containerd/s/3a44ec299b49f741fe318b85cee07ad3d58a0e84e833c57f57a5977bb3ff80ac" protocol=ttrpc version=3
Mar 2 13:03:25.671412 systemd[1]: Started cri-containerd-d51dc43e76b7726f8025b61076e4e2f8b3b49fe6bda6ef1c684e86203416fa45.scope - libcontainer container d51dc43e76b7726f8025b61076e4e2f8b3b49fe6bda6ef1c684e86203416fa45.
Mar 2 13:03:25.697187 systemd[1]: Started cri-containerd-ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4.scope - libcontainer container ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4.
Mar 2 13:03:25.744662 systemd[1]: Started cri-containerd-d62aa1335f70d0c0a63102a796ab2ffa29ea08504a26199bdefb9dd64e877083.scope - libcontainer container d62aa1335f70d0c0a63102a796ab2ffa29ea08504a26199bdefb9dd64e877083.
Mar 2 13:03:25.855162 containerd[1577]: time="2026-03-02T13:03:25.855063270Z" level=info msg="StartContainer for \"d51dc43e76b7726f8025b61076e4e2f8b3b49fe6bda6ef1c684e86203416fa45\" returns successfully"
Mar 2 13:03:25.891260 containerd[1577]: time="2026-03-02T13:03:25.889901278Z" level=info msg="StartContainer for \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\" returns successfully"
Mar 2 13:03:25.940725 containerd[1577]: time="2026-03-02T13:03:25.940268991Z" level=info msg="StartContainer for \"d62aa1335f70d0c0a63102a796ab2ffa29ea08504a26199bdefb9dd64e877083\" returns successfully"
Mar 2 13:03:39.601117 kubelet[2824]: E0302 13:03:39.555391 2824 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.081s"
Mar 2 13:03:43.970953 kubelet[2824]: E0302 13:03:43.970085 2824 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.179s"
Mar 2 13:04:01.944963 systemd[1]: Started sshd@7-10.0.0.57:22-10.0.0.1:44848.service - OpenSSH per-connection server daemon (10.0.0.1:44848).
Mar 2 13:04:02.069164 sshd[4426]: Accepted publickey for core from 10.0.0.1 port 44848 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:02.072822 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:02.082602 systemd-logind[1559]: New session 8 of user core.
Mar 2 13:04:02.090330 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 2 13:04:02.275834 sshd[4429]: Connection closed by 10.0.0.1 port 44848
Mar 2 13:04:02.275937 sshd-session[4426]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:02.283444 systemd[1]: sshd@7-10.0.0.57:22-10.0.0.1:44848.service: Deactivated successfully.
Mar 2 13:04:02.287169 systemd[1]: session-8.scope: Deactivated successfully.
Mar 2 13:04:02.288923 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit.
Mar 2 13:04:02.292053 systemd-logind[1559]: Removed session 8.
Mar 2 13:04:07.303094 systemd[1]: Started sshd@8-10.0.0.57:22-10.0.0.1:44852.service - OpenSSH per-connection server daemon (10.0.0.1:44852).
Mar 2 13:04:07.445155 sshd[4448]: Accepted publickey for core from 10.0.0.1 port 44852 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:07.447835 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:07.470283 systemd-logind[1559]: New session 9 of user core.
Mar 2 13:04:07.486842 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 2 13:04:07.736392 sshd[4451]: Connection closed by 10.0.0.1 port 44852
Mar 2 13:04:07.736493 sshd-session[4448]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:07.750706 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit.
Mar 2 13:04:07.755817 systemd[1]: sshd@8-10.0.0.57:22-10.0.0.1:44852.service: Deactivated successfully.
Mar 2 13:04:07.760680 systemd[1]: session-9.scope: Deactivated successfully.
Mar 2 13:04:07.774126 systemd-logind[1559]: Removed session 9.
Mar 2 13:04:12.756814 systemd[1]: Started sshd@9-10.0.0.57:22-10.0.0.1:36738.service - OpenSSH per-connection server daemon (10.0.0.1:36738).
Mar 2 13:04:12.875467 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 36738 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:12.880383 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:12.896494 systemd-logind[1559]: New session 10 of user core.
Mar 2 13:04:12.905738 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 2 13:04:13.089899 sshd[4469]: Connection closed by 10.0.0.1 port 36738
Mar 2 13:04:13.089353 sshd-session[4466]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:13.096850 systemd[1]: sshd@9-10.0.0.57:22-10.0.0.1:36738.service: Deactivated successfully.
Mar 2 13:04:13.101520 systemd[1]: session-10.scope: Deactivated successfully.
Mar 2 13:04:13.103929 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit.
Mar 2 13:04:13.106538 systemd-logind[1559]: Removed session 10.
Mar 2 13:04:18.120758 systemd[1]: Started sshd@10-10.0.0.57:22-10.0.0.1:36746.service - OpenSSH per-connection server daemon (10.0.0.1:36746).
Mar 2 13:04:18.257178 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 36746 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:18.260721 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:18.282296 systemd-logind[1559]: New session 11 of user core.
Mar 2 13:04:18.295749 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 2 13:04:18.552180 sshd[4492]: Connection closed by 10.0.0.1 port 36746
Mar 2 13:04:18.552383 sshd-session[4486]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:18.566752 systemd[1]: sshd@10-10.0.0.57:22-10.0.0.1:36746.service: Deactivated successfully.
Mar 2 13:04:18.573350 systemd[1]: session-11.scope: Deactivated successfully.
Mar 2 13:04:18.577138 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit.
Mar 2 13:04:18.580937 systemd-logind[1559]: Removed session 11.
Mar 2 13:04:23.642296 systemd[1]: Started sshd@11-10.0.0.57:22-10.0.0.1:53216.service - OpenSSH per-connection server daemon (10.0.0.1:53216).
Mar 2 13:04:23.906514 sshd[4510]: Accepted publickey for core from 10.0.0.1 port 53216 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:23.919085 sshd-session[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:23.944815 systemd-logind[1559]: New session 12 of user core.
Mar 2 13:04:23.959636 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 2 13:04:24.321765 sshd[4513]: Connection closed by 10.0.0.1 port 53216
Mar 2 13:04:24.323905 sshd-session[4510]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:24.346423 systemd[1]: sshd@11-10.0.0.57:22-10.0.0.1:53216.service: Deactivated successfully.
Mar 2 13:04:24.355364 systemd[1]: session-12.scope: Deactivated successfully.
Mar 2 13:04:24.360803 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit.
Mar 2 13:04:24.366493 systemd-logind[1559]: Removed session 12.
Mar 2 13:04:29.363563 systemd[1]: Started sshd@12-10.0.0.57:22-10.0.0.1:52752.service - OpenSSH per-connection server daemon (10.0.0.1:52752).
Mar 2 13:04:29.551566 sshd[4528]: Accepted publickey for core from 10.0.0.1 port 52752 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:29.554826 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:29.580773 systemd-logind[1559]: New session 13 of user core.
Mar 2 13:04:29.591945 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 2 13:04:29.848790 sshd[4531]: Connection closed by 10.0.0.1 port 52752
Mar 2 13:04:29.849646 sshd-session[4528]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:29.858597 systemd[1]: sshd@12-10.0.0.57:22-10.0.0.1:52752.service: Deactivated successfully.
Mar 2 13:04:29.863276 systemd[1]: session-13.scope: Deactivated successfully.
Mar 2 13:04:29.868753 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit.
Mar 2 13:04:29.873484 systemd-logind[1559]: Removed session 13.
Mar 2 13:04:34.882354 systemd[1]: Started sshd@13-10.0.0.57:22-10.0.0.1:52756.service - OpenSSH per-connection server daemon (10.0.0.1:52756).
Mar 2 13:04:34.974271 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 52756 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:34.977269 sshd-session[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:34.989845 systemd-logind[1559]: New session 14 of user core.
Mar 2 13:04:34.997622 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 2 13:04:35.166793 sshd[4548]: Connection closed by 10.0.0.1 port 52756
Mar 2 13:04:35.168374 sshd-session[4545]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:35.176235 systemd[1]: sshd@13-10.0.0.57:22-10.0.0.1:52756.service: Deactivated successfully.
Mar 2 13:04:35.180520 systemd[1]: session-14.scope: Deactivated successfully.
Mar 2 13:04:35.183696 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit.
Mar 2 13:04:35.188565 systemd-logind[1559]: Removed session 14.
Mar 2 13:04:40.210088 systemd[1]: Started sshd@14-10.0.0.57:22-10.0.0.1:55382.service - OpenSSH per-connection server daemon (10.0.0.1:55382).
Mar 2 13:04:40.365525 sshd[4562]: Accepted publickey for core from 10.0.0.1 port 55382 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:40.368166 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:40.386912 systemd-logind[1559]: New session 15 of user core.
Mar 2 13:04:40.418274 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 2 13:04:40.718104 sshd[4565]: Connection closed by 10.0.0.1 port 55382
Mar 2 13:04:40.718719 sshd-session[4562]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:40.750132 systemd[1]: sshd@14-10.0.0.57:22-10.0.0.1:55382.service: Deactivated successfully.
Mar 2 13:04:40.753927 systemd[1]: session-15.scope: Deactivated successfully.
Mar 2 13:04:40.757110 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit.
Mar 2 13:04:40.769560 systemd[1]: Started sshd@15-10.0.0.57:22-10.0.0.1:55394.service - OpenSSH per-connection server daemon (10.0.0.1:55394).
Mar 2 13:04:40.775290 systemd-logind[1559]: Removed session 15.
Mar 2 13:04:40.972525 sshd[4579]: Accepted publickey for core from 10.0.0.1 port 55394 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:40.976554 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:41.003286 systemd-logind[1559]: New session 16 of user core.
Mar 2 13:04:41.019824 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 2 13:04:41.545738 sshd[4582]: Connection closed by 10.0.0.1 port 55394
Mar 2 13:04:41.543133 sshd-session[4579]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:41.571548 systemd[1]: sshd@15-10.0.0.57:22-10.0.0.1:55394.service: Deactivated successfully.
Mar 2 13:04:41.588113 systemd[1]: session-16.scope: Deactivated successfully.
Mar 2 13:04:41.591068 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit.
Mar 2 13:04:41.615575 systemd[1]: Started sshd@16-10.0.0.57:22-10.0.0.1:55404.service - OpenSSH per-connection server daemon (10.0.0.1:55404).
Mar 2 13:04:41.618350 systemd-logind[1559]: Removed session 16.
Mar 2 13:04:41.822222 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 55404 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:41.840315 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:41.864787 systemd-logind[1559]: New session 17 of user core.
Mar 2 13:04:41.881724 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 2 13:04:42.162552 sshd[4598]: Connection closed by 10.0.0.1 port 55404
Mar 2 13:04:42.164976 sshd-session[4594]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:42.191514 systemd[1]: sshd@16-10.0.0.57:22-10.0.0.1:55404.service: Deactivated successfully.
Mar 2 13:04:42.200248 systemd[1]: session-17.scope: Deactivated successfully.
Mar 2 13:04:42.203286 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit.
Mar 2 13:04:42.207826 systemd-logind[1559]: Removed session 17.
Mar 2 13:04:47.243305 systemd[1]: Started sshd@17-10.0.0.57:22-10.0.0.1:55412.service - OpenSSH per-connection server daemon (10.0.0.1:55412).
Mar 2 13:04:47.472360 sshd[4612]: Accepted publickey for core from 10.0.0.1 port 55412 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:47.478497 sshd-session[4612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:47.494086 systemd-logind[1559]: New session 18 of user core.
Mar 2 13:04:47.515115 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 2 13:04:47.851356 sshd[4615]: Connection closed by 10.0.0.1 port 55412
Mar 2 13:04:47.851876 sshd-session[4612]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:47.865134 systemd[1]: sshd@17-10.0.0.57:22-10.0.0.1:55412.service: Deactivated successfully.
Mar 2 13:04:47.879868 systemd[1]: session-18.scope: Deactivated successfully.
Mar 2 13:04:47.883434 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit.
Mar 2 13:04:47.888254 systemd-logind[1559]: Removed session 18.
Mar 2 13:04:52.876341 systemd[1]: Started sshd@18-10.0.0.57:22-10.0.0.1:52988.service - OpenSSH per-connection server daemon (10.0.0.1:52988).
Mar 2 13:04:53.144271 sshd[4631]: Accepted publickey for core from 10.0.0.1 port 52988 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:53.148638 sshd-session[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:53.187398 systemd-logind[1559]: New session 19 of user core.
Mar 2 13:04:53.211393 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 2 13:04:53.562334 sshd[4634]: Connection closed by 10.0.0.1 port 52988
Mar 2 13:04:53.564431 sshd-session[4631]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:53.574560 systemd[1]: sshd@18-10.0.0.57:22-10.0.0.1:52988.service: Deactivated successfully.
Mar 2 13:04:53.578545 systemd[1]: session-19.scope: Deactivated successfully.
Mar 2 13:04:53.583245 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit.
Mar 2 13:04:53.587136 systemd-logind[1559]: Removed session 19.
Mar 2 13:04:59.465801 systemd[1]: Started sshd@19-10.0.0.57:22-10.0.0.1:53000.service - OpenSSH per-connection server daemon (10.0.0.1:53000).
Mar 2 13:04:59.591355 sshd[4649]: Accepted publickey for core from 10.0.0.1 port 53000 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:04:59.595148 sshd-session[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:59.626666 systemd-logind[1559]: New session 20 of user core.
Mar 2 13:04:59.643854 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 2 13:04:59.941658 sshd[4652]: Connection closed by 10.0.0.1 port 53000
Mar 2 13:04:59.942314 sshd-session[4649]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:59.956111 systemd[1]: sshd@19-10.0.0.57:22-10.0.0.1:53000.service: Deactivated successfully.
Mar 2 13:04:59.961279 systemd[1]: session-20.scope: Deactivated successfully.
Mar 2 13:04:59.965111 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit.
Mar 2 13:04:59.971536 systemd-logind[1559]: Removed session 20.
Mar 2 13:05:05.569745 systemd[1]: Started sshd@20-10.0.0.57:22-10.0.0.1:49314.service - OpenSSH per-connection server daemon (10.0.0.1:49314).
Mar 2 13:05:07.561382 sshd[4669]: Accepted publickey for core from 10.0.0.1 port 49314 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:07.569914 sshd-session[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:07.602290 systemd-logind[1559]: New session 21 of user core.
Mar 2 13:05:07.612810 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 2 13:05:13.554311 sshd[4672]: Connection closed by 10.0.0.1 port 49314
Mar 2 13:05:13.558783 sshd-session[4669]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:13.574853 systemd[1]: sshd@20-10.0.0.57:22-10.0.0.1:49314.service: Deactivated successfully.
Mar 2 13:05:13.583906 systemd[1]: session-21.scope: Deactivated successfully.
Mar 2 13:05:13.597721 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit.
Mar 2 13:05:13.608393 systemd-logind[1559]: Removed session 21.
Mar 2 13:05:18.607878 systemd[1]: Started sshd@21-10.0.0.57:22-10.0.0.1:43434.service - OpenSSH per-connection server daemon (10.0.0.1:43434).
Mar 2 13:05:18.930201 sshd[4688]: Accepted publickey for core from 10.0.0.1 port 43434 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:18.931406 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:18.958477 systemd-logind[1559]: New session 22 of user core.
Mar 2 13:05:18.971692 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 2 13:05:19.255589 sshd[4691]: Connection closed by 10.0.0.1 port 43434
Mar 2 13:05:19.255236 sshd-session[4688]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:19.262892 systemd[1]: sshd@21-10.0.0.57:22-10.0.0.1:43434.service: Deactivated successfully.
Mar 2 13:05:19.267492 systemd[1]: session-22.scope: Deactivated successfully.
Mar 2 13:05:19.271720 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit.
Mar 2 13:05:19.278450 systemd-logind[1559]: Removed session 22.
Mar 2 13:05:24.309967 systemd[1]: Started sshd@22-10.0.0.57:22-10.0.0.1:41582.service - OpenSSH per-connection server daemon (10.0.0.1:41582).
Mar 2 13:05:24.459069 sshd[4707]: Accepted publickey for core from 10.0.0.1 port 41582 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:24.467344 sshd-session[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:24.483940 systemd-logind[1559]: New session 23 of user core.
Mar 2 13:05:24.496866 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 2 13:05:24.913751 sshd[4710]: Connection closed by 10.0.0.1 port 41582
Mar 2 13:05:24.914910 sshd-session[4707]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:24.940912 systemd[1]: sshd@22-10.0.0.57:22-10.0.0.1:41582.service: Deactivated successfully.
Mar 2 13:05:24.947760 systemd[1]: session-23.scope: Deactivated successfully.
Mar 2 13:05:24.963161 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit.
Mar 2 13:05:24.967610 systemd-logind[1559]: Removed session 23.
Mar 2 13:05:29.972146 systemd[1]: Started sshd@23-10.0.0.57:22-10.0.0.1:50038.service - OpenSSH per-connection server daemon (10.0.0.1:50038).
Mar 2 13:05:30.127333 sshd[4723]: Accepted publickey for core from 10.0.0.1 port 50038 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:30.146084 sshd-session[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:30.160612 systemd-logind[1559]: New session 24 of user core.
Mar 2 13:05:30.175653 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 2 13:05:30.486694 sshd[4726]: Connection closed by 10.0.0.1 port 50038
Mar 2 13:05:30.489445 sshd-session[4723]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:30.524849 systemd[1]: sshd@23-10.0.0.57:22-10.0.0.1:50038.service: Deactivated successfully.
Mar 2 13:05:30.541114 systemd[1]: session-24.scope: Deactivated successfully.
Mar 2 13:05:30.543398 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit.
Mar 2 13:05:30.561826 systemd[1]: Started sshd@24-10.0.0.57:22-10.0.0.1:50040.service - OpenSSH per-connection server daemon (10.0.0.1:50040).
Mar 2 13:05:30.567902 systemd-logind[1559]: Removed session 24.
Mar 2 13:05:30.716617 sshd[4740]: Accepted publickey for core from 10.0.0.1 port 50040 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:30.722951 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:30.770568 systemd-logind[1559]: New session 25 of user core.
Mar 2 13:05:30.786667 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 2 13:05:32.195903 sshd[4743]: Connection closed by 10.0.0.1 port 50040
Mar 2 13:05:32.195317 sshd-session[4740]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:32.223910 systemd[1]: sshd@24-10.0.0.57:22-10.0.0.1:50040.service: Deactivated successfully.
Mar 2 13:05:32.236326 systemd[1]: session-25.scope: Deactivated successfully.
Mar 2 13:05:32.241824 systemd-logind[1559]: Session 25 logged out. Waiting for processes to exit.
Mar 2 13:05:32.260472 systemd-logind[1559]: Removed session 25.
Mar 2 13:05:32.273267 systemd[1]: Started sshd@25-10.0.0.57:22-10.0.0.1:50054.service - OpenSSH per-connection server daemon (10.0.0.1:50054).
Mar 2 13:05:32.586969 sshd[4754]: Accepted publickey for core from 10.0.0.1 port 50054 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:32.597078 sshd-session[4754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:32.621707 systemd-logind[1559]: New session 26 of user core.
Mar 2 13:05:32.695054 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 2 13:05:34.399078 sshd[4757]: Connection closed by 10.0.0.1 port 50054
Mar 2 13:05:34.400688 sshd-session[4754]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:34.439731 systemd[1]: sshd@25-10.0.0.57:22-10.0.0.1:50054.service: Deactivated successfully.
Mar 2 13:05:34.443226 systemd[1]: session-26.scope: Deactivated successfully.
Mar 2 13:05:34.457799 systemd-logind[1559]: Session 26 logged out. Waiting for processes to exit.
Mar 2 13:05:34.482376 systemd[1]: Started sshd@26-10.0.0.57:22-10.0.0.1:50068.service - OpenSSH per-connection server daemon (10.0.0.1:50068).
Mar 2 13:05:34.491385 systemd-logind[1559]: Removed session 26.
Mar 2 13:05:34.674473 sshd[4779]: Accepted publickey for core from 10.0.0.1 port 50068 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:34.679425 sshd-session[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:34.699422 systemd-logind[1559]: New session 27 of user core.
Mar 2 13:05:34.709215 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 2 13:05:35.492281 sshd[4782]: Connection closed by 10.0.0.1 port 50068
Mar 2 13:05:35.494311 sshd-session[4779]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:35.544329 systemd[1]: sshd@26-10.0.0.57:22-10.0.0.1:50068.service: Deactivated successfully.
Mar 2 13:05:35.549309 systemd[1]: session-27.scope: Deactivated successfully.
Mar 2 13:05:35.554126 systemd-logind[1559]: Session 27 logged out. Waiting for processes to exit.
Mar 2 13:05:35.557951 systemd[1]: Started sshd@27-10.0.0.57:22-10.0.0.1:50078.service - OpenSSH per-connection server daemon (10.0.0.1:50078).
Mar 2 13:05:35.567231 systemd-logind[1559]: Removed session 27.
Mar 2 13:05:35.695199 sshd[4796]: Accepted publickey for core from 10.0.0.1 port 50078 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:35.699210 sshd-session[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:35.713681 systemd-logind[1559]: New session 28 of user core.
Mar 2 13:05:35.723927 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 2 13:05:35.918056 sshd[4799]: Connection closed by 10.0.0.1 port 50078
Mar 2 13:05:35.920406 sshd-session[4796]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:35.932387 systemd[1]: sshd@27-10.0.0.57:22-10.0.0.1:50078.service: Deactivated successfully.
Mar 2 13:05:35.935950 systemd[1]: session-28.scope: Deactivated successfully.
Mar 2 13:05:35.942796 systemd-logind[1559]: Session 28 logged out. Waiting for processes to exit.
Mar 2 13:05:35.946466 systemd-logind[1559]: Removed session 28.
Mar 2 13:05:40.954603 systemd[1]: Started sshd@28-10.0.0.57:22-10.0.0.1:57188.service - OpenSSH per-connection server daemon (10.0.0.1:57188).
Mar 2 13:05:41.140824 sshd[4813]: Accepted publickey for core from 10.0.0.1 port 57188 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:41.145565 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:41.165069 systemd-logind[1559]: New session 29 of user core.
Mar 2 13:05:41.173720 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 2 13:05:41.404682 sshd[4816]: Connection closed by 10.0.0.1 port 57188
Mar 2 13:05:41.405265 sshd-session[4813]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:41.418225 systemd[1]: sshd@28-10.0.0.57:22-10.0.0.1:57188.service: Deactivated successfully.
Mar 2 13:05:41.437423 systemd[1]: session-29.scope: Deactivated successfully.
Mar 2 13:05:41.442557 systemd-logind[1559]: Session 29 logged out. Waiting for processes to exit.
Mar 2 13:05:41.449979 systemd-logind[1559]: Removed session 29.
Mar 2 13:05:46.469506 systemd[1]: Started sshd@29-10.0.0.57:22-10.0.0.1:57196.service - OpenSSH per-connection server daemon (10.0.0.1:57196).
Mar 2 13:05:46.628643 sshd[4829]: Accepted publickey for core from 10.0.0.1 port 57196 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:46.646609 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:46.667985 systemd-logind[1559]: New session 30 of user core.
Mar 2 13:05:46.680252 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 2 13:05:46.937420 sshd[4832]: Connection closed by 10.0.0.1 port 57196
Mar 2 13:05:46.936477 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:46.948658 systemd[1]: sshd@29-10.0.0.57:22-10.0.0.1:57196.service: Deactivated successfully.
Mar 2 13:05:46.959807 systemd[1]: session-30.scope: Deactivated successfully.
Mar 2 13:05:46.965500 systemd-logind[1559]: Session 30 logged out. Waiting for processes to exit.
Mar 2 13:05:46.973428 systemd-logind[1559]: Removed session 30.
Mar 2 13:05:47.250195 containerd[1577]: time="2026-03-02T13:05:47.249203067Z" level=warning msg="container event discarded" container=241e37a3484fd717f7ace17a81cec0f5928c4af6afe4184c696e7f20c18b363e type=CONTAINER_CREATED_EVENT
Mar 2 13:05:47.263081 containerd[1577]: time="2026-03-02T13:05:47.262120083Z" level=warning msg="container event discarded" container=241e37a3484fd717f7ace17a81cec0f5928c4af6afe4184c696e7f20c18b363e type=CONTAINER_STARTED_EVENT
Mar 2 13:05:47.309740 containerd[1577]: time="2026-03-02T13:05:47.309556123Z" level=warning msg="container event discarded" container=33379e96e56f0396c6f5b122c94fb1f99b9aa9b0765e0ab3a846fbcca3b9b499 type=CONTAINER_CREATED_EVENT
Mar 2 13:05:47.309740 containerd[1577]: time="2026-03-02T13:05:47.309652442Z" level=warning msg="container event discarded" container=33379e96e56f0396c6f5b122c94fb1f99b9aa9b0765e0ab3a846fbcca3b9b499 type=CONTAINER_STARTED_EVENT
Mar 2 13:05:47.309740 containerd[1577]: time="2026-03-02T13:05:47.309680384Z" level=warning msg="container event discarded" container=9321d356c3a1e81c633cce62e9e904a3ddb6a8506809c1cc645f0c2c93d2532d type=CONTAINER_CREATED_EVENT
Mar 2 13:05:47.309740 containerd[1577]: time="2026-03-02T13:05:47.309692447Z" level=warning msg="container event discarded" container=9321d356c3a1e81c633cce62e9e904a3ddb6a8506809c1cc645f0c2c93d2532d type=CONTAINER_STARTED_EVENT
Mar 2 13:05:49.094309 containerd[1577]: time="2026-03-02T13:05:49.093357796Z" level=warning msg="container event discarded" container=5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155 type=CONTAINER_CREATED_EVENT
Mar 2 13:05:49.129533 containerd[1577]: time="2026-03-02T13:05:49.126417471Z" level=warning msg="container event discarded" container=c9eca24daf5731256c4bd816108578ba3d9cb6fa49fdf6bdcbc6cdaab4b3e225 type=CONTAINER_CREATED_EVENT
Mar 2 13:05:49.273594 containerd[1577]: time="2026-03-02T13:05:49.272448913Z" level=warning msg="container event discarded" container=e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e type=CONTAINER_CREATED_EVENT
Mar 2 13:05:50.813441 containerd[1577]: time="2026-03-02T13:05:50.811597077Z" level=warning msg="container event discarded" container=c9eca24daf5731256c4bd816108578ba3d9cb6fa49fdf6bdcbc6cdaab4b3e225 type=CONTAINER_STARTED_EVENT
Mar 2 13:05:50.930474 containerd[1577]: time="2026-03-02T13:05:50.918628580Z" level=warning msg="container event discarded" container=5df4573ece7877ae91d610c4063ce649a9f3c5fb99acf2f2669e7945c3be9155 type=CONTAINER_STARTED_EVENT
Mar 2 13:05:50.930474 containerd[1577]: time="2026-03-02T13:05:50.918735650Z" level=warning msg="container event discarded" container=e363ca56472e3ca5282cb8231c52996c9ef87c8865c499c4376a3b4941fedc8e type=CONTAINER_STARTED_EVENT
Mar 2 13:05:51.961419 systemd[1]: Started sshd@30-10.0.0.57:22-10.0.0.1:50456.service - OpenSSH per-connection server daemon (10.0.0.1:50456).
Mar 2 13:05:52.095831 sshd[4847]: Accepted publickey for core from 10.0.0.1 port 50456 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:52.100107 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:52.116671 systemd-logind[1559]: New session 31 of user core.
Mar 2 13:05:52.144672 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 2 13:05:52.348269 sshd[4850]: Connection closed by 10.0.0.1 port 50456
Mar 2 13:05:52.349965 sshd-session[4847]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:52.365603 systemd[1]: sshd@30-10.0.0.57:22-10.0.0.1:50456.service: Deactivated successfully.
Mar 2 13:05:52.369254 systemd[1]: session-31.scope: Deactivated successfully.
Mar 2 13:05:52.376534 systemd-logind[1559]: Session 31 logged out. Waiting for processes to exit.
Mar 2 13:05:52.383362 systemd-logind[1559]: Removed session 31.
Mar 2 13:05:57.413159 systemd[1]: Started sshd@31-10.0.0.57:22-10.0.0.1:50462.service - OpenSSH per-connection server daemon (10.0.0.1:50462).
Mar 2 13:05:57.635725 sshd[4866]: Accepted publickey for core from 10.0.0.1 port 50462 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:05:57.648781 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:05:57.688082 systemd-logind[1559]: New session 32 of user core.
Mar 2 13:05:57.707294 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 2 13:05:58.089687 sshd[4869]: Connection closed by 10.0.0.1 port 50462
Mar 2 13:05:58.090974 sshd-session[4866]: pam_unix(sshd:session): session closed for user core
Mar 2 13:05:58.101815 systemd[1]: sshd@31-10.0.0.57:22-10.0.0.1:50462.service: Deactivated successfully.
Mar 2 13:05:58.110642 systemd[1]: session-32.scope: Deactivated successfully.
Mar 2 13:05:58.114658 systemd-logind[1559]: Session 32 logged out. Waiting for processes to exit.
Mar 2 13:05:58.118438 systemd-logind[1559]: Removed session 32.
Mar 2 13:06:03.122745 systemd[1]: Started sshd@32-10.0.0.57:22-10.0.0.1:44204.service - OpenSSH per-connection server daemon (10.0.0.1:44204).
Mar 2 13:06:03.245314 sshd[4882]: Accepted publickey for core from 10.0.0.1 port 44204 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:06:03.248847 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:03.271140 systemd-logind[1559]: New session 33 of user core.
Mar 2 13:06:03.277502 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 2 13:06:03.519193 sshd[4885]: Connection closed by 10.0.0.1 port 44204
Mar 2 13:06:03.520136 sshd-session[4882]: pam_unix(sshd:session): session closed for user core
Mar 2 13:06:03.548582 systemd[1]: sshd@32-10.0.0.57:22-10.0.0.1:44204.service: Deactivated successfully.
Mar 2 13:06:03.553493 systemd[1]: session-33.scope: Deactivated successfully.
Mar 2 13:06:03.562542 systemd-logind[1559]: Session 33 logged out. Waiting for processes to exit.
Mar 2 13:06:03.578704 systemd[1]: Started sshd@33-10.0.0.57:22-10.0.0.1:44212.service - OpenSSH per-connection server daemon (10.0.0.1:44212).
Mar 2 13:06:03.586125 systemd-logind[1559]: Removed session 33.
Mar 2 13:06:03.702336 sshd[4899]: Accepted publickey for core from 10.0.0.1 port 44212 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:06:03.705845 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:03.737138 systemd-logind[1559]: New session 34 of user core.
Mar 2 13:06:03.755928 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 2 13:06:05.432971 containerd[1577]: time="2026-03-02T13:06:05.432851645Z" level=info msg="StopContainer for \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\" with timeout 30 (s)"
Mar 2 13:06:05.439809 containerd[1577]: time="2026-03-02T13:06:05.439611850Z" level=info msg="Stop container \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\" with signal terminated"
Mar 2 13:06:05.537877 systemd[1]: cri-containerd-ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4.scope: Deactivated successfully.
Mar 2 13:06:05.538619 systemd[1]: cri-containerd-ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4.scope: Consumed 1.720s CPU time, 29.6M memory peak, 4K written to disk.
Mar 2 13:06:05.543903 containerd[1577]: time="2026-03-02T13:06:05.543643022Z" level=info msg="received container exit event container_id:\"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\" id:\"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\" pid:4361 exited_at:{seconds:1772456765 nanos:542679024}"
Mar 2 13:06:05.579401 containerd[1577]: time="2026-03-02T13:06:05.579310128Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:06:05.588269 containerd[1577]: time="2026-03-02T13:06:05.587829133Z" level=info msg="StopContainer for \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\" with timeout 2 (s)"
Mar 2 13:06:05.589186 containerd[1577]: time="2026-03-02T13:06:05.589158079Z" level=info msg="Stop container \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\" with signal terminated"
Mar 2 13:06:05.619698 systemd-networkd[1439]: lxc_health: Link DOWN
Mar 2 13:06:05.620326 systemd-networkd[1439]: lxc_health: Lost carrier
Mar 2 13:06:05.630809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4-rootfs.mount: Deactivated successfully.
Mar 2 13:06:05.647747 systemd[1]: cri-containerd-e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee.scope: Deactivated successfully.
Mar 2 13:06:05.648454 systemd[1]: cri-containerd-e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee.scope: Consumed 23.977s CPU time, 140.5M memory peak, 248K read from disk, 13.3M written to disk.
Mar 2 13:06:05.658924 containerd[1577]: time="2026-03-02T13:06:05.658613933Z" level=info msg="received container exit event container_id:\"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\" id:\"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\" pid:3453 exited_at:{seconds:1772456765 nanos:654903887}"
Mar 2 13:06:05.672126 containerd[1577]: time="2026-03-02T13:06:05.671923727Z" level=info msg="StopContainer for \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\" returns successfully"
Mar 2 13:06:05.678885 containerd[1577]: time="2026-03-02T13:06:05.678755526Z" level=info msg="StopPodSandbox for \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\""
Mar 2 13:06:05.690332 containerd[1577]: time="2026-03-02T13:06:05.690237353Z" level=info msg="Container to stop \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:06:05.690332 containerd[1577]: time="2026-03-02T13:06:05.690318475Z" level=info msg="Container to stop \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:06:05.708891 systemd[1]: cri-containerd-e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b.scope: Deactivated successfully.
Mar 2 13:06:05.716768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee-rootfs.mount: Deactivated successfully.
Mar 2 13:06:05.719942 containerd[1577]: time="2026-03-02T13:06:05.719780368Z" level=info msg="received sandbox exit event container_id:\"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" id:\"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" exit_status:137 exited_at:{seconds:1772456765 nanos:718554897}" monitor_name=podsandbox
Mar 2 13:06:05.747760 containerd[1577]: time="2026-03-02T13:06:05.747681692Z" level=info msg="StopContainer for \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\" returns successfully"
Mar 2 13:06:05.749111 containerd[1577]: time="2026-03-02T13:06:05.748925747Z" level=info msg="StopPodSandbox for \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\""
Mar 2 13:06:05.749200 containerd[1577]: time="2026-03-02T13:06:05.749132863Z" level=info msg="Container to stop \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:06:05.749200 containerd[1577]: time="2026-03-02T13:06:05.749160013Z" level=info msg="Container to stop \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:06:05.749200 containerd[1577]: time="2026-03-02T13:06:05.749175272Z" level=info msg="Container to stop \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:06:05.749200 containerd[1577]: time="2026-03-02T13:06:05.749189108Z" level=info msg="Container to stop \"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:06:05.749328 containerd[1577]: time="2026-03-02T13:06:05.749201541Z" level=info msg="Container to stop \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:06:05.765491 systemd[1]: cri-containerd-63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4.scope: Deactivated successfully.
Mar 2 13:06:05.766574 containerd[1577]: time="2026-03-02T13:06:05.766534784Z" level=info msg="received sandbox exit event container_id:\"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" id:\"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" exit_status:137 exited_at:{seconds:1772456765 nanos:765927181}" monitor_name=podsandbox
Mar 2 13:06:05.774978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b-rootfs.mount: Deactivated successfully.
Mar 2 13:06:05.787418 containerd[1577]: time="2026-03-02T13:06:05.787180921Z" level=info msg="shim disconnected" id=e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b namespace=k8s.io
Mar 2 13:06:05.787418 containerd[1577]: time="2026-03-02T13:06:05.787276539Z" level=warning msg="cleaning up after shim disconnected" id=e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b namespace=k8s.io
Mar 2 13:06:05.787418 containerd[1577]: time="2026-03-02T13:06:05.787291998Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:06:05.825421 containerd[1577]: time="2026-03-02T13:06:05.824539440Z" level=info msg="TearDown network for sandbox \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" successfully"
Mar 2 13:06:05.825421 containerd[1577]: time="2026-03-02T13:06:05.824619319Z" level=info msg="StopPodSandbox for \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" returns successfully"
Mar 2 13:06:05.828122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4-rootfs.mount: Deactivated successfully.
Mar 2 13:06:05.828322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b-shm.mount: Deactivated successfully.
Mar 2 13:06:05.833547 containerd[1577]: time="2026-03-02T13:06:05.833463573Z" level=info msg="received sandbox container exit event sandbox_id:\"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" exit_status:137 exited_at:{seconds:1772456765 nanos:718554897}" monitor_name=criService
Mar 2 13:06:05.839463 containerd[1577]: time="2026-03-02T13:06:05.839404204Z" level=info msg="shim disconnected" id=63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4 namespace=k8s.io
Mar 2 13:06:05.839463 containerd[1577]: time="2026-03-02T13:06:05.839447645Z" level=warning msg="cleaning up after shim disconnected" id=63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4 namespace=k8s.io
Mar 2 13:06:05.839610 containerd[1577]: time="2026-03-02T13:06:05.839462583Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:06:05.869712 containerd[1577]: time="2026-03-02T13:06:05.869582428Z" level=info msg="received sandbox container exit event sandbox_id:\"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" exit_status:137 exited_at:{seconds:1772456765 nanos:765927181}" monitor_name=criService
Mar 2 13:06:05.870672 containerd[1577]: time="2026-03-02T13:06:05.870515357Z" level=info msg="TearDown network for sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" successfully"
Mar 2 13:06:05.870672 containerd[1577]: time="2026-03-02T13:06:05.870560962Z" level=info msg="StopPodSandbox for \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" returns successfully"
Mar 2 13:06:05.974071 kubelet[2824]: I0302 13:06:05.973688 2824 scope.go:122] "RemoveContainer" containerID="e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee"
Mar 2 13:06:05.982436 containerd[1577]: time="2026-03-02T13:06:05.981955024Z" level=info msg="RemoveContainer for \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\""
Mar 2 13:06:06.004329 kubelet[2824]: I0302 13:06:06.003940 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-xtables-lock\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.004329 kubelet[2824]: I0302 13:06:06.004172 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/01be2fb8-8c27-4bff-8654-2186ba08db93-hubble-tls\" (UniqueName: \"kubernetes.io/projected/01be2fb8-8c27-4bff-8654-2186ba08db93-hubble-tls\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.004329 kubelet[2824]: I0302 13:06:06.004205 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-host-proc-sys-net\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.004329 kubelet[2824]: I0302 13:06:06.004236 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-bpf-maps\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.004329 kubelet[2824]: I0302 13:06:06.004264 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cni-path\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cni-path\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.004842 kubelet[2824]: I0302 13:06:06.004296 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-etc-cni-netd\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.004842 kubelet[2824]: I0302 13:06:06.004328 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/01be2fb8-8c27-4bff-8654-2186ba08db93-kube-api-access-sn7s6\" (UniqueName: \"kubernetes.io/projected/01be2fb8-8c27-4bff-8654-2186ba08db93-kube-api-access-sn7s6\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.004842 kubelet[2824]: I0302 13:06:06.004355 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/daeafd74-e1e4-481a-801a-04856244d09d-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/daeafd74-e1e4-481a-801a-04856244d09d-cilium-config-path\") pod \"daeafd74-e1e4-481a-801a-04856244d09d\" (UID: \"daeafd74-e1e4-481a-801a-04856244d09d\") "
Mar 2 13:06:06.004842 kubelet[2824]: I0302 13:06:06.004384 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-run\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-run\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.004842 kubelet[2824]: I0302 13:06:06.004414 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-config-path\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.005161 kubelet[2824]: I0302 13:06:06.004444 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/daeafd74-e1e4-481a-801a-04856244d09d-kube-api-access-hpmfw\" (UniqueName: \"kubernetes.io/projected/daeafd74-e1e4-481a-801a-04856244d09d-kube-api-access-hpmfw\") pod \"daeafd74-e1e4-481a-801a-04856244d09d\" (UID: \"daeafd74-e1e4-481a-801a-04856244d09d\") "
Mar 2 13:06:06.005161 kubelet[2824]: I0302 13:06:06.004470 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-cgroup\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.005161 kubelet[2824]: I0302 13:06:06.004494 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-hostproc\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-hostproc\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.005161 kubelet[2824]: I0302 13:06:06.004523 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/01be2fb8-8c27-4bff-8654-2186ba08db93-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01be2fb8-8c27-4bff-8654-2186ba08db93-clustermesh-secrets\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") "
Mar 2 13:06:06.005161 kubelet[2824]: I0302 13:06:06.004545 2824
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-lib-modules\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-lib-modules\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " Mar 2 13:06:06.005391 kubelet[2824]: I0302 13:06:06.004569 2824 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-host-proc-sys-kernel\") pod \"01be2fb8-8c27-4bff-8654-2186ba08db93\" (UID: \"01be2fb8-8c27-4bff-8654-2186ba08db93\") " Mar 2 13:06:06.005391 kubelet[2824]: I0302 13:06:06.004825 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-host-proc-sys-kernel" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:06:06.005391 kubelet[2824]: I0302 13:06:06.004916 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-run" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:06:06.005391 kubelet[2824]: I0302 13:06:06.005131 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-etc-cni-netd" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:06:06.005391 kubelet[2824]: I0302 13:06:06.005178 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cni-path" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:06:06.007569 kubelet[2824]: I0302 13:06:06.006573 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-bpf-maps" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:06:06.007569 kubelet[2824]: I0302 13:06:06.006754 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-host-proc-sys-net" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:06:06.013139 containerd[1577]: time="2026-03-02T13:06:06.012874615Z" level=info msg="RemoveContainer for \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\" returns successfully" Mar 2 13:06:06.013456 kubelet[2824]: I0302 13:06:06.013242 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-xtables-lock" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:06:06.016275 kubelet[2824]: I0302 13:06:06.016159 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-cgroup" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:06:06.018066 kubelet[2824]: I0302 13:06:06.017953 2824 scope.go:122] "RemoveContainer" containerID="a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813" Mar 2 13:06:06.018499 kubelet[2824]: I0302 13:06:06.018369 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-hostproc" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:06:06.022713 kubelet[2824]: I0302 13:06:06.022652 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-lib-modules" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:06:06.023138 kubelet[2824]: I0302 13:06:06.023111 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01be2fb8-8c27-4bff-8654-2186ba08db93-hubble-tls" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:06:06.024678 kubelet[2824]: I0302 13:06:06.024617 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01be2fb8-8c27-4bff-8654-2186ba08db93-kube-api-access-sn7s6" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "kube-api-access-sn7s6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:06:06.025684 kubelet[2824]: I0302 13:06:06.025494 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-config-path" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:06:06.028227 containerd[1577]: time="2026-03-02T13:06:06.027751779Z" level=info msg="RemoveContainer for \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\"" Mar 2 13:06:06.028460 kubelet[2824]: I0302 13:06:06.028427 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01be2fb8-8c27-4bff-8654-2186ba08db93-clustermesh-secrets" pod "01be2fb8-8c27-4bff-8654-2186ba08db93" (UID: "01be2fb8-8c27-4bff-8654-2186ba08db93"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:06:06.030734 kubelet[2824]: I0302 13:06:06.030544 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daeafd74-e1e4-481a-801a-04856244d09d-kube-api-access-hpmfw" pod "daeafd74-e1e4-481a-801a-04856244d09d" (UID: "daeafd74-e1e4-481a-801a-04856244d09d"). InnerVolumeSpecName "kube-api-access-hpmfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:06:06.032593 kubelet[2824]: I0302 13:06:06.032321 2824 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daeafd74-e1e4-481a-801a-04856244d09d-cilium-config-path" pod "daeafd74-e1e4-481a-801a-04856244d09d" (UID: "daeafd74-e1e4-481a-801a-04856244d09d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:06:06.045323 containerd[1577]: time="2026-03-02T13:06:06.045114099Z" level=info msg="RemoveContainer for \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\" returns successfully" Mar 2 13:06:06.046175 kubelet[2824]: I0302 13:06:06.045816 2824 scope.go:122] "RemoveContainer" containerID="48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3" Mar 2 13:06:06.057802 containerd[1577]: time="2026-03-02T13:06:06.057620894Z" level=info msg="RemoveContainer for \"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\"" Mar 2 13:06:06.069104 containerd[1577]: time="2026-03-02T13:06:06.068727765Z" level=info msg="RemoveContainer for \"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\" returns successfully" Mar 2 13:06:06.069308 kubelet[2824]: I0302 13:06:06.069233 2824 scope.go:122] "RemoveContainer" containerID="cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b" Mar 2 13:06:06.074221 containerd[1577]: time="2026-03-02T13:06:06.073671633Z" level=info msg="RemoveContainer for \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\"" Mar 2 13:06:06.083481 containerd[1577]: time="2026-03-02T13:06:06.083378812Z" level=info msg="RemoveContainer for \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\" returns successfully" Mar 2 13:06:06.084346 kubelet[2824]: I0302 13:06:06.084256 2824 scope.go:122] "RemoveContainer" containerID="d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01" Mar 2 13:06:06.088217 
containerd[1577]: time="2026-03-02T13:06:06.087878769Z" level=info msg="RemoveContainer for \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\"" Mar 2 13:06:06.095796 containerd[1577]: time="2026-03-02T13:06:06.095559968Z" level=info msg="RemoveContainer for \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\" returns successfully" Mar 2 13:06:06.096528 kubelet[2824]: I0302 13:06:06.096457 2824 scope.go:122] "RemoveContainer" containerID="e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee" Mar 2 13:06:06.097347 containerd[1577]: time="2026-03-02T13:06:06.097184084Z" level=error msg="ContainerStatus for \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\": not found" Mar 2 13:06:06.097704 kubelet[2824]: E0302 13:06:06.097612 2824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\": not found" containerID="e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee" Mar 2 13:06:06.097969 kubelet[2824]: I0302 13:06:06.097741 2824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee"} err="failed to get container status \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1df314ffa6c4b7bfc6947872283ec7ac5a07b593d4e2c2d0368c0df4b7de8ee\": not found" Mar 2 13:06:06.097969 kubelet[2824]: I0302 13:06:06.097857 2824 scope.go:122] "RemoveContainer" containerID="a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813" Mar 2 13:06:06.098398 containerd[1577]: 
time="2026-03-02T13:06:06.098308684Z" level=error msg="ContainerStatus for \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\": not found" Mar 2 13:06:06.098795 kubelet[2824]: E0302 13:06:06.098487 2824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\": not found" containerID="a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813" Mar 2 13:06:06.098795 kubelet[2824]: I0302 13:06:06.098550 2824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813"} err="failed to get container status \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8efe7defcd51bdf2ecef0dbc175f6a40c065b0193486dceb1a6e172ef31e813\": not found" Mar 2 13:06:06.098795 kubelet[2824]: I0302 13:06:06.098571 2824 scope.go:122] "RemoveContainer" containerID="48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3" Mar 2 13:06:06.099362 kubelet[2824]: E0302 13:06:06.099234 2824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\": not found" containerID="48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3" Mar 2 13:06:06.099362 kubelet[2824]: I0302 13:06:06.099258 2824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3"} err="failed to get container status 
\"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\": rpc error: code = NotFound desc = an error occurred when try to find container \"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\": not found" Mar 2 13:06:06.099362 kubelet[2824]: I0302 13:06:06.099275 2824 scope.go:122] "RemoveContainer" containerID="cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b" Mar 2 13:06:06.099759 containerd[1577]: time="2026-03-02T13:06:06.098955059Z" level=error msg="ContainerStatus for \"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48e359147c412075c7ca88ca8a6e5067d84dc5759d05ec28382cbc10ef9c1ca3\": not found" Mar 2 13:06:06.099759 containerd[1577]: time="2026-03-02T13:06:06.099455062Z" level=error msg="ContainerStatus for \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\": not found" Mar 2 13:06:06.099849 kubelet[2824]: E0302 13:06:06.099779 2824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\": not found" containerID="cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b" Mar 2 13:06:06.099849 kubelet[2824]: I0302 13:06:06.099805 2824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b"} err="failed to get container status \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb5490c113d372cbd4f4233cc44c687b81645bdb702d60eb9b7f3443e45f786b\": not found" Mar 2 
13:06:06.099849 kubelet[2824]: I0302 13:06:06.099823 2824 scope.go:122] "RemoveContainer" containerID="d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01" Mar 2 13:06:06.100478 containerd[1577]: time="2026-03-02T13:06:06.100086951Z" level=error msg="ContainerStatus for \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\": not found" Mar 2 13:06:06.100618 kubelet[2824]: E0302 13:06:06.100397 2824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\": not found" containerID="d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01" Mar 2 13:06:06.100618 kubelet[2824]: I0302 13:06:06.100423 2824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01"} err="failed to get container status \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3df7dca5640ab6fe981dced847e52dbc7ad7ef865db8ffb0d93b1dac722ef01\": not found" Mar 2 13:06:06.100618 kubelet[2824]: I0302 13:06:06.100476 2824 scope.go:122] "RemoveContainer" containerID="ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4" Mar 2 13:06:06.104375 containerd[1577]: time="2026-03-02T13:06:06.103180151Z" level=info msg="RemoveContainer for \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\"" Mar 2 13:06:06.105108 kubelet[2824]: I0302 13:06:06.105080 2824 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" Mar 2 13:06:06.105761 kubelet[2824]: I0302 13:06:06.105389 2824 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hpmfw\" (UniqueName: \"kubernetes.io/projected/daeafd74-e1e4-481a-801a-04856244d09d-kube-api-access-hpmfw\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.105761 kubelet[2824]: I0302 13:06:06.105413 2824 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.105761 kubelet[2824]: I0302 13:06:06.105427 2824 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.105761 kubelet[2824]: I0302 13:06:06.105442 2824 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01be2fb8-8c27-4bff-8654-2186ba08db93-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.105761 kubelet[2824]: I0302 13:06:06.105456 2824 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.105761 kubelet[2824]: I0302 13:06:06.105469 2824 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.105761 kubelet[2824]: I0302 13:06:06.105483 2824 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.105761 kubelet[2824]: I0302 13:06:06.105495 2824 
reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01be2fb8-8c27-4bff-8654-2186ba08db93-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.106289 kubelet[2824]: I0302 13:06:06.105509 2824 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.106289 kubelet[2824]: I0302 13:06:06.105523 2824 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.106289 kubelet[2824]: I0302 13:06:06.105536 2824 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.106289 kubelet[2824]: I0302 13:06:06.105550 2824 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.106289 kubelet[2824]: I0302 13:06:06.105563 2824 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sn7s6\" (UniqueName: \"kubernetes.io/projected/01be2fb8-8c27-4bff-8654-2186ba08db93-kube-api-access-sn7s6\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.106289 kubelet[2824]: I0302 13:06:06.105577 2824 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/daeafd74-e1e4-481a-801a-04856244d09d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.106289 kubelet[2824]: I0302 13:06:06.105592 2824 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/01be2fb8-8c27-4bff-8654-2186ba08db93-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 2 13:06:06.112867 containerd[1577]: time="2026-03-02T13:06:06.112611445Z" level=info msg="RemoveContainer for \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\" returns successfully" Mar 2 13:06:06.113399 kubelet[2824]: I0302 13:06:06.113307 2824 scope.go:122] "RemoveContainer" containerID="193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63" Mar 2 13:06:06.116881 containerd[1577]: time="2026-03-02T13:06:06.116792611Z" level=info msg="RemoveContainer for \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\"" Mar 2 13:06:06.131504 containerd[1577]: time="2026-03-02T13:06:06.131439913Z" level=info msg="RemoveContainer for \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\" returns successfully" Mar 2 13:06:06.132246 kubelet[2824]: I0302 13:06:06.132203 2824 scope.go:122] "RemoveContainer" containerID="ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4" Mar 2 13:06:06.133852 containerd[1577]: time="2026-03-02T13:06:06.133746402Z" level=error msg="ContainerStatus for \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\": not found" Mar 2 13:06:06.134650 kubelet[2824]: E0302 13:06:06.134509 2824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\": not found" containerID="ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4" Mar 2 13:06:06.134650 kubelet[2824]: I0302 13:06:06.134594 2824 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4"} err="failed to get container status \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef57942591b9720061086f7107f505c3a78f6c8c37f229b53cf24a3899c71db4\": not found" Mar 2 13:06:06.134650 kubelet[2824]: I0302 13:06:06.134628 2824 scope.go:122] "RemoveContainer" containerID="193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63" Mar 2 13:06:06.136066 containerd[1577]: time="2026-03-02T13:06:06.135906320Z" level=error msg="ContainerStatus for \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\": not found" Mar 2 13:06:06.136870 kubelet[2824]: E0302 13:06:06.136783 2824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\": not found" containerID="193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63" Mar 2 13:06:06.136870 kubelet[2824]: I0302 13:06:06.136834 2824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63"} err="failed to get container status \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\": rpc error: code = NotFound desc = an error occurred when try to find container \"193fec7a49ca6e50f8a4562a7821eb026dc586cfc3557a296a98d566c38b4c63\": not found" Mar 2 13:06:06.305971 systemd[1]: Removed slice kubepods-burstable-pod01be2fb8_8c27_4bff_8654_2186ba08db93.slice - libcontainer container kubepods-burstable-pod01be2fb8_8c27_4bff_8654_2186ba08db93.slice. 
Mar 2 13:06:06.306356 systemd[1]: kubepods-burstable-pod01be2fb8_8c27_4bff_8654_2186ba08db93.slice: Consumed 24.299s CPU time, 140.8M memory peak, 344K read from disk, 15.6M written to disk. Mar 2 13:06:06.346864 systemd[1]: Removed slice kubepods-besteffort-poddaeafd74_e1e4_481a_801a_04856244d09d.slice - libcontainer container kubepods-besteffort-poddaeafd74_e1e4_481a_801a_04856244d09d.slice. Mar 2 13:06:06.347350 systemd[1]: kubepods-besteffort-poddaeafd74_e1e4_481a_801a_04856244d09d.slice: Consumed 3.470s CPU time, 30M memory peak, 8K written to disk. Mar 2 13:06:06.478840 kubelet[2824]: I0302 13:06:06.478597 2824 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="01be2fb8-8c27-4bff-8654-2186ba08db93" path="/var/lib/kubelet/pods/01be2fb8-8c27-4bff-8654-2186ba08db93/volumes" Mar 2 13:06:06.480679 kubelet[2824]: I0302 13:06:06.480594 2824 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="daeafd74-e1e4-481a-801a-04856244d09d" path="/var/lib/kubelet/pods/daeafd74-e1e4-481a-801a-04856244d09d/volumes" Mar 2 13:06:06.628569 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4-shm.mount: Deactivated successfully. Mar 2 13:06:06.628780 systemd[1]: var-lib-kubelet-pods-daeafd74\x2de1e4\x2d481a\x2d801a\x2d04856244d09d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhpmfw.mount: Deactivated successfully. Mar 2 13:06:06.628906 systemd[1]: var-lib-kubelet-pods-01be2fb8\x2d8c27\x2d4bff\x2d8654\x2d2186ba08db93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsn7s6.mount: Deactivated successfully. Mar 2 13:06:06.629177 systemd[1]: var-lib-kubelet-pods-01be2fb8\x2d8c27\x2d4bff\x2d8654\x2d2186ba08db93-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 2 13:06:06.629321 systemd[1]: var-lib-kubelet-pods-01be2fb8\x2d8c27\x2d4bff\x2d8654\x2d2186ba08db93-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 2 13:06:07.276444 sshd[4902]: Connection closed by 10.0.0.1 port 44212 Mar 2 13:06:07.278799 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:07.292528 systemd[1]: sshd@33-10.0.0.57:22-10.0.0.1:44212.service: Deactivated successfully. Mar 2 13:06:07.296591 systemd[1]: session-34.scope: Deactivated successfully. Mar 2 13:06:07.299564 systemd-logind[1559]: Session 34 logged out. Waiting for processes to exit. Mar 2 13:06:07.306611 systemd[1]: Started sshd@34-10.0.0.57:22-10.0.0.1:44218.service - OpenSSH per-connection server daemon (10.0.0.1:44218). Mar 2 13:06:07.313385 systemd-logind[1559]: Removed session 34. Mar 2 13:06:07.429787 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 44218 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w Mar 2 13:06:07.434951 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:06:07.454527 systemd-logind[1559]: New session 35 of user core. Mar 2 13:06:07.462535 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 2 13:06:08.307617 sshd[5051]: Connection closed by 10.0.0.1 port 44218 Mar 2 13:06:08.306459 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Mar 2 13:06:08.323328 systemd[1]: sshd@34-10.0.0.57:22-10.0.0.1:44218.service: Deactivated successfully. Mar 2 13:06:08.328212 systemd[1]: session-35.scope: Deactivated successfully. Mar 2 13:06:08.332619 systemd-logind[1559]: Session 35 logged out. Waiting for processes to exit. Mar 2 13:06:08.336786 systemd[1]: Started sshd@35-10.0.0.57:22-10.0.0.1:44236.service - OpenSSH per-connection server daemon (10.0.0.1:44236). Mar 2 13:06:08.339932 systemd-logind[1559]: Removed session 35. 
Mar 2 13:06:08.375370 systemd[1]: Created slice kubepods-burstable-podf2826cc0_0bae_49fd_a1ea_0a11255cf67c.slice - libcontainer container kubepods-burstable-podf2826cc0_0bae_49fd_a1ea_0a11255cf67c.slice.
Mar 2 13:06:08.444720 kubelet[2824]: E0302 13:06:08.444625 2824 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:06:08.450686 kubelet[2824]: I0302 13:06:08.450355 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-bpf-maps\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.450686 kubelet[2824]: I0302 13:06:08.450497 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-cilium-cgroup\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.450686 kubelet[2824]: I0302 13:06:08.450530 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-cilium-config-path\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.450686 kubelet[2824]: I0302 13:06:08.450552 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-host-proc-sys-net\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.450686 kubelet[2824]: I0302 13:06:08.450576 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-cilium-run\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.450686 kubelet[2824]: I0302 13:06:08.450595 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-xtables-lock\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.450981 kubelet[2824]: I0302 13:06:08.450623 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-lib-modules\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.450981 kubelet[2824]: I0302 13:06:08.450653 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-etc-cni-netd\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.450981 kubelet[2824]: I0302 13:06:08.450814 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-host-proc-sys-kernel\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.450981 kubelet[2824]: I0302 13:06:08.450890 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-cilium-ipsec-secrets\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.450981 kubelet[2824]: I0302 13:06:08.450911 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-hubble-tls\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.451693 kubelet[2824]: I0302 13:06:08.450937 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqgw\" (UniqueName: \"kubernetes.io/projected/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-kube-api-access-rzqgw\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.451693 kubelet[2824]: I0302 13:06:08.451116 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-cni-path\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.451693 kubelet[2824]: I0302 13:06:08.451166 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-clustermesh-secrets\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.451693 kubelet[2824]: I0302 13:06:08.451214 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2826cc0-0bae-49fd-a1ea-0a11255cf67c-hostproc\") pod \"cilium-8xdsp\" (UID: \"f2826cc0-0bae-49fd-a1ea-0a11255cf67c\") " pod="kube-system/cilium-8xdsp"
Mar 2 13:06:08.459750 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 44236 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:06:08.463435 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:08.481219 systemd-logind[1559]: New session 36 of user core.
Mar 2 13:06:08.487703 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 2 13:06:08.523865 sshd[5066]: Connection closed by 10.0.0.1 port 44236
Mar 2 13:06:08.529647 sshd-session[5063]: pam_unix(sshd:session): session closed for user core
Mar 2 13:06:08.545961 systemd[1]: sshd@35-10.0.0.57:22-10.0.0.1:44236.service: Deactivated successfully.
Mar 2 13:06:08.555707 systemd[1]: session-36.scope: Deactivated successfully.
Mar 2 13:06:08.558161 systemd-logind[1559]: Session 36 logged out. Waiting for processes to exit.
Mar 2 13:06:08.568482 systemd[1]: Started sshd@36-10.0.0.57:22-10.0.0.1:44240.service - OpenSSH per-connection server daemon (10.0.0.1:44240).
Mar 2 13:06:08.573806 systemd-logind[1559]: Removed session 36.
Mar 2 13:06:08.686803 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 44240 ssh2: RSA SHA256:czmw/9q6sscF1+XfBsErcOiXF1BWhk2ZRfVBwfsNH5w
Mar 2 13:06:08.688656 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:08.694057 containerd[1577]: time="2026-03-02T13:06:08.693819875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8xdsp,Uid:f2826cc0-0bae-49fd-a1ea-0a11255cf67c,Namespace:kube-system,Attempt:0,}"
Mar 2 13:06:08.700242 systemd-logind[1559]: New session 37 of user core.
Mar 2 13:06:08.722064 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 2 13:06:08.739926 containerd[1577]: time="2026-03-02T13:06:08.739387356Z" level=info msg="connecting to shim 3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735" address="unix:///run/containerd/s/ef3e0be34f8099f0083e9a3b13fc409fcc83b4f54031e94cd848e587b14b0cfb" namespace=k8s.io protocol=ttrpc version=3
Mar 2 13:06:08.808470 systemd[1]: Started cri-containerd-3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735.scope - libcontainer container 3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735.
Mar 2 13:06:08.878152 kubelet[2824]: I0302 13:06:08.877893 2824 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-02T13:06:08Z","lastTransitionTime":"2026-03-02T13:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 2 13:06:08.969654 containerd[1577]: time="2026-03-02T13:06:08.969557549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8xdsp,Uid:f2826cc0-0bae-49fd-a1ea-0a11255cf67c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\""
Mar 2 13:06:09.007939 containerd[1577]: time="2026-03-02T13:06:09.007795092Z" level=info msg="CreateContainer within sandbox \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 2 13:06:09.064887 containerd[1577]: time="2026-03-02T13:06:09.059648417Z" level=info msg="Container 351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:06:09.076797 containerd[1577]: time="2026-03-02T13:06:09.075177956Z" level=info msg="CreateContainer within sandbox \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d\""
Mar 2 13:06:09.078968 containerd[1577]: time="2026-03-02T13:06:09.078879962Z" level=info msg="StartContainer for \"351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d\""
Mar 2 13:06:09.084053 containerd[1577]: time="2026-03-02T13:06:09.083805507Z" level=info msg="connecting to shim 351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d" address="unix:///run/containerd/s/ef3e0be34f8099f0083e9a3b13fc409fcc83b4f54031e94cd848e587b14b0cfb" protocol=ttrpc version=3
Mar 2 13:06:09.173739 systemd[1]: Started cri-containerd-351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d.scope - libcontainer container 351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d.
Mar 2 13:06:09.401539 containerd[1577]: time="2026-03-02T13:06:09.400186344Z" level=info msg="StartContainer for \"351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d\" returns successfully"
Mar 2 13:06:09.433767 systemd[1]: cri-containerd-351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d.scope: Deactivated successfully.
Mar 2 13:06:09.451245 containerd[1577]: time="2026-03-02T13:06:09.450473196Z" level=info msg="received container exit event container_id:\"351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d\" id:\"351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d\" pid:5145 exited_at:{seconds:1772456769 nanos:449778861}"
Mar 2 13:06:09.607825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-351eadff84fcd55292145f0f057907f8126b9b00bfbaba42e31f50c9d385222d-rootfs.mount: Deactivated successfully.
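[Editor's note] The `exited_at:{seconds:… nanos:…}` fields in the container exit events are plain Unix epoch values. A quick sanity check (illustrative, not part of the log) confirms the value from the `351eadff…` exit event above matches the journal timestamp:

```python
from datetime import datetime, timezone

# exited_at from the mount-cgroup container's exit event:
# {seconds:1772456769 nanos:449778861}
exited = datetime.fromtimestamp(1772456769 + 449778861 / 1e9, tz=timezone.utc)
print(exited.strftime("%Y-%m-%dT%H:%M:%SZ"))
# → 2026-03-02T13:06:09Z
```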
Mar 2 13:06:10.041766 kubelet[2824]: E0302 13:06:10.040895 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:10.064384 containerd[1577]: time="2026-03-02T13:06:10.063743118Z" level=info msg="CreateContainer within sandbox \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 2 13:06:10.084708 containerd[1577]: time="2026-03-02T13:06:10.084666157Z" level=info msg="Container b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:06:10.098946 containerd[1577]: time="2026-03-02T13:06:10.098843003Z" level=info msg="CreateContainer within sandbox \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127\""
Mar 2 13:06:10.100893 containerd[1577]: time="2026-03-02T13:06:10.100838202Z" level=info msg="StartContainer for \"b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127\""
Mar 2 13:06:10.108388 containerd[1577]: time="2026-03-02T13:06:10.108333253Z" level=info msg="connecting to shim b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127" address="unix:///run/containerd/s/ef3e0be34f8099f0083e9a3b13fc409fcc83b4f54031e94cd848e587b14b0cfb" protocol=ttrpc version=3
Mar 2 13:06:10.184343 systemd[1]: Started cri-containerd-b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127.scope - libcontainer container b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127.
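[Editor's note] The recurring `dns.go:154` "Nameserver limits exceeded" warning is the kubelet noting that the node's resolv.conf lists more nameservers than the glibc resolver limit of three (MAXNS); it applies the first three and drops the rest, which is why the applied line shown is exactly `1.1.1.1 1.0.0.1 8.8.8.8`. A hypothetical host file that would produce this message (the fourth entry is illustrative; the actual dropped server is not shown in the log):

```
# /etc/resolv.conf (illustrative)
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9   # dropped: exceeds the 3-nameserver limit
```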
Mar 2 13:06:10.485693 containerd[1577]: time="2026-03-02T13:06:10.482389831Z" level=info msg="StartContainer for \"b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127\" returns successfully"
Mar 2 13:06:10.505311 systemd[1]: cri-containerd-b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127.scope: Deactivated successfully.
Mar 2 13:06:10.506807 containerd[1577]: time="2026-03-02T13:06:10.505701245Z" level=info msg="received container exit event container_id:\"b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127\" id:\"b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127\" pid:5190 exited_at:{seconds:1772456770 nanos:504426069}"
Mar 2 13:06:10.665279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9171160e750c1b51f7daf8b79e9af9e9b83d3fab0fd98e51ee5cc46f33a7127-rootfs.mount: Deactivated successfully.
Mar 2 13:06:11.103080 kubelet[2824]: E0302 13:06:11.102103 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:11.147973 containerd[1577]: time="2026-03-02T13:06:11.132533343Z" level=info msg="CreateContainer within sandbox \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 2 13:06:11.231594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959247401.mount: Deactivated successfully.
Mar 2 13:06:11.297394 containerd[1577]: time="2026-03-02T13:06:11.293670523Z" level=info msg="Container 0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:06:11.305564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753902242.mount: Deactivated successfully.
Mar 2 13:06:11.332225 containerd[1577]: time="2026-03-02T13:06:11.331830265Z" level=info msg="CreateContainer within sandbox \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35\""
Mar 2 13:06:11.356760 containerd[1577]: time="2026-03-02T13:06:11.356615765Z" level=info msg="StartContainer for \"0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35\""
Mar 2 13:06:11.367805 containerd[1577]: time="2026-03-02T13:06:11.367695516Z" level=info msg="connecting to shim 0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35" address="unix:///run/containerd/s/ef3e0be34f8099f0083e9a3b13fc409fcc83b4f54031e94cd848e587b14b0cfb" protocol=ttrpc version=3
Mar 2 13:06:11.473701 systemd[1]: Started cri-containerd-0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35.scope - libcontainer container 0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35.
Mar 2 13:06:11.794931 containerd[1577]: time="2026-03-02T13:06:11.793500356Z" level=info msg="StartContainer for \"0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35\" returns successfully"
Mar 2 13:06:11.840895 systemd[1]: cri-containerd-0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35.scope: Deactivated successfully.
Mar 2 13:06:11.851958 containerd[1577]: time="2026-03-02T13:06:11.851438210Z" level=info msg="received container exit event container_id:\"0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35\" id:\"0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35\" pid:5232 exited_at:{seconds:1772456771 nanos:849972120}"
Mar 2 13:06:11.985889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e21a20a9086fb99a4e2e4e5bb6b44b6f3d23863d7cb5475e8343c32a110be35-rootfs.mount: Deactivated successfully.
Mar 2 13:06:12.108414 kubelet[2824]: E0302 13:06:12.108191 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:12.153567 containerd[1577]: time="2026-03-02T13:06:12.150865490Z" level=info msg="CreateContainer within sandbox \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 13:06:12.309252 containerd[1577]: time="2026-03-02T13:06:12.307245885Z" level=info msg="Container 2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:06:12.348342 containerd[1577]: time="2026-03-02T13:06:12.347931588Z" level=info msg="CreateContainer within sandbox \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076\""
Mar 2 13:06:12.353929 containerd[1577]: time="2026-03-02T13:06:12.353891684Z" level=info msg="StartContainer for \"2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076\""
Mar 2 13:06:12.362690 containerd[1577]: time="2026-03-02T13:06:12.361982140Z" level=info msg="connecting to shim 2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076" address="unix:///run/containerd/s/ef3e0be34f8099f0083e9a3b13fc409fcc83b4f54031e94cd848e587b14b0cfb" protocol=ttrpc version=3
Mar 2 13:06:12.450768 systemd[1]: Started cri-containerd-2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076.scope - libcontainer container 2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076.
Mar 2 13:06:12.673304 systemd[1]: cri-containerd-2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076.scope: Deactivated successfully.
Mar 2 13:06:12.682290 containerd[1577]: time="2026-03-02T13:06:12.682212602Z" level=info msg="received container exit event container_id:\"2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076\" id:\"2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076\" pid:5272 exited_at:{seconds:1772456772 nanos:676542148}"
Mar 2 13:06:12.702329 containerd[1577]: time="2026-03-02T13:06:12.693883717Z" level=info msg="StartContainer for \"2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076\" returns successfully"
Mar 2 13:06:12.800692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2343ab10510e934e6c2df43c96c6a5b1d6f4c3cb7da663ac530624921c8bb076-rootfs.mount: Deactivated successfully.
Mar 2 13:06:13.169640 kubelet[2824]: E0302 13:06:13.169499 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:13.205241 containerd[1577]: time="2026-03-02T13:06:13.202513905Z" level=info msg="CreateContainer within sandbox \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 13:06:13.291718 containerd[1577]: time="2026-03-02T13:06:13.288314260Z" level=info msg="Container 04933995b0753218f3fda2a97997a11da842ccd5ede6abd96866c00ed5e8ab6a: CDI devices from CRI Config.CDIDevices: []"
Mar 2 13:06:13.361735 containerd[1577]: time="2026-03-02T13:06:13.361510593Z" level=info msg="CreateContainer within sandbox \"3aae960cc97e2024e939a20e435947f3b551f10a40d85efc1a185988019b6735\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04933995b0753218f3fda2a97997a11da842ccd5ede6abd96866c00ed5e8ab6a\""
Mar 2 13:06:13.368427 containerd[1577]: time="2026-03-02T13:06:13.367240415Z" level=info msg="StartContainer for \"04933995b0753218f3fda2a97997a11da842ccd5ede6abd96866c00ed5e8ab6a\""
Mar 2 13:06:13.372978 containerd[1577]: time="2026-03-02T13:06:13.372756622Z" level=info msg="connecting to shim 04933995b0753218f3fda2a97997a11da842ccd5ede6abd96866c00ed5e8ab6a" address="unix:///run/containerd/s/ef3e0be34f8099f0083e9a3b13fc409fcc83b4f54031e94cd848e587b14b0cfb" protocol=ttrpc version=3
Mar 2 13:06:13.454834 kubelet[2824]: E0302 13:06:13.454770 2824 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:06:13.456405 systemd[1]: Started cri-containerd-04933995b0753218f3fda2a97997a11da842ccd5ede6abd96866c00ed5e8ab6a.scope - libcontainer container 04933995b0753218f3fda2a97997a11da842ccd5ede6abd96866c00ed5e8ab6a.
Mar 2 13:06:13.805959 containerd[1577]: time="2026-03-02T13:06:13.801512812Z" level=info msg="StartContainer for \"04933995b0753218f3fda2a97997a11da842ccd5ede6abd96866c00ed5e8ab6a\" returns successfully"
Mar 2 13:06:15.537381 kubelet[2824]: E0302 13:06:15.536712 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:15.585525 kubelet[2824]: I0302 13:06:15.584319 2824 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-8xdsp" podStartSLOduration=7.584134698 podStartE2EDuration="7.584134698s" podCreationTimestamp="2026-03-02 13:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:06:15.584104564 +0000 UTC m=+303.027167108" watchObservedRunningTime="2026-03-02 13:06:15.584134698 +0000 UTC m=+303.027197221"
Mar 2 13:06:15.596287 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 2 13:06:16.688084 kubelet[2824]: E0302 13:06:16.687743 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:17.298979 containerd[1577]: time="2026-03-02T13:06:17.298767883Z" level=warning msg="container event discarded" container=592f962121e566ebfb073c662bc9b7fa2fdac8c3b5994dfaceba7f2ab66ca4be type=CONTAINER_CREATED_EVENT
Mar 2 13:06:17.298979 containerd[1577]: time="2026-03-02T13:06:17.298910439Z" level=warning msg="container event discarded" container=592f962121e566ebfb073c662bc9b7fa2fdac8c3b5994dfaceba7f2ab66ca4be type=CONTAINER_STARTED_EVENT
Mar 2 13:06:17.474437 kubelet[2824]: E0302 13:06:17.472864 2824 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-plfcs" podUID="75126270-5e34-4c1c-8d5d-41652830de15"
Mar 2 13:06:17.476487 kubelet[2824]: E0302 13:06:17.475842 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:17.743079 containerd[1577]: time="2026-03-02T13:06:17.742117537Z" level=warning msg="container event discarded" container=ce6d1655e490331854408efcfd0b73c74269187d8ac6b9ce195a29a39230ceaf type=CONTAINER_CREATED_EVENT
Mar 2 13:06:18.863088 containerd[1577]: time="2026-03-02T13:06:18.861372150Z" level=warning msg="container event discarded" container=ce6d1655e490331854408efcfd0b73c74269187d8ac6b9ce195a29a39230ceaf type=CONTAINER_STARTED_EVENT
Mar 2 13:06:19.476094 kubelet[2824]: E0302 13:06:19.475918 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:21.989479 systemd-networkd[1439]: lxc_health: Link UP
Mar 2 13:06:21.991893 systemd-networkd[1439]: lxc_health: Gained carrier
Mar 2 13:06:22.260325 containerd[1577]: time="2026-03-02T13:06:22.259968750Z" level=warning msg="container event discarded" container=63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4 type=CONTAINER_CREATED_EVENT
Mar 2 13:06:22.260325 containerd[1577]: time="2026-03-02T13:06:22.260124369Z" level=warning msg="container event discarded" container=63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4 type=CONTAINER_STARTED_EVENT
Mar 2 13:06:22.280946 containerd[1577]: time="2026-03-02T13:06:22.280815760Z" level=warning msg="container event discarded" container=e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b type=CONTAINER_CREATED_EVENT
Mar 2 13:06:22.280946 containerd[1577]: time="2026-03-02T13:06:22.280901210Z" level=warning msg="container event discarded" container=e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b type=CONTAINER_STARTED_EVENT
Mar 2 13:06:22.712337 kubelet[2824]: E0302 13:06:22.691234 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:23.818633 kubelet[2824]: E0302 13:06:23.817477 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:23.887832 systemd-networkd[1439]: lxc_health: Gained IPv6LL
Mar 2 13:06:24.442751 containerd[1577]: time="2026-03-02T13:06:24.442697257Z" level=info msg="StopPodSandbox for \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\""
Mar 2 13:06:24.444156 containerd[1577]: time="2026-03-02T13:06:24.444065480Z" level=info msg="TearDown network for sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" successfully"
Mar 2 13:06:24.444156 containerd[1577]: time="2026-03-02T13:06:24.444115663Z" level=info msg="StopPodSandbox for \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" returns successfully"
Mar 2 13:06:24.445063 containerd[1577]: time="2026-03-02T13:06:24.444980616Z" level=info msg="RemovePodSandbox for \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\""
Mar 2 13:06:24.445292 containerd[1577]: time="2026-03-02T13:06:24.445262552Z" level=info msg="Forcibly stopping sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\""
Mar 2 13:06:24.445563 containerd[1577]: time="2026-03-02T13:06:24.445536003Z" level=info msg="TearDown network for sandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" successfully"
Mar 2 13:06:24.448482 containerd[1577]: time="2026-03-02T13:06:24.448450920Z" level=info msg="Ensure that sandbox 63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4 in task-service has been cleanup successfully"
Mar 2 13:06:24.460098 containerd[1577]: time="2026-03-02T13:06:24.459115983Z" level=info msg="RemovePodSandbox \"63e66b85c42ccf66dcf87b9a2ec859931c58d8c92fdfe26b609b65def2a8f2b4\" returns successfully"
Mar 2 13:06:24.461086 containerd[1577]: time="2026-03-02T13:06:24.460950263Z" level=info msg="StopPodSandbox for \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\""
Mar 2 13:06:24.461475 containerd[1577]: time="2026-03-02T13:06:24.461445807Z" level=info msg="TearDown network for sandbox \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" successfully"
Mar 2 13:06:24.461755 containerd[1577]: time="2026-03-02T13:06:24.461732092Z" level=info msg="StopPodSandbox for \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" returns successfully"
Mar 2 13:06:24.465270 containerd[1577]: time="2026-03-02T13:06:24.462619847Z" level=info msg="RemovePodSandbox for \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\""
Mar 2 13:06:24.465270 containerd[1577]: time="2026-03-02T13:06:24.462653100Z" level=info msg="Forcibly stopping sandbox \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\""
Mar 2 13:06:24.465270 containerd[1577]: time="2026-03-02T13:06:24.462846851Z" level=info msg="TearDown network for sandbox \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" successfully"
Mar 2 13:06:24.465719 containerd[1577]: time="2026-03-02T13:06:24.465689693Z" level=info msg="Ensure that sandbox e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b in task-service has been cleanup successfully"
Mar 2 13:06:24.473079 containerd[1577]: time="2026-03-02T13:06:24.472519284Z" level=info msg="RemovePodSandbox \"e3a401d6c489c8df156a211dc66006f1554f5a7abce6a31e3cc2f1f1df1aff0b\" returns successfully"
Mar 2 13:06:24.766108 kubelet[2824]: E0302 13:06:24.764857 2824 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:06:32.870517 sshd[5090]: Connection closed by 10.0.0.1 port 44240
Mar 2 13:06:32.871874 sshd-session[5075]: pam_unix(sshd:session): session closed for user core
Mar 2 13:06:32.887202 systemd[1]: sshd@36-10.0.0.57:22-10.0.0.1:44240.service: Deactivated successfully.
Mar 2 13:06:32.904904 systemd[1]: session-37.scope: Deactivated successfully.
Mar 2 13:06:32.936671 systemd-logind[1559]: Session 37 logged out. Waiting for processes to exit.
Mar 2 13:06:32.944144 systemd-logind[1559]: Removed session 37.