Mar 6 02:37:20.097618 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:16:40 -00 2026
Mar 6 02:37:20.097641 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 02:37:20.097654 kernel: BIOS-provided physical RAM map:
Mar 6 02:37:20.097660 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 6 02:37:20.097666 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 6 02:37:20.097672 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 6 02:37:20.097679 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 6 02:37:20.097685 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 6 02:37:20.097691 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 6 02:37:20.097697 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 6 02:37:20.097703 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 6 02:37:20.097712 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 6 02:37:20.097718 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 6 02:37:20.097724 kernel: NX (Execute Disable) protection: active
Mar 6 02:37:20.097732 kernel: APIC: Static calls initialized
Mar 6 02:37:20.097738 kernel: SMBIOS 2.8 present.
Mar 6 02:37:20.097747 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 6 02:37:20.097754 kernel: DMI: Memory slots populated: 1/1
Mar 6 02:37:20.097760 kernel: Hypervisor detected: KVM
Mar 6 02:37:20.097766 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 6 02:37:20.097772 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 02:37:20.097779 kernel: kvm-clock: using sched offset of 10440231649 cycles
Mar 6 02:37:20.097786 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 02:37:20.097792 kernel: tsc: Detected 2445.426 MHz processor
Mar 6 02:37:20.097799 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 02:37:20.097806 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 02:37:20.097815 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 6 02:37:20.097822 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 6 02:37:20.097828 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 02:37:20.097835 kernel: Using GB pages for direct mapping
Mar 6 02:37:20.097841 kernel: ACPI: Early table checksum verification disabled
Mar 6 02:37:20.097848 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 6 02:37:20.097854 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:37:20.097861 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:37:20.097868 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:37:20.097877 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 6 02:37:20.097884 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:37:20.097890 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:37:20.097897 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:37:20.097903 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 02:37:20.097913 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 6 02:37:20.097923 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 6 02:37:20.097930 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 6 02:37:20.097936 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 6 02:37:20.097943 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 6 02:37:20.097950 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 6 02:37:20.097957 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 6 02:37:20.097963 kernel: No NUMA configuration found
Mar 6 02:37:20.097970 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 6 02:37:20.097980 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 6 02:37:20.097986 kernel: Zone ranges:
Mar 6 02:37:20.097993 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 02:37:20.098000 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 6 02:37:20.098007 kernel: Normal empty
Mar 6 02:37:20.098013 kernel: Device empty
Mar 6 02:37:20.098020 kernel: Movable zone start for each node
Mar 6 02:37:20.098027 kernel: Early memory node ranges
Mar 6 02:37:20.098034 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 6 02:37:20.098040 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 6 02:37:20.098050 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 6 02:37:20.098056 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 02:37:20.098063 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 6 02:37:20.098070 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 6 02:37:20.098077 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 6 02:37:20.098083 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 02:37:20.098090 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 6 02:37:20.098097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 6 02:37:20.098104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 02:37:20.098113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 02:37:20.098120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 02:37:20.098127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 02:37:20.098133 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 02:37:20.098140 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 6 02:37:20.098147 kernel: TSC deadline timer available
Mar 6 02:37:20.098154 kernel: CPU topo: Max. logical packages: 1
Mar 6 02:37:20.098160 kernel: CPU topo: Max. logical dies: 1
Mar 6 02:37:20.098167 kernel: CPU topo: Max. dies per package: 1
Mar 6 02:37:20.098212 kernel: CPU topo: Max. threads per core: 1
Mar 6 02:37:20.098219 kernel: CPU topo: Num. cores per package: 4
Mar 6 02:37:20.098226 kernel: CPU topo: Num. threads per package: 4
Mar 6 02:37:20.098232 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 6 02:37:20.098239 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 6 02:37:20.098246 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 6 02:37:20.098253 kernel: kvm-guest: setup PV sched yield
Mar 6 02:37:20.098259 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 6 02:37:20.098266 kernel: Booting paravirtualized kernel on KVM
Mar 6 02:37:20.098273 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 02:37:20.098283 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 6 02:37:20.098290 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 6 02:37:20.098297 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 6 02:37:20.098304 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 6 02:37:20.098310 kernel: kvm-guest: PV spinlocks enabled
Mar 6 02:37:20.098317 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 02:37:20.098325 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 02:37:20.098332 kernel: random: crng init done
Mar 6 02:37:20.098342 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 02:37:20.098349 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 02:37:20.098355 kernel: Fallback order for Node 0: 0
Mar 6 02:37:20.098362 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 6 02:37:20.098369 kernel: Policy zone: DMA32
Mar 6 02:37:20.098376 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 02:37:20.098383 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 6 02:37:20.098390 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 6 02:37:20.098396 kernel: ftrace: allocated 157 pages with 5 groups
Mar 6 02:37:20.098406 kernel: Dynamic Preempt: voluntary
Mar 6 02:37:20.098412 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 02:37:20.098420 kernel: rcu: RCU event tracing is enabled.
Mar 6 02:37:20.098427 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 6 02:37:20.098434 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 02:37:20.098441 kernel: Rude variant of Tasks RCU enabled.
Mar 6 02:37:20.098448 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 02:37:20.098455 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 02:37:20.098462 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 6 02:37:20.098471 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 02:37:20.098478 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 02:37:20.098485 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 02:37:20.098492 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 6 02:37:20.098500 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 02:37:20.098515 kernel: Console: colour VGA+ 80x25
Mar 6 02:37:20.098524 kernel: printk: legacy console [ttyS0] enabled
Mar 6 02:37:20.098567 kernel: ACPI: Core revision 20240827
Mar 6 02:37:20.098575 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 6 02:37:20.098582 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 02:37:20.098589 kernel: x2apic enabled
Mar 6 02:37:20.098597 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 02:37:20.098608 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 6 02:37:20.098615 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 6 02:37:20.098622 kernel: kvm-guest: setup PV IPIs
Mar 6 02:37:20.098629 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 6 02:37:20.098636 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 6 02:37:20.098647 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 6 02:37:20.098654 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 6 02:37:20.098661 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 6 02:37:20.098668 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 6 02:37:20.098675 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 02:37:20.098682 kernel: Spectre V2 : Mitigation: Retpolines
Mar 6 02:37:20.098690 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 02:37:20.098697 kernel: Speculative Store Bypass: Vulnerable
Mar 6 02:37:20.098704 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 6 02:37:20.098714 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 6 02:37:20.098722 kernel: active return thunk: srso_alias_return_thunk
Mar 6 02:37:20.098729 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 6 02:37:20.098736 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 6 02:37:20.098743 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 02:37:20.098750 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 02:37:20.098757 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 02:37:20.098764 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 02:37:20.098774 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 02:37:20.098781 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 6 02:37:20.098788 kernel: Freeing SMP alternatives memory: 32K
Mar 6 02:37:20.098795 kernel: pid_max: default: 32768 minimum: 301
Mar 6 02:37:20.098802 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 6 02:37:20.098809 kernel: landlock: Up and running.
Mar 6 02:37:20.098816 kernel: SELinux: Initializing.
Mar 6 02:37:20.098823 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 02:37:20.098830 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 02:37:20.098840 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 6 02:37:20.098847 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 6 02:37:20.098854 kernel: signal: max sigframe size: 1776
Mar 6 02:37:20.098861 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 02:37:20.098869 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 02:37:20.098876 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 6 02:37:20.098883 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 02:37:20.098890 kernel: smp: Bringing up secondary CPUs ...
Mar 6 02:37:20.098897 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 02:37:20.098906 kernel: .... node #0, CPUs: #1 #2 #3
Mar 6 02:37:20.098913 kernel: smp: Brought up 1 node, 4 CPUs
Mar 6 02:37:20.098920 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 6 02:37:20.098928 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 145096K reserved, 0K cma-reserved)
Mar 6 02:37:20.098935 kernel: devtmpfs: initialized
Mar 6 02:37:20.098942 kernel: x86/mm: Memory block size: 128MB
Mar 6 02:37:20.098949 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 02:37:20.098956 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 6 02:37:20.098963 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 02:37:20.098973 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 02:37:20.098980 kernel: audit: initializing netlink subsys (disabled)
Mar 6 02:37:20.098987 kernel: audit: type=2000 audit(1772764636.432:1): state=initialized audit_enabled=0 res=1
Mar 6 02:37:20.098994 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 02:37:20.099001 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 02:37:20.099009 kernel: cpuidle: using governor menu
Mar 6 02:37:20.099016 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 02:37:20.099023 kernel: dca service started, version 1.12.1
Mar 6 02:37:20.099030 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 6 02:37:20.099039 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 6 02:37:20.099046 kernel: PCI: Using configuration type 1 for base access
Mar 6 02:37:20.099054 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 02:37:20.099061 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 02:37:20.099068 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 02:37:20.099075 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 02:37:20.099082 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 02:37:20.099089 kernel: ACPI: Added _OSI(Module Device)
Mar 6 02:37:20.099096 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 02:37:20.099105 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 02:37:20.099113 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 02:37:20.099119 kernel: ACPI: Interpreter enabled
Mar 6 02:37:20.099127 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 6 02:37:20.099134 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 02:37:20.099141 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 02:37:20.099148 kernel: PCI: Using E820 reservations for host bridge windows
Mar 6 02:37:20.099155 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 6 02:37:20.099162 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 02:37:20.099523 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 02:37:20.099722 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 6 02:37:20.099896 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 6 02:37:20.099907 kernel: PCI host bridge to bus 0000:00
Mar 6 02:37:20.100131 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 02:37:20.100342 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 02:37:20.100580 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 02:37:20.100724 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 6 02:37:20.100854 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 6 02:37:20.100981 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 6 02:37:20.101116 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 02:37:20.101349 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 6 02:37:20.101792 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 6 02:37:20.101961 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 6 02:37:20.102137 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 6 02:37:20.102349 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 6 02:37:20.102494 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 6 02:37:20.102837 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 6 02:37:20.102985 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 6 02:37:20.103132 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 6 02:37:20.103305 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 6 02:37:20.103512 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 6 02:37:20.103702 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 6 02:37:20.103843 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 6 02:37:20.103982 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 6 02:37:20.104173 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 6 02:37:20.104379 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 6 02:37:20.104522 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 6 02:37:20.104735 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 6 02:37:20.104876 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 6 02:37:20.105082 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 6 02:37:20.105255 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 6 02:37:20.105462 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 6 02:37:20.105659 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 6 02:37:20.105800 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 6 02:37:20.105990 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 6 02:37:20.106133 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 6 02:37:20.106143 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 02:37:20.106151 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 02:37:20.106158 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 02:37:20.106170 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 02:37:20.106204 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 6 02:37:20.106212 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 6 02:37:20.106219 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 6 02:37:20.106227 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 6 02:37:20.106234 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 6 02:37:20.106241 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 6 02:37:20.106248 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 6 02:37:20.106255 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 6 02:37:20.106270 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 6 02:37:20.106282 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 6 02:37:20.106294 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 6 02:37:20.106306 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 6 02:37:20.106318 kernel: iommu: Default domain type: Translated
Mar 6 02:37:20.106330 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 02:37:20.106342 kernel: PCI: Using ACPI for IRQ routing
Mar 6 02:37:20.106354 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 02:37:20.106366 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 6 02:37:20.106381 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 6 02:37:20.106653 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 6 02:37:20.106849 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 6 02:37:20.106990 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 6 02:37:20.106999 kernel: vgaarb: loaded
Mar 6 02:37:20.107007 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 6 02:37:20.107014 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 6 02:37:20.107021 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 02:37:20.107033 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 02:37:20.107041 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 02:37:20.107048 kernel: pnp: PnP ACPI init
Mar 6 02:37:20.107303 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 6 02:37:20.107316 kernel: pnp: PnP ACPI: found 6 devices
Mar 6 02:37:20.107324 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 02:37:20.107331 kernel: NET: Registered PF_INET protocol family
Mar 6 02:37:20.107338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 02:37:20.107346 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 02:37:20.107357 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 02:37:20.107365 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 02:37:20.107372 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 02:37:20.107380 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 02:37:20.107387 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 02:37:20.107394 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 02:37:20.107401 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 02:37:20.107409 kernel: NET: Registered PF_XDP protocol family
Mar 6 02:37:20.107591 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 02:37:20.107727 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 02:37:20.107856 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 02:37:20.107983 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 6 02:37:20.108111 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 6 02:37:20.108272 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 6 02:37:20.108283 kernel: PCI: CLS 0 bytes, default 64
Mar 6 02:37:20.108291 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 6 02:37:20.108298 kernel: Initialise system trusted keyrings
Mar 6 02:37:20.108311 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 02:37:20.108318 kernel: Key type asymmetric registered
Mar 6 02:37:20.108325 kernel: Asymmetric key parser 'x509' registered
Mar 6 02:37:20.108332 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 6 02:37:20.108340 kernel: io scheduler mq-deadline registered
Mar 6 02:37:20.108347 kernel: io scheduler kyber registered
Mar 6 02:37:20.108354 kernel: io scheduler bfq registered
Mar 6 02:37:20.108361 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 02:37:20.108369 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 6 02:37:20.108379 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 6 02:37:20.108387 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 6 02:37:20.108394 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 02:37:20.108401 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 02:37:20.108408 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 02:37:20.108416 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 02:37:20.108423 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 02:37:20.108658 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 6 02:37:20.108677 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 6 02:37:20.108822 kernel: rtc_cmos 00:04: registered as rtc0
Mar 6 02:37:20.108957 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T02:37:19 UTC (1772764639)
Mar 6 02:37:20.109090 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 6 02:37:20.109100 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 6 02:37:20.109107 kernel: NET: Registered PF_INET6 protocol family
Mar 6 02:37:20.109114 kernel: Segment Routing with IPv6
Mar 6 02:37:20.109121 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 02:37:20.109129 kernel: NET: Registered PF_PACKET protocol family
Mar 6 02:37:20.109139 kernel: Key type dns_resolver registered
Mar 6 02:37:20.109147 kernel: IPI shorthand broadcast: enabled
Mar 6 02:37:20.109154 kernel: sched_clock: Marking stable (3403015932, 360328624)->(3912039277, -148694721)
Mar 6 02:37:20.109161 kernel: registered taskstats version 1
Mar 6 02:37:20.109169 kernel: Loading compiled-in X.509 certificates
Mar 6 02:37:20.109208 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 30893fe9fd219d26109af079e6493e1c8b1c00af'
Mar 6 02:37:20.109216 kernel: Demotion targets for Node 0: null
Mar 6 02:37:20.109224 kernel: Key type .fscrypt registered
Mar 6 02:37:20.109231 kernel: Key type fscrypt-provisioning registered
Mar 6 02:37:20.109242 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 02:37:20.109249 kernel: ima: Allocated hash algorithm: sha1
Mar 6 02:37:20.109256 kernel: ima: No architecture policies found
Mar 6 02:37:20.109263 kernel: clk: Disabling unused clocks
Mar 6 02:37:20.109270 kernel: Warning: unable to open an initial console.
Mar 6 02:37:20.109278 kernel: Freeing unused kernel image (initmem) memory: 46196K
Mar 6 02:37:20.109285 kernel: Write protecting the kernel read-only data: 40960k
Mar 6 02:37:20.109292 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 6 02:37:20.109302 kernel: Run /init as init process
Mar 6 02:37:20.109309 kernel: with arguments:
Mar 6 02:37:20.109317 kernel: /init
Mar 6 02:37:20.109324 kernel: with environment:
Mar 6 02:37:20.109331 kernel: HOME=/
Mar 6 02:37:20.109338 kernel: TERM=linux
Mar 6 02:37:20.109347 systemd[1]: Successfully made /usr/ read-only.
Mar 6 02:37:20.109356 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 02:37:20.109367 systemd[1]: Detected virtualization kvm.
Mar 6 02:37:20.109375 systemd[1]: Detected architecture x86-64.
Mar 6 02:37:20.109382 systemd[1]: Running in initrd.
Mar 6 02:37:20.109389 systemd[1]: No hostname configured, using default hostname.
Mar 6 02:37:20.109397 systemd[1]: Hostname set to .
Mar 6 02:37:20.109405 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 02:37:20.109413 systemd[1]: Queued start job for default target initrd.target.
Mar 6 02:37:20.109420 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:37:20.109441 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:37:20.109452 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 02:37:20.109460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 02:37:20.109468 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 02:37:20.109476 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 02:37:20.109488 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 02:37:20.109496 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 02:37:20.109504 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:37:20.109511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:37:20.109519 systemd[1]: Reached target paths.target - Path Units.
Mar 6 02:37:20.109527 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 02:37:20.109571 systemd[1]: Reached target swap.target - Swaps.
Mar 6 02:37:20.109579 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 02:37:20.109591 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 02:37:20.109599 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 02:37:20.109606 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 02:37:20.109614 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 6 02:37:20.109622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 02:37:20.109630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 02:37:20.109638 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 02:37:20.109646 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 02:37:20.109653 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 02:37:20.109664 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 02:37:20.109671 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 02:37:20.109680 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 6 02:37:20.109687 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 02:37:20.109695 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 02:37:20.109703 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 02:37:20.109711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:37:20.109719 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 02:37:20.109732 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 02:37:20.109743 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 02:37:20.109778 systemd-journald[203]: Collecting audit messages is disabled.
Mar 6 02:37:20.109797 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 02:37:20.109806 systemd-journald[203]: Journal started
Mar 6 02:37:20.109825 systemd-journald[203]: Runtime Journal (/run/log/journal/c3e90e90d7794b1492ac238eb2feefda) is 6M, max 48.3M, 42.2M free.
Mar 6 02:37:20.091165 systemd-modules-load[204]: Inserted module 'overlay'
Mar 6 02:37:20.239335 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 02:37:20.239362 kernel: Bridge firewalling registered
Mar 6 02:37:20.138829 systemd-modules-load[204]: Inserted module 'br_netfilter'
Mar 6 02:37:20.245879 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 02:37:20.251272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 02:37:20.260356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:37:20.269705 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 02:37:20.285019 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 02:37:20.294112 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 02:37:20.304888 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 02:37:20.312798 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 02:37:20.317782 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:37:20.325160 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 02:37:20.333328 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 02:37:20.333801 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 02:37:20.349704 systemd-tmpfiles[229]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 6 02:37:20.357294 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 02:37:20.359520 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 02:37:20.385620 dracut-cmdline[240]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bef16c10382b6f77f9493af2297475832ff2f09f1ada4155425ad9b32dd6e53
Mar 6 02:37:20.405818 systemd-resolved[245]: Positive Trust Anchors:
Mar 6 02:37:20.405848 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 02:37:20.405874 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 02:37:20.408456 systemd-resolved[245]: Defaulting to hostname 'linux'.
Mar 6 02:37:20.428112 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 02:37:20.431129 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 02:37:20.535647 kernel: SCSI subsystem initialized
Mar 6 02:37:20.545604 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 02:37:20.557619 kernel: iscsi: registered transport (tcp)
Mar 6 02:37:20.581380 kernel: iscsi: registered transport (qla4xxx)
Mar 6 02:37:20.581461 kernel: QLogic iSCSI HBA Driver
Mar 6 02:37:20.608311 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 02:37:20.639925 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 02:37:20.644704 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 02:37:20.719894 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 02:37:20.724772 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 02:37:20.792620 kernel: raid6: avx2x4 gen() 27461 MB/s
Mar 6 02:37:20.810630 kernel: raid6: avx2x2 gen() 28568 MB/s
Mar 6 02:37:20.830921 kernel: raid6: avx2x1 gen() 17998 MB/s
Mar 6 02:37:20.830993 kernel: raid6: using algorithm avx2x2 gen() 28568 MB/s
Mar 6 02:37:20.851602 kernel: raid6: .... xor() 26124 MB/s, rmw enabled
Mar 6 02:37:20.851655 kernel: raid6: using avx2x2 recovery algorithm
Mar 6 02:37:20.873624 kernel: xor: automatically using best checksumming function avx
Mar 6 02:37:21.042585 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 02:37:21.052907 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 02:37:21.059486 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 02:37:21.107471 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Mar 6 02:37:21.118793 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 02:37:21.120363 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 02:37:21.150695 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Mar 6 02:37:21.201225 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 02:37:21.209433 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 02:37:21.322809 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 02:37:21.330070 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 02:37:21.381650 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 6 02:37:21.404625 kernel: cryptd: max_cpu_qlen set to 1000
Mar 6 02:37:21.412732 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 6 02:37:21.430005 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 02:37:21.430036 kernel: GPT:9289727 != 19775487
Mar 6 02:37:21.430054 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 02:37:21.430064 kernel: GPT:9289727 != 19775487
Mar 6 02:37:21.432608 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 02:37:21.431955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 02:37:21.442250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 02:37:21.432150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:37:21.445340 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:37:21.449443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 02:37:21.455132 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 02:37:21.462680 kernel: libata version 3.00 loaded.
Mar 6 02:37:21.469583 kernel: AES CTR mode by8 optimization enabled
Mar 6 02:37:21.478590 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 6 02:37:21.488682 kernel: ahci 0000:00:1f.2: version 3.0
Mar 6 02:37:21.488896 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 6 02:37:21.503586 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 6 02:37:21.503806 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 6 02:37:21.503975 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 6 02:37:21.518740 kernel: scsi host0: ahci
Mar 6 02:37:21.521311 kernel: scsi host1: ahci
Mar 6 02:37:21.521745 kernel: scsi host2: ahci
Mar 6 02:37:21.522050 kernel: scsi host3: ahci
Mar 6 02:37:21.522363 kernel: scsi host4: ahci
Mar 6 02:37:21.523617 kernel: scsi host5: ahci
Mar 6 02:37:21.523817 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Mar 6 02:37:21.523830 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Mar 6 02:37:21.523840 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Mar 6 02:37:21.523850 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Mar 6 02:37:21.523860 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Mar 6 02:37:21.523869 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Mar 6 02:37:21.536519 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 6 02:37:21.651924 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:37:21.672370 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 6 02:37:21.681799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 02:37:21.688213 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 6 02:37:21.688325 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 6 02:37:21.703301 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 02:37:21.735111 disk-uuid[617]: Primary Header is updated.
Mar 6 02:37:21.735111 disk-uuid[617]: Secondary Entries is updated.
Mar 6 02:37:21.735111 disk-uuid[617]: Secondary Header is updated.
Mar 6 02:37:21.745212 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 02:37:21.832604 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 6 02:37:21.837601 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 6 02:37:21.837625 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 6 02:37:21.837637 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 6 02:37:21.841028 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 6 02:37:21.841620 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 6 02:37:21.843618 kernel: ata3.00: LPM support broken, forcing max_power
Mar 6 02:37:21.846672 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 6 02:37:21.846696 kernel: ata3.00: applying bridge limits
Mar 6 02:37:21.849952 kernel: ata3.00: LPM support broken, forcing max_power
Mar 6 02:37:21.849969 kernel: ata3.00: configured for UDMA/100
Mar 6 02:37:21.854620 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 6 02:37:21.919365 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 6 02:37:21.919671 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 6 02:37:21.951598 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 6 02:37:22.377386 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 02:37:22.381057 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 02:37:22.387001 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 02:37:22.387098 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 02:37:22.394481 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 02:37:22.428525 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 02:37:22.761629 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 02:37:22.762474 disk-uuid[618]: The operation has completed successfully.
Mar 6 02:37:22.802665 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 02:37:22.802867 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 02:37:22.850063 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 02:37:22.877382 sh[646]: Success
Mar 6 02:37:22.901782 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 02:37:22.901872 kernel: device-mapper: uevent: version 1.0.3
Mar 6 02:37:22.904711 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 6 02:37:22.919629 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 6 02:37:22.965478 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 02:37:22.968325 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 02:37:22.991582 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 02:37:23.004580 kernel: BTRFS: device fsid 1235dd15-5252-4928-9c6c-372370c6bfca devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (658)
Mar 6 02:37:23.010723 kernel: BTRFS info (device dm-0): first mount of filesystem 1235dd15-5252-4928-9c6c-372370c6bfca
Mar 6 02:37:23.010767 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 6 02:37:23.022598 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 6 02:37:23.022666 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 6 02:37:23.024471 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 02:37:23.025153 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 02:37:23.032294 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 02:37:23.033309 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 02:37:23.038230 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 02:37:23.091623 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (691)
Mar 6 02:37:23.091671 kernel: BTRFS info (device vda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 02:37:23.095690 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 02:37:23.104836 kernel: BTRFS info (device vda6): turning on async discard
Mar 6 02:37:23.104872 kernel: BTRFS info (device vda6): enabling free space tree
Mar 6 02:37:23.113639 kernel: BTRFS info (device vda6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 02:37:23.115268 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 02:37:23.123036 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 02:37:23.213397 ignition[746]: Ignition 2.22.0
Mar 6 02:37:23.213434 ignition[746]: Stage: fetch-offline
Mar 6 02:37:23.213470 ignition[746]: no configs at "/usr/lib/ignition/base.d"
Mar 6 02:37:23.213481 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 02:37:23.213620 ignition[746]: parsed url from cmdline: ""
Mar 6 02:37:23.213625 ignition[746]: no config URL provided
Mar 6 02:37:23.213631 ignition[746]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 02:37:23.225447 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 02:37:23.213641 ignition[746]: no config at "/usr/lib/ignition/user.ign"
Mar 6 02:37:23.231779 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 02:37:23.213700 ignition[746]: op(1): [started] loading QEMU firmware config module
Mar 6 02:37:23.213706 ignition[746]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 6 02:37:23.222960 ignition[746]: op(1): [finished] loading QEMU firmware config module
Mar 6 02:37:23.287022 systemd-networkd[836]: lo: Link UP
Mar 6 02:37:23.287056 systemd-networkd[836]: lo: Gained carrier
Mar 6 02:37:23.289137 systemd-networkd[836]: Enumeration completed
Mar 6 02:37:23.289628 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 02:37:23.291892 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 02:37:23.291897 systemd-networkd[836]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 02:37:23.293029 systemd-networkd[836]: eth0: Link UP
Mar 6 02:37:23.293292 systemd-networkd[836]: eth0: Gained carrier
Mar 6 02:37:23.293302 systemd-networkd[836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 02:37:23.297356 systemd[1]: Reached target network.target - Network.
Mar 6 02:37:23.337666 systemd-networkd[836]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 02:37:23.429737 ignition[746]: parsing config with SHA512: 85c00b6e25822da8ed8cb1d6edf795d3ad99ead7d6e091d2d907e4209efce5ea4a86e1ee5c8f2f56fcb9d9e9743c1c3b8b58c5946e0ff4d43a6000ffdedc9a4e
Mar 6 02:37:23.435509 unknown[746]: fetched base config from "system"
Mar 6 02:37:23.436426 unknown[746]: fetched user config from "qemu"
Mar 6 02:37:23.436852 ignition[746]: fetch-offline: fetch-offline passed
Mar 6 02:37:23.439269 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 02:37:23.436941 ignition[746]: Ignition finished successfully
Mar 6 02:37:23.446721 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 6 02:37:23.448272 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 02:37:23.490474 ignition[841]: Ignition 2.22.0
Mar 6 02:37:23.490509 ignition[841]: Stage: kargs
Mar 6 02:37:23.490683 ignition[841]: no configs at "/usr/lib/ignition/base.d"
Mar 6 02:37:23.490695 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 02:37:23.491390 ignition[841]: kargs: kargs passed
Mar 6 02:37:23.491442 ignition[841]: Ignition finished successfully
Mar 6 02:37:23.505486 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 02:37:23.507268 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 02:37:23.556451 ignition[849]: Ignition 2.22.0
Mar 6 02:37:23.556485 ignition[849]: Stage: disks
Mar 6 02:37:23.556702 ignition[849]: no configs at "/usr/lib/ignition/base.d"
Mar 6 02:37:23.556715 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 02:37:23.557996 ignition[849]: disks: disks passed
Mar 6 02:37:23.564041 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 02:37:23.558067 ignition[849]: Ignition finished successfully
Mar 6 02:37:23.567160 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 02:37:23.577161 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 02:37:23.582825 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 02:37:23.589208 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 02:37:23.596862 systemd[1]: Reached target basic.target - Basic System.
Mar 6 02:37:23.604772 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 02:37:23.643340 systemd-fsck[859]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 6 02:37:23.650118 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 02:37:23.651622 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 02:37:23.799624 kernel: EXT4-fs (vda9): mounted filesystem 16ab7223-a8af-43d2-ad40-7e1bf0ff2a89 r/w with ordered data mode. Quota mode: none.
Mar 6 02:37:23.800228 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 02:37:23.801217 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 02:37:23.810036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 02:37:23.814759 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 02:37:23.818390 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 02:37:23.818437 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 02:37:23.856295 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867)
Mar 6 02:37:23.856321 kernel: BTRFS info (device vda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 02:37:23.856333 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 02:37:23.856344 kernel: BTRFS info (device vda6): turning on async discard
Mar 6 02:37:23.856354 kernel: BTRFS info (device vda6): enabling free space tree
Mar 6 02:37:23.818463 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 02:37:23.828873 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 02:37:23.834268 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 02:37:23.857646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 02:37:23.893785 initrd-setup-root[891]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 02:37:23.901891 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory
Mar 6 02:37:23.907824 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 02:37:23.912920 initrd-setup-root[912]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 02:37:24.051353 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 02:37:24.059234 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 02:37:24.062929 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 02:37:24.084409 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 02:37:24.089658 kernel: BTRFS info (device vda6): last unmount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 02:37:24.104375 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 02:37:24.127952 ignition[981]: INFO : Ignition 2.22.0
Mar 6 02:37:24.127952 ignition[981]: INFO : Stage: mount
Mar 6 02:37:24.133450 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 02:37:24.133450 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 02:37:24.133450 ignition[981]: INFO : mount: mount passed
Mar 6 02:37:24.133450 ignition[981]: INFO : Ignition finished successfully
Mar 6 02:37:24.132990 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 02:37:24.148048 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 02:37:24.182781 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 02:37:24.204615 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (993)
Mar 6 02:37:24.210482 kernel: BTRFS info (device vda6): first mount of filesystem 993ea71e-e97d-4f5e-b5c7-fdac31a53b6b
Mar 6 02:37:24.210525 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 02:37:24.217488 kernel: BTRFS info (device vda6): turning on async discard
Mar 6 02:37:24.217514 kernel: BTRFS info (device vda6): enabling free space tree
Mar 6 02:37:24.219702 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 02:37:24.266319 ignition[1010]: INFO : Ignition 2.22.0
Mar 6 02:37:24.266319 ignition[1010]: INFO : Stage: files
Mar 6 02:37:24.271480 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 02:37:24.271480 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 02:37:24.271480 ignition[1010]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 02:37:24.281669 ignition[1010]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 02:37:24.281669 ignition[1010]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 02:37:24.291453 ignition[1010]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 02:37:24.291453 ignition[1010]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 02:37:24.291453 ignition[1010]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 02:37:24.291453 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 02:37:24.291453 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 6 02:37:24.287613 unknown[1010]: wrote ssh authorized keys file for user: core
Mar 6 02:37:24.360420 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 6 02:37:24.469177 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 02:37:24.469177 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 6 02:37:24.479007 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 6 02:37:24.576823 systemd-networkd[836]: eth0: Gained IPv6LL
Mar 6 02:37:24.603418 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 6 02:37:24.789109 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 6 02:37:24.789109 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 02:37:24.798437 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 02:37:24.798437 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 02:37:24.798437 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 02:37:24.798437 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 02:37:24.798437 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 02:37:24.798437 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 02:37:24.798437 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 02:37:24.831461 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 02:37:24.831461 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 02:37:24.831461 ignition[1010]:
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 02:37:24.831461 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 02:37:24.831461 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 02:37:24.831461 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 6 02:37:25.040913 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 6 02:37:25.545557 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 6 02:37:25.545557 ignition[1010]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 6 02:37:25.556266 ignition[1010]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 02:37:25.556266 ignition[1010]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 02:37:25.556266 ignition[1010]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 6 02:37:25.556266 ignition[1010]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 6 02:37:25.556266 ignition[1010]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 02:37:25.556266 ignition[1010]: INFO : files: op(e): op(f): [finished] writing unit
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 02:37:25.556266 ignition[1010]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 6 02:37:25.556266 ignition[1010]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 6 02:37:25.598420 ignition[1010]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 02:37:25.598420 ignition[1010]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 02:37:25.598420 ignition[1010]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 6 02:37:25.598420 ignition[1010]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 02:37:25.598420 ignition[1010]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 02:37:25.598420 ignition[1010]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 02:37:25.598420 ignition[1010]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 02:37:25.598420 ignition[1010]: INFO : files: files passed
Mar 6 02:37:25.598420 ignition[1010]: INFO : Ignition finished successfully
Mar 6 02:37:25.587124 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 02:37:25.593169 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 6 02:37:25.611057 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 6 02:37:25.617954 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 6 02:37:25.659024 initrd-setup-root-after-ignition[1038]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 6 02:37:25.618126 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 6 02:37:25.665705 initrd-setup-root-after-ignition[1040]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 02:37:25.665705 initrd-setup-root-after-ignition[1040]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 02:37:25.632477 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 02:37:25.677157 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 02:37:25.636670 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 6 02:37:25.642459 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 02:37:25.722456 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 02:37:25.722723 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 02:37:25.725986 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 02:37:25.731625 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 02:37:25.736747 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 02:37:25.737966 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 02:37:25.776404 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 02:37:25.784240 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 02:37:25.826288 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 02:37:25.826610 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 02:37:25.832518 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 02:37:25.838439 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 02:37:25.838668 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 02:37:25.851498 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 02:37:25.856924 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 02:37:25.859514 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 02:37:25.861902 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 02:37:25.867413 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 02:37:25.878800 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 02:37:25.879074 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 02:37:25.891268 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 02:37:25.894774 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 02:37:25.903230 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 02:37:25.906126 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 02:37:25.910762 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 02:37:25.910929 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 02:37:25.918238 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:37:25.923809 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:37:25.927074 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 02:37:25.927493 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:37:25.938076 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 02:37:25.938245 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 02:37:25.949422 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 02:37:25.949869 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 02:37:25.952896 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 02:37:25.958272 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 02:37:25.962470 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:37:25.972099 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 02:37:25.972405 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 02:37:25.979454 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 02:37:25.979682 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 02:37:25.982041 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 02:37:25.982219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 02:37:25.989027 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 02:37:25.989253 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 02:37:25.991464 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 02:37:25.991685 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 02:37:26.004721 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 02:37:26.014986 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 02:37:26.017455 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 02:37:26.017665 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 02:37:26.023309 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 02:37:26.023429 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 02:37:26.042065 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 02:37:26.042312 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 02:37:26.065970 ignition[1066]: INFO : Ignition 2.22.0
Mar 6 02:37:26.068328 ignition[1066]: INFO : Stage: umount
Mar 6 02:37:26.068328 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 02:37:26.068328 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 02:37:26.076466 ignition[1066]: INFO : umount: umount passed
Mar 6 02:37:26.076466 ignition[1066]: INFO : Ignition finished successfully
Mar 6 02:37:26.069762 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 02:37:26.077644 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 02:37:26.077855 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 02:37:26.083618 systemd[1]: Stopped target network.target - Network.
Mar 6 02:37:26.086134 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 02:37:26.086305 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 02:37:26.092817 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 02:37:26.092894 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 02:37:26.095278 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 02:37:26.095366 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 02:37:26.103125 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 02:37:26.103246 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 02:37:26.108366 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 02:37:26.117946 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 02:37:26.128667 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 02:37:26.128894 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 02:37:26.143264 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 6 02:37:26.143778 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 02:37:26.143976 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 6 02:37:26.147127 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 02:37:26.147270 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 02:37:26.152293 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 02:37:26.152402 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 02:37:26.165125 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 6 02:37:26.168274 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 02:37:26.168621 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 02:37:26.176226 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 6 02:37:26.176385 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 6 02:37:26.177093 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 02:37:26.177136 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 02:37:26.191730 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 02:37:26.197765 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 02:37:26.197851 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 02:37:26.204175 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 02:37:26.204301 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:37:26.213734 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 02:37:26.213806 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 02:37:26.216514 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 02:37:26.219709 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 6 02:37:26.246092 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 02:37:26.246325 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 02:37:26.273892 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 02:37:26.276661 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 02:37:26.277130 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 02:37:26.277222 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 02:37:26.283027 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 02:37:26.283084 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 02:37:26.290457 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 02:37:26.290599 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 02:37:26.300706 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 02:37:26.300790 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 02:37:26.307980 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 02:37:26.308069 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 02:37:26.321851 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 02:37:26.324821 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 6 02:37:26.324885 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 02:37:26.331173 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 02:37:26.331279 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 02:37:26.340511 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 02:37:26.340636 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 02:37:26.352264 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 02:37:26.352327 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 02:37:26.362846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 02:37:26.362906 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 02:37:26.375447 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 02:37:26.375622 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 02:37:26.385317 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 02:37:26.389702 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 02:37:26.426346 systemd[1]: Switching root.
Mar 6 02:37:26.465018 systemd-journald[203]: Journal stopped
Mar 6 02:37:27.990429 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 6 02:37:27.990525 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 02:37:27.990610 kernel: SELinux: policy capability open_perms=1
Mar 6 02:37:27.990628 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 02:37:27.990644 kernel: SELinux: policy capability always_check_network=0
Mar 6 02:37:27.990665 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 02:37:27.990681 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 02:37:27.990697 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 02:37:27.990713 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 02:37:27.990729 kernel: SELinux: policy capability userspace_initial_context=0
Mar 6 02:37:27.990746 kernel: audit: type=1403 audit(1772764646.717:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 02:37:27.990765 systemd[1]: Successfully loaded SELinux policy in 70.577ms.
Mar 6 02:37:27.990799 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.366ms.
Mar 6 02:37:27.990818 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 02:37:27.990842 systemd[1]: Detected virtualization kvm.
Mar 6 02:37:27.990862 systemd[1]: Detected architecture x86-64.
Mar 6 02:37:27.990881 systemd[1]: Detected first boot.
Mar 6 02:37:27.990898 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 02:37:27.990915 zram_generator::config[1112]: No configuration found.
Mar 6 02:37:27.990934 kernel: Guest personality initialized and is inactive
Mar 6 02:37:27.990952 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 6 02:37:27.990970 kernel: Initialized host personality
Mar 6 02:37:27.990987 kernel: NET: Registered PF_VSOCK protocol family
Mar 6 02:37:27.991008 systemd[1]: Populated /etc with preset unit settings.
Mar 6 02:37:27.991026 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 6 02:37:27.991043 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 02:37:27.991061 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 02:37:27.991077 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 02:37:27.991094 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 02:37:27.991112 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 02:37:27.991130 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 02:37:27.991152 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 02:37:27.991172 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 02:37:27.991238 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 02:37:27.991270 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 02:37:27.991289 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 02:37:27.991307 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 02:37:27.991326 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 02:37:27.991347 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 02:37:27.991366 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 02:37:27.991403 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 02:37:27.991422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 02:37:27.991440 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 02:37:27.991460 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 02:37:27.991479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 02:37:27.991497 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 02:37:27.991514 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 02:37:27.991595 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 02:37:27.991624 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 02:37:27.991643 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 02:37:27.991662 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 02:37:27.991687 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 02:37:27.991706 systemd[1]: Reached target swap.target - Swaps.
Mar 6 02:37:27.991727 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 02:37:27.991746 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 02:37:27.991766 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 6 02:37:27.991785 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 02:37:27.991809 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 02:37:27.991828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 02:37:27.991846 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 02:37:27.991865 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 02:37:27.991884 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 02:37:27.991903 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 02:37:27.991921 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:37:27.991939 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 02:37:27.991958 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 02:37:27.991981 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 02:37:27.991999 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 02:37:27.992018 systemd[1]: Reached target machines.target - Containers.
Mar 6 02:37:27.992037 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 02:37:27.992057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 02:37:27.992075 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 02:37:27.992093 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 02:37:27.992111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 02:37:27.992136 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 02:37:27.992155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 02:37:27.992173 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 02:37:27.992234 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 02:37:27.992255 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 02:37:27.992274 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 02:37:27.992292 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 02:37:27.992311 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 02:37:27.992338 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 02:37:27.992360 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 02:37:27.992380 kernel: ACPI: bus type drm_connector registered
Mar 6 02:37:27.992397 kernel: fuse: init (API version 7.41)
Mar 6 02:37:27.992414 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 02:37:27.992432 kernel: loop: module loaded
Mar 6 02:37:27.992450 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 02:37:27.992468 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 02:37:27.992496 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 02:37:27.992520 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 6 02:37:27.992596 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 02:37:27.992663 systemd-journald[1197]: Collecting audit messages is disabled.
Mar 6 02:37:27.992697 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 02:37:27.992724 systemd[1]: Stopped verity-setup.service.
Mar 6 02:37:27.992743 systemd-journald[1197]: Journal started
Mar 6 02:37:27.992774 systemd-journald[1197]: Runtime Journal (/run/log/journal/c3e90e90d7794b1492ac238eb2feefda) is 6M, max 48.3M, 42.2M free.
Mar 6 02:37:27.492055 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 02:37:27.503629 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 6 02:37:27.504394 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 02:37:28.001587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 02:37:28.007603 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 02:37:28.012409 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 02:37:28.016829 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 02:37:28.020457 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 02:37:28.023631 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 02:37:28.027257 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 02:37:28.030306 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 02:37:28.033492 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 02:37:28.037277 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 02:37:28.041288 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 02:37:28.041675 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 02:37:28.045454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 02:37:28.045876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 02:37:28.049454 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 02:37:28.049779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 02:37:28.053098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 02:37:28.053476 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 02:37:28.057692 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 02:37:28.058057 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 02:37:28.061596 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 02:37:28.061903 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 02:37:28.065831 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 02:37:28.069596 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 02:37:28.073696 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 02:37:28.077733 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 6 02:37:28.098782 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 02:37:28.103952 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 02:37:28.109086 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 02:37:28.112331 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 02:37:28.112391 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 02:37:28.116993 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 6 02:37:28.124736 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 02:37:28.127656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 02:37:28.129273 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 02:37:28.135730 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 02:37:28.138967 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 02:37:28.153748 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 02:37:28.157067 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 02:37:28.158737 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 02:37:28.167968 systemd-journald[1197]: Time spent on flushing to /var/log/journal/c3e90e90d7794b1492ac238eb2feefda is 19.199ms for 974 entries.
Mar 6 02:37:28.167968 systemd-journald[1197]: System Journal (/var/log/journal/c3e90e90d7794b1492ac238eb2feefda) is 8M, max 195.6M, 187.6M free.
Mar 6 02:37:28.211794 systemd-journald[1197]: Received client request to flush runtime journal.
Mar 6 02:37:28.211846 kernel: loop0: detected capacity change from 0 to 128560
Mar 6 02:37:28.171830 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 02:37:28.178724 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 02:37:28.185473 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 02:37:28.192518 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 02:37:28.197304 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 02:37:28.201831 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 02:37:28.207914 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 02:37:28.218067 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 6 02:37:28.223527 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 02:37:28.236473 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 02:37:28.237853 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 02:37:28.244044 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Mar 6 02:37:28.244078 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Mar 6 02:37:28.253289 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 02:37:28.260604 kernel: loop1: detected capacity change from 0 to 228704
Mar 6 02:37:28.262899 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 02:37:28.263944 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 6 02:37:28.272353 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 02:37:28.300590 kernel: loop2: detected capacity change from 0 to 110984
Mar 6 02:37:28.323454 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 02:37:28.330789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 02:37:28.345592 kernel: loop3: detected capacity change from 0 to 128560
Mar 6 02:37:28.365747 kernel: loop4: detected capacity change from 0 to 228704
Mar 6 02:37:28.371345 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Mar 6 02:37:28.371374 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Mar 6 02:37:28.376983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 02:37:28.385588 kernel: loop5: detected capacity change from 0 to 110984
Mar 6 02:37:28.400817 (sd-merge)[1257]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 6 02:37:28.401648 (sd-merge)[1257]: Merged extensions into '/usr'.
Mar 6 02:37:28.406652 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 02:37:28.406698 systemd[1]: Reloading...
Mar 6 02:37:28.476596 zram_generator::config[1286]: No configuration found.
Mar 6 02:37:28.551323 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 6 02:37:28.683749 systemd[1]: Reloading finished in 276 ms.
Mar 6 02:37:28.717025 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 02:37:28.720435 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 6 02:37:28.724038 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 02:37:28.745369 systemd[1]: Starting ensure-sysext.service...
Mar 6 02:37:28.748748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 02:37:28.753235 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 02:37:28.769030 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)...
Mar 6 02:37:28.769057 systemd[1]: Reloading...
Mar 6 02:37:28.777353 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 6 02:37:28.777400 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 6 02:37:28.777815 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 02:37:28.778100 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 02:37:28.779164 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 02:37:28.779470 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Mar 6 02:37:28.779669 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Mar 6 02:37:28.785223 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 02:37:28.785247 systemd-tmpfiles[1326]: Skipping /boot Mar 6 02:37:28.788059 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Mar 6 02:37:28.801268 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 02:37:28.801286 systemd-tmpfiles[1326]: Skipping /boot Mar 6 02:37:28.854626 zram_generator::config[1362]: No configuration found. Mar 6 02:37:28.987624 kernel: mousedev: PS/2 mouse device common for all mice Mar 6 02:37:29.030610 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 6 02:37:29.030724 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 6 02:37:29.034625 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 6 02:37:29.042736 kernel: ACPI: button: Power Button [PWRF] Mar 6 02:37:29.119611 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 6 02:37:29.120019 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 6 02:37:29.123807 systemd[1]: Reloading finished in 354 ms. Mar 6 02:37:29.135386 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 02:37:29.139446 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 02:37:29.239169 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 6 02:37:29.249274 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 6 02:37:29.255746 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Mar 6 02:37:29.257668 kernel: kvm_amd: TSC scaling supported Mar 6 02:37:29.257773 kernel: kvm_amd: Nested Virtualization enabled Mar 6 02:37:29.257794 kernel: kvm_amd: Nested Paging enabled Mar 6 02:37:29.257808 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 6 02:37:29.261606 kernel: kvm_amd: PMU virtualization is disabled Mar 6 02:37:29.298931 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 6 02:37:29.305816 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 02:37:29.312798 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 02:37:29.317717 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 6 02:37:29.330406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 02:37:29.340892 kernel: EDAC MC: Ver: 3.0.0 Mar 6 02:37:29.340417 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 6 02:37:29.346860 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 6 02:37:29.357038 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:37:29.357268 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 02:37:29.359946 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 02:37:29.369258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 02:37:29.378605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 02:37:29.381896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 6 02:37:29.382065 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 6 02:37:29.382243 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:37:29.385959 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 6 02:37:29.390277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 02:37:29.391168 augenrules[1479]: No rules Mar 6 02:37:29.398411 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 02:37:29.403012 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 02:37:29.403451 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 02:37:29.407308 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 02:37:29.407593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 02:37:29.408133 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 02:37:29.411013 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 02:37:29.417138 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 6 02:37:29.426334 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 6 02:37:29.428373 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 02:37:29.429417 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 02:37:29.431868 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 6 02:37:29.431975 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 02:37:29.441725 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 6 02:37:29.454696 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 6 02:37:29.512838 systemd-networkd[1447]: lo: Link UP Mar 6 02:37:29.512849 systemd-networkd[1447]: lo: Gained carrier Mar 6 02:37:29.515770 systemd-networkd[1447]: Enumeration completed Mar 6 02:37:29.516634 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 02:37:29.516719 systemd-networkd[1447]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 02:37:29.517704 systemd-networkd[1447]: eth0: Link UP Mar 6 02:37:29.518027 systemd-networkd[1447]: eth0: Gained carrier Mar 6 02:37:29.518101 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 02:37:29.523812 systemd-resolved[1448]: Positive Trust Anchors: Mar 6 02:37:29.523851 systemd-resolved[1448]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 02:37:29.523879 systemd-resolved[1448]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 02:37:29.528323 systemd-resolved[1448]: Defaulting to hostname 'linux'. Mar 6 02:37:29.536676 systemd-networkd[1447]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 6 02:37:29.580517 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 02:37:29.584098 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 02:37:29.587730 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 02:37:29.594101 systemd[1]: Reached target network.target - Network. Mar 6 02:37:29.597150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 02:37:29.601287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:37:29.601653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 02:37:29.603653 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 02:37:29.609262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 02:37:29.618830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 6 02:37:29.622731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 02:37:29.622878 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 6 02:37:29.624528 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 6 02:37:29.629063 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 6 02:37:29.632495 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 02:37:29.632749 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:37:29.634673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 02:37:29.634966 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 02:37:29.638840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 02:37:29.639120 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 02:37:29.642881 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 02:37:29.643156 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 02:37:29.653694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:37:29.662114 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Mar 6 02:37:29.666277 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 02:37:29.667714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 02:37:29.672108 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 02:37:29.676515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 02:37:29.683696 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 02:37:29.686717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 02:37:29.686778 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 6 02:37:29.686830 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 02:37:29.686852 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 02:37:29.688796 systemd[1]: Finished ensure-sysext.service. Mar 6 02:37:29.692301 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 6 02:37:29.696800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 02:37:29.698684 augenrules[1514]: /sbin/augenrules: No change Mar 6 02:37:29.701074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 02:37:29.705773 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 02:37:29.706145 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 6 02:37:29.710220 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 02:37:29.710526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 02:37:29.715037 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 02:37:29.715408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 02:37:29.720351 augenrules[1535]: No rules Mar 6 02:37:29.722315 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 02:37:29.722718 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 02:37:29.731937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 02:37:29.732038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 02:37:29.735108 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 6 02:37:29.889284 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 6 02:37:29.894484 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 02:37:30.575675 systemd-timesyncd[1547]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 6 02:37:30.575748 systemd-timesyncd[1547]: Initial clock synchronization to Fri 2026-03-06 02:37:30.575466 UTC. Mar 6 02:37:30.578260 systemd-resolved[1448]: Clock change detected. Flushing caches. Mar 6 02:37:30.578385 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 6 02:37:30.583225 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 6 02:37:30.590218 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 6 02:37:30.595686 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 6 02:37:30.603546 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 6 02:37:30.603734 systemd[1]: Reached target paths.target - Path Units. Mar 6 02:37:30.607300 systemd[1]: Reached target time-set.target - System Time Set. Mar 6 02:37:30.610481 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 6 02:37:30.614354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 6 02:37:30.617883 systemd[1]: Reached target timers.target - Timer Units. Mar 6 02:37:30.622428 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 6 02:37:30.630447 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 6 02:37:30.635512 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 6 02:37:30.639284 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 6 02:37:30.642821 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 6 02:37:30.651812 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 6 02:37:30.655332 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 6 02:37:30.660147 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 6 02:37:30.665051 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 02:37:30.667774 systemd[1]: Reached target basic.target - Basic System. Mar 6 02:37:30.670380 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 6 02:37:30.670456 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 6 02:37:30.672229 systemd[1]: Starting containerd.service - containerd container runtime... 
Mar 6 02:37:30.676600 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 6 02:37:30.688771 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 6 02:37:30.694470 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 6 02:37:30.700099 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 6 02:37:30.704176 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 6 02:37:30.706046 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 6 02:37:30.713150 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 6 02:37:30.720513 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 6 02:37:30.727046 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing passwd entry cache Mar 6 02:37:30.727348 jq[1554]: false Mar 6 02:37:30.724423 oslogin_cache_refresh[1556]: Refreshing passwd entry cache Mar 6 02:37:30.729187 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 6 02:37:30.731699 extend-filesystems[1555]: Found /dev/vda6 Mar 6 02:37:30.736174 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 6 02:37:30.743194 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 6 02:37:30.744325 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 6 02:37:30.745019 extend-filesystems[1555]: Found /dev/vda9 Mar 6 02:37:30.746125 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting users, quitting Mar 6 02:37:30.746125 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Mar 6 02:37:30.746104 oslogin_cache_refresh[1556]: Failure getting users, quitting Mar 6 02:37:30.746129 oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 6 02:37:30.746235 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing group entry cache Mar 6 02:37:30.746187 oslogin_cache_refresh[1556]: Refreshing group entry cache Mar 6 02:37:30.750003 extend-filesystems[1555]: Checking size of /dev/vda9 Mar 6 02:37:30.755431 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 6 02:37:30.759929 systemd[1]: Starting update-engine.service - Update Engine... Mar 6 02:37:30.760745 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting groups, quitting Mar 6 02:37:30.760745 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 6 02:37:30.760722 oslogin_cache_refresh[1556]: Failure getting groups, quitting Mar 6 02:37:30.760741 oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 6 02:37:30.765373 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 6 02:37:30.769332 extend-filesystems[1555]: Resized partition /dev/vda9 Mar 6 02:37:30.773188 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 6 02:37:30.779895 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 6 02:37:30.780300 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 6 02:37:30.780914 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Mar 6 02:37:30.782365 extend-filesystems[1581]: resize2fs 1.47.3 (8-Jul-2025) Mar 6 02:37:30.796407 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 6 02:37:30.795154 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 6 02:37:30.799647 systemd[1]: motdgen.service: Deactivated successfully. Mar 6 02:37:30.800138 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 6 02:37:30.806728 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 6 02:37:30.807052 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 6 02:37:30.808478 jq[1577]: true Mar 6 02:37:30.831121 update_engine[1574]: I20260306 02:37:30.827826 1574 main.cc:92] Flatcar Update Engine starting Mar 6 02:37:30.843452 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 6 02:37:30.863190 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 6 02:37:30.863278 jq[1584]: true Mar 6 02:37:30.885015 extend-filesystems[1581]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 6 02:37:30.885015 extend-filesystems[1581]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 6 02:37:30.885015 extend-filesystems[1581]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 6 02:37:30.903269 extend-filesystems[1555]: Resized filesystem in /dev/vda9 Mar 6 02:37:30.892721 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 6 02:37:30.893127 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 6 02:37:30.913064 tar[1583]: linux-amd64/LICENSE Mar 6 02:37:30.913064 tar[1583]: linux-amd64/helm Mar 6 02:37:30.930292 dbus-daemon[1552]: [system] SELinux support is enabled Mar 6 02:37:30.930894 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 6 02:37:30.938256 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 6 02:37:30.938287 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 6 02:37:30.942073 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 6 02:37:30.942099 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 6 02:37:30.953800 systemd-logind[1568]: Watching system buttons on /dev/input/event2 (Power Button) Mar 6 02:37:30.954268 systemd-logind[1568]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 6 02:37:30.955865 systemd[1]: Started update-engine.service - Update Engine. Mar 6 02:37:30.959156 update_engine[1574]: I20260306 02:37:30.956746 1574 update_check_scheduler.cc:74] Next update check in 7m59s Mar 6 02:37:30.961003 systemd-logind[1568]: New seat seat0. Mar 6 02:37:30.963046 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 6 02:37:30.966162 systemd[1]: Started systemd-logind.service - User Login Management. Mar 6 02:37:31.015172 bash[1616]: Updated "/home/core/.ssh/authorized_keys" Mar 6 02:37:31.016843 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 6 02:37:31.025268 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 6 02:37:31.055047 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 6 02:37:31.174291 kernel: hrtimer: interrupt took 9684269 ns Mar 6 02:37:31.319682 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 02:37:31.368547 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 02:37:31.377269 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 02:37:31.543930 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 02:37:31.544552 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 02:37:31.552662 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 02:37:31.587825 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 6 02:37:31.596237 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 6 02:37:31.603665 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 02:37:31.607705 systemd[1]: Reached target getty.target - Login Prompts. Mar 6 02:37:31.658434 systemd-networkd[1447]: eth0: Gained IPv6LL Mar 6 02:37:31.940843 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 02:37:31.949326 systemd[1]: Reached target network-online.target - Network is Online. Mar 6 02:37:31.957658 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 6 02:37:31.965276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:37:32.037034 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 6 02:37:32.089763 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 6 02:37:32.098721 containerd[1585]: time="2026-03-06T02:37:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 6 02:37:32.100858 containerd[1585]: time="2026-03-06T02:37:32.100813884Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 6 02:37:32.238582 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 6 02:37:32.239169 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 6 02:37:32.245394 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 6 02:37:32.257481 containerd[1585]: time="2026-03-06T02:37:32.257413584Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="140.514µs" Mar 6 02:37:32.257481 containerd[1585]: time="2026-03-06T02:37:32.257457605Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 6 02:37:32.257641 containerd[1585]: time="2026-03-06T02:37:32.257554106Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 6 02:37:32.258169 containerd[1585]: time="2026-03-06T02:37:32.258107329Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 6 02:37:32.258219 containerd[1585]: time="2026-03-06T02:37:32.258191816Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 6 02:37:32.258443 containerd[1585]: time="2026-03-06T02:37:32.258307432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 6 02:37:32.258826 containerd[1585]: time="2026-03-06T02:37:32.258738457Z" level=info msg="skip loading plugin" error="no 
scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 6 02:37:32.258826 containerd[1585]: time="2026-03-06T02:37:32.258771719Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 6 02:37:32.259400 containerd[1585]: time="2026-03-06T02:37:32.259336894Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 6 02:37:32.259400 containerd[1585]: time="2026-03-06T02:37:32.259393079Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 6 02:37:32.259468 containerd[1585]: time="2026-03-06T02:37:32.259412385Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 6 02:37:32.259468 containerd[1585]: time="2026-03-06T02:37:32.259425128Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 6 02:37:32.259777 containerd[1585]: time="2026-03-06T02:37:32.259731962Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 6 02:37:32.260568 containerd[1585]: time="2026-03-06T02:37:32.260512358Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 6 02:37:32.260568 containerd[1585]: time="2026-03-06T02:37:32.260564556Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 6 02:37:32.260688 containerd[1585]: time="2026-03-06T02:37:32.260581437Z" level=info 
msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 6 02:37:32.260797 containerd[1585]: time="2026-03-06T02:37:32.260760392Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 6 02:37:32.262298 containerd[1585]: time="2026-03-06T02:37:32.262206672Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 6 02:37:32.262469 containerd[1585]: time="2026-03-06T02:37:32.262398760Z" level=info msg="metadata content store policy set" policy=shared Mar 6 02:37:32.273832 containerd[1585]: time="2026-03-06T02:37:32.273684080Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 6 02:37:32.274108 containerd[1585]: time="2026-03-06T02:37:32.273943513Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 6 02:37:32.274159 containerd[1585]: time="2026-03-06T02:37:32.274105957Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 6 02:37:32.274159 containerd[1585]: time="2026-03-06T02:37:32.274133218Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 6 02:37:32.274224 containerd[1585]: time="2026-03-06T02:37:32.274182260Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 6 02:37:32.274261 containerd[1585]: time="2026-03-06T02:37:32.274231992Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 6 02:37:32.274332 containerd[1585]: time="2026-03-06T02:37:32.274288127Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 6 02:37:32.274566 containerd[1585]: time="2026-03-06T02:37:32.274383294Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Mar 6 02:37:32.275360 containerd[1585]: time="2026-03-06T02:37:32.275262847Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 6 02:37:32.275360 containerd[1585]: time="2026-03-06T02:37:32.275340562Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 6 02:37:32.275360 containerd[1585]: time="2026-03-06T02:37:32.275354498Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291090614Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291477356Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291565701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291653565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291674624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291695223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291734356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291752710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 6 02:37:32.292018 containerd[1585]: 
time="2026-03-06T02:37:32.291771485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291811199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291837008Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 6 02:37:32.292018 containerd[1585]: time="2026-03-06T02:37:32.291858057Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 6 02:37:32.293326 containerd[1585]: time="2026-03-06T02:37:32.293299057Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 6 02:37:32.293594 containerd[1585]: time="2026-03-06T02:37:32.293566897Z" level=info msg="Start snapshots syncer" Mar 6 02:37:32.294278 containerd[1585]: time="2026-03-06T02:37:32.294199037Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 6 02:37:32.295451 containerd[1585]: time="2026-03-06T02:37:32.295394850Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 6 02:37:32.295751 containerd[1585]: time="2026-03-06T02:37:32.295726519Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 6 02:37:32.299113 containerd[1585]: time="2026-03-06T02:37:32.299045546Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 6 02:37:32.299564 containerd[1585]: time="2026-03-06T02:37:32.299537113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 6 02:37:32.299694 containerd[1585]: time="2026-03-06T02:37:32.299676182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 6 02:37:32.299771 containerd[1585]: time="2026-03-06T02:37:32.299755971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 6 02:37:32.299820 containerd[1585]: time="2026-03-06T02:37:32.299808290Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 6 02:37:32.300012 containerd[1585]: time="2026-03-06T02:37:32.299918866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 6 02:37:32.300107 containerd[1585]: time="2026-03-06T02:37:32.300083934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 6 02:37:32.300191 containerd[1585]: time="2026-03-06T02:37:32.300170225Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 6 02:37:32.300360 containerd[1585]: time="2026-03-06T02:37:32.300337146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 6 02:37:32.300433 containerd[1585]: time="2026-03-06T02:37:32.300419080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 6 02:37:32.300521 containerd[1585]: time="2026-03-06T02:37:32.300505180Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 6 02:37:32.300731 containerd[1585]: time="2026-03-06T02:37:32.300714611Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301134465Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301153962Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301164771Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301174429Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301214324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301249781Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301306046Z" level=info msg="runtime interface created" Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301314120Z" level=info msg="created NRI interface" Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301323067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301337715Z" level=info msg="Connect containerd service" Mar 6 02:37:32.301505 containerd[1585]: time="2026-03-06T02:37:32.301376267Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 6 02:37:32.304109 containerd[1585]: 
time="2026-03-06T02:37:32.304084313Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 02:37:32.920889 tar[1583]: linux-amd64/README.md Mar 6 02:37:33.189651 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 02:37:33.416400 containerd[1585]: time="2026-03-06T02:37:33.416105335Z" level=info msg="Start subscribing containerd event" Mar 6 02:37:33.417699 containerd[1585]: time="2026-03-06T02:37:33.416906285Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 6 02:37:33.417699 containerd[1585]: time="2026-03-06T02:37:33.417023524Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 6 02:37:33.417773 containerd[1585]: time="2026-03-06T02:37:33.417712040Z" level=info msg="Start recovering state" Mar 6 02:37:33.418388 containerd[1585]: time="2026-03-06T02:37:33.418354819Z" level=info msg="Start event monitor" Mar 6 02:37:33.418586 containerd[1585]: time="2026-03-06T02:37:33.418476467Z" level=info msg="Start cni network conf syncer for default" Mar 6 02:37:33.418676 containerd[1585]: time="2026-03-06T02:37:33.418594928Z" level=info msg="Start streaming server" Mar 6 02:37:33.418826 containerd[1585]: time="2026-03-06T02:37:33.418751791Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 6 02:37:33.418826 containerd[1585]: time="2026-03-06T02:37:33.418817253Z" level=info msg="runtime interface starting up..." Mar 6 02:37:33.419330 containerd[1585]: time="2026-03-06T02:37:33.418851657Z" level=info msg="starting plugins..." Mar 6 02:37:33.419330 containerd[1585]: time="2026-03-06T02:37:33.418996868Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 6 02:37:33.419912 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 6 02:37:33.422164 containerd[1585]: time="2026-03-06T02:37:33.422127022Z" level=info msg="containerd successfully booted in 1.324943s" Mar 6 02:37:34.962085 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 6 02:37:34.968444 systemd[1]: Started sshd@0-10.0.0.110:22-10.0.0.1:54566.service - OpenSSH per-connection server daemon (10.0.0.1:54566). Mar 6 02:37:35.340222 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 54566 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:37:35.342600 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:35.352817 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 02:37:35.357260 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 02:37:35.372673 systemd-logind[1568]: New session 1 of user core. Mar 6 02:37:35.405456 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 02:37:35.451194 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 02:37:35.654680 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 02:37:35.660302 systemd-logind[1568]: New session c1 of user core. Mar 6 02:37:35.983688 systemd[1687]: Queued start job for default target default.target. Mar 6 02:37:36.079827 systemd[1687]: Created slice app.slice - User Application Slice. Mar 6 02:37:36.079929 systemd[1687]: Reached target paths.target - Paths. Mar 6 02:37:36.080103 systemd[1687]: Reached target timers.target - Timers. Mar 6 02:37:36.083057 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 02:37:36.118138 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 02:37:36.118314 systemd[1687]: Reached target sockets.target - Sockets. Mar 6 02:37:36.118363 systemd[1687]: Reached target basic.target - Basic System. 
Mar 6 02:37:36.118413 systemd[1687]: Reached target default.target - Main User Target. Mar 6 02:37:36.118454 systemd[1687]: Startup finished in 446ms. Mar 6 02:37:36.119231 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 02:37:36.131571 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 02:37:36.170161 systemd[1]: Started sshd@1-10.0.0.110:22-10.0.0.1:54568.service - OpenSSH per-connection server daemon (10.0.0.1:54568). Mar 6 02:37:36.486107 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 54568 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:37:36.489720 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:36.499608 systemd-logind[1568]: New session 2 of user core. Mar 6 02:37:36.514255 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 02:37:36.543716 sshd[1705]: Connection closed by 10.0.0.1 port 54568 Mar 6 02:37:36.544442 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:36.550292 systemd[1]: sshd@1-10.0.0.110:22-10.0.0.1:54568.service: Deactivated successfully. Mar 6 02:37:36.552818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:37:36.557392 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 02:37:36.562322 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 02:37:36.565229 systemd-logind[1568]: Session 2 logged out. Waiting for processes to exit. Mar 6 02:37:36.572697 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 02:37:36.574067 systemd[1]: Started sshd@2-10.0.0.110:22-10.0.0.1:54576.service - OpenSSH per-connection server daemon (10.0.0.1:54576). Mar 6 02:37:36.580080 systemd[1]: Startup finished in 3.496s (kernel) + 7.023s (initrd) + 9.251s (userspace) = 19.772s. 
Mar 6 02:37:36.582094 systemd-logind[1568]: Removed session 2. Mar 6 02:37:36.794559 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 54576 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:37:36.795742 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:36.801663 systemd-logind[1568]: New session 3 of user core. Mar 6 02:37:36.808168 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 02:37:36.827441 sshd[1725]: Connection closed by 10.0.0.1 port 54576 Mar 6 02:37:36.828201 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:36.832928 systemd[1]: sshd@2-10.0.0.110:22-10.0.0.1:54576.service: Deactivated successfully. Mar 6 02:37:36.835862 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 02:37:36.838158 systemd-logind[1568]: Session 3 logged out. Waiting for processes to exit. Mar 6 02:37:36.840596 systemd-logind[1568]: Removed session 3. Mar 6 02:37:37.882611 kubelet[1709]: E0306 02:37:37.882512 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 02:37:37.887360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 02:37:37.887691 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 02:37:37.888236 systemd[1]: kubelet.service: Consumed 4.731s CPU time, 268.3M memory peak. Mar 6 02:37:46.850037 systemd[1]: Started sshd@3-10.0.0.110:22-10.0.0.1:34520.service - OpenSSH per-connection server daemon (10.0.0.1:34520). 
Mar 6 02:37:46.924935 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 34520 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:37:46.928930 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:46.935267 systemd-logind[1568]: New session 4 of user core. Mar 6 02:37:46.945170 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 02:37:46.965873 sshd[1736]: Connection closed by 10.0.0.1 port 34520 Mar 6 02:37:46.966595 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:46.983814 systemd[1]: sshd@3-10.0.0.110:22-10.0.0.1:34520.service: Deactivated successfully. Mar 6 02:37:46.985914 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 02:37:46.987351 systemd-logind[1568]: Session 4 logged out. Waiting for processes to exit. Mar 6 02:37:46.989900 systemd[1]: Started sshd@4-10.0.0.110:22-10.0.0.1:34528.service - OpenSSH per-connection server daemon (10.0.0.1:34528). Mar 6 02:37:46.991876 systemd-logind[1568]: Removed session 4. Mar 6 02:37:47.068878 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 34528 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:37:47.070908 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:47.078702 systemd-logind[1568]: New session 5 of user core. Mar 6 02:37:47.086122 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 02:37:47.097774 sshd[1745]: Connection closed by 10.0.0.1 port 34528 Mar 6 02:37:47.098152 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:47.110468 systemd[1]: sshd@4-10.0.0.110:22-10.0.0.1:34528.service: Deactivated successfully. Mar 6 02:37:47.113572 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 02:37:47.114805 systemd-logind[1568]: Session 5 logged out. Waiting for processes to exit. 
Mar 6 02:37:47.119686 systemd[1]: Started sshd@5-10.0.0.110:22-10.0.0.1:34536.service - OpenSSH per-connection server daemon (10.0.0.1:34536). Mar 6 02:37:47.120443 systemd-logind[1568]: Removed session 5. Mar 6 02:37:47.205311 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 34536 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:37:47.207310 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:47.213519 systemd-logind[1568]: New session 6 of user core. Mar 6 02:37:47.222385 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 02:37:47.257438 sshd[1754]: Connection closed by 10.0.0.1 port 34536 Mar 6 02:37:47.258221 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:47.267173 systemd[1]: sshd@5-10.0.0.110:22-10.0.0.1:34536.service: Deactivated successfully. Mar 6 02:37:47.269170 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 02:37:47.270254 systemd-logind[1568]: Session 6 logged out. Waiting for processes to exit. Mar 6 02:37:47.273191 systemd[1]: Started sshd@6-10.0.0.110:22-10.0.0.1:34550.service - OpenSSH per-connection server daemon (10.0.0.1:34550). Mar 6 02:37:47.275077 systemd-logind[1568]: Removed session 6. Mar 6 02:37:47.346296 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 34550 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:37:47.348222 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:47.354839 systemd-logind[1568]: New session 7 of user core. Mar 6 02:37:47.362528 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 6 02:37:47.402161 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 02:37:47.402734 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 02:37:47.426910 sudo[1764]: pam_unix(sudo:session): session closed for user root Mar 6 02:37:47.430420 sshd[1763]: Connection closed by 10.0.0.1 port 34550 Mar 6 02:37:47.430908 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:47.444871 systemd[1]: sshd@6-10.0.0.110:22-10.0.0.1:34550.service: Deactivated successfully. Mar 6 02:37:47.446831 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 02:37:47.448023 systemd-logind[1568]: Session 7 logged out. Waiting for processes to exit. Mar 6 02:37:47.450938 systemd[1]: Started sshd@7-10.0.0.110:22-10.0.0.1:34558.service - OpenSSH per-connection server daemon (10.0.0.1:34558). Mar 6 02:37:47.452508 systemd-logind[1568]: Removed session 7. Mar 6 02:37:47.549732 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 34558 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:37:47.557576 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:47.568868 systemd-logind[1568]: New session 8 of user core. Mar 6 02:37:47.580679 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 6 02:37:47.609373 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 02:37:47.610095 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 02:37:47.620424 sudo[1775]: pam_unix(sudo:session): session closed for user root Mar 6 02:37:47.631931 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 6 02:37:47.632400 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 02:37:47.646104 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 6 02:37:47.712944 augenrules[1797]: No rules Mar 6 02:37:47.714755 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 02:37:47.715128 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 02:37:47.716273 sudo[1774]: pam_unix(sudo:session): session closed for user root Mar 6 02:37:47.717906 sshd[1773]: Connection closed by 10.0.0.1 port 34558 Mar 6 02:37:47.718363 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Mar 6 02:37:47.728509 systemd[1]: sshd@7-10.0.0.110:22-10.0.0.1:34558.service: Deactivated successfully. Mar 6 02:37:47.731874 systemd[1]: session-8.scope: Deactivated successfully. Mar 6 02:37:47.733043 systemd-logind[1568]: Session 8 logged out. Waiting for processes to exit. Mar 6 02:37:47.735676 systemd[1]: Started sshd@8-10.0.0.110:22-10.0.0.1:34564.service - OpenSSH per-connection server daemon (10.0.0.1:34564). Mar 6 02:37:47.737034 systemd-logind[1568]: Removed session 8. Mar 6 02:37:47.819770 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 34564 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:37:47.822269 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:37:47.833604 systemd-logind[1568]: New session 9 of user core. 
Mar 6 02:37:47.847176 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 6 02:37:47.862295 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 02:37:47.862709 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 02:37:48.047582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 02:37:48.050212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:37:49.616568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:37:49.640589 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 02:37:49.938786 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 6 02:37:49.963684 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 02:37:49.971055 kubelet[1837]: E0306 02:37:49.971005 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 02:37:49.976921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 02:37:49.977227 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 02:37:49.977694 systemd[1]: kubelet.service: Consumed 1.448s CPU time, 111.8M memory peak. 
Mar 6 02:37:51.339264 dockerd[1845]: time="2026-03-06T02:37:51.338938116Z" level=info msg="Starting up" Mar 6 02:37:51.340854 dockerd[1845]: time="2026-03-06T02:37:51.340815350Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 6 02:37:51.392632 dockerd[1845]: time="2026-03-06T02:37:51.392505362Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 6 02:37:51.519374 dockerd[1845]: time="2026-03-06T02:37:51.519106274Z" level=info msg="Loading containers: start." Mar 6 02:37:51.670104 kernel: Initializing XFRM netlink socket Mar 6 02:37:52.344475 systemd-networkd[1447]: docker0: Link UP Mar 6 02:37:52.352355 dockerd[1845]: time="2026-03-06T02:37:52.352211601Z" level=info msg="Loading containers: done." Mar 6 02:37:52.386544 dockerd[1845]: time="2026-03-06T02:37:52.386454861Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 02:37:52.386762 dockerd[1845]: time="2026-03-06T02:37:52.386637782Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 6 02:37:52.386992 dockerd[1845]: time="2026-03-06T02:37:52.386885154Z" level=info msg="Initializing buildkit" Mar 6 02:37:52.438179 dockerd[1845]: time="2026-03-06T02:37:52.438057374Z" level=info msg="Completed buildkit initialization" Mar 6 02:37:52.451396 dockerd[1845]: time="2026-03-06T02:37:52.451268388Z" level=info msg="Daemon has completed initialization" Mar 6 02:37:52.452118 dockerd[1845]: time="2026-03-06T02:37:52.451444251Z" level=info msg="API listen on /run/docker.sock" Mar 6 02:37:52.451632 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 6 02:37:53.672627 containerd[1585]: time="2026-03-06T02:37:53.672539206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 6 02:37:54.249253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518098580.mount: Deactivated successfully. Mar 6 02:37:56.566642 containerd[1585]: time="2026-03-06T02:37:56.566512644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:37:56.567350 containerd[1585]: time="2026-03-06T02:37:56.567180229Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 6 02:37:56.568725 containerd[1585]: time="2026-03-06T02:37:56.568616883Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:37:56.572370 containerd[1585]: time="2026-03-06T02:37:56.572266699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:37:56.573336 containerd[1585]: time="2026-03-06T02:37:56.573295591Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.900690872s" Mar 6 02:37:56.573336 containerd[1585]: time="2026-03-06T02:37:56.573332800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 6 02:37:56.577047 containerd[1585]: time="2026-03-06T02:37:56.576722759Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 6 02:37:58.829403 containerd[1585]: time="2026-03-06T02:37:58.829331286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:37:58.830767 containerd[1585]: time="2026-03-06T02:37:58.830727074Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 6 02:37:58.832190 containerd[1585]: time="2026-03-06T02:37:58.832110699Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:37:58.878522 containerd[1585]: time="2026-03-06T02:37:58.878387626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:37:58.880350 containerd[1585]: time="2026-03-06T02:37:58.880229159Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.303433334s" Mar 6 02:37:58.880350 containerd[1585]: time="2026-03-06T02:37:58.880299651Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 6 02:37:58.882343 containerd[1585]: time="2026-03-06T02:37:58.882091295Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 6 02:38:00.047839 systemd[1]: kubelet.service: Scheduled 
restart job, restart counter is at 2. Mar 6 02:38:00.052130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:38:00.580152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:38:00.736544 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 02:38:00.973036 kubelet[2142]: E0306 02:38:00.972077 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 02:38:00.977376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 02:38:00.977920 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 02:38:00.978934 systemd[1]: kubelet.service: Consumed 743ms CPU time, 108.5M memory peak. 
Mar 6 02:38:01.277909 containerd[1585]: time="2026-03-06T02:38:01.277638761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:38:01.279035 containerd[1585]: time="2026-03-06T02:38:01.278978983Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 6 02:38:01.280235 containerd[1585]: time="2026-03-06T02:38:01.280173002Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:38:01.284868 containerd[1585]: time="2026-03-06T02:38:01.284786222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:38:01.286385 containerd[1585]: time="2026-03-06T02:38:01.286289276Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 2.404167324s" Mar 6 02:38:01.286385 containerd[1585]: time="2026-03-06T02:38:01.286357213Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 6 02:38:01.288126 containerd[1585]: time="2026-03-06T02:38:01.288102341Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 6 02:38:03.297067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223664371.mount: Deactivated successfully. 
Mar 6 02:38:04.386478 containerd[1585]: time="2026-03-06T02:38:04.386296038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:38:04.387559 containerd[1585]: time="2026-03-06T02:38:04.387472517Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 6 02:38:04.389936 containerd[1585]: time="2026-03-06T02:38:04.389861279Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:38:04.394007 containerd[1585]: time="2026-03-06T02:38:04.393915177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:38:04.395496 containerd[1585]: time="2026-03-06T02:38:04.395380983Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 3.107246853s"
Mar 6 02:38:04.395496 containerd[1585]: time="2026-03-06T02:38:04.395469527Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 6 02:38:04.397285 containerd[1585]: time="2026-03-06T02:38:04.396839508Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 6 02:38:05.000376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534056023.mount: Deactivated successfully.
Mar 6 02:38:06.909016 containerd[1585]: time="2026-03-06T02:38:06.908878789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:38:06.910031 containerd[1585]: time="2026-03-06T02:38:06.909900451Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 6 02:38:06.911825 containerd[1585]: time="2026-03-06T02:38:06.911707018Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:38:06.915611 containerd[1585]: time="2026-03-06T02:38:06.915495632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:38:06.916695 containerd[1585]: time="2026-03-06T02:38:06.916554895Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.519666437s"
Mar 6 02:38:06.916695 containerd[1585]: time="2026-03-06T02:38:06.916668065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 6 02:38:06.918406 containerd[1585]: time="2026-03-06T02:38:06.918345964Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 6 02:38:07.308942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1213698593.mount: Deactivated successfully.
Mar 6 02:38:07.317516 containerd[1585]: time="2026-03-06T02:38:07.317371670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 02:38:07.318397 containerd[1585]: time="2026-03-06T02:38:07.318284499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 6 02:38:07.320032 containerd[1585]: time="2026-03-06T02:38:07.319865376Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 02:38:07.322775 containerd[1585]: time="2026-03-06T02:38:07.322676052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 02:38:07.323794 containerd[1585]: time="2026-03-06T02:38:07.323690278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 405.294541ms"
Mar 6 02:38:07.323794 containerd[1585]: time="2026-03-06T02:38:07.323740040Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 6 02:38:07.324717 containerd[1585]: time="2026-03-06T02:38:07.324589065Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 6 02:38:07.762037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195514624.mount: Deactivated successfully.
Mar 6 02:38:11.048540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 6 02:38:11.052255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:38:11.660407 containerd[1585]: time="2026-03-06T02:38:11.660316818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:38:11.661488 containerd[1585]: time="2026-03-06T02:38:11.661331820Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 6 02:38:11.663001 containerd[1585]: time="2026-03-06T02:38:11.662884571Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:38:11.671267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:38:11.672242 containerd[1585]: time="2026-03-06T02:38:11.672214109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 02:38:11.675350 containerd[1585]: time="2026-03-06T02:38:11.675109256Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 4.35039502s"
Mar 6 02:38:11.675350 containerd[1585]: time="2026-03-06T02:38:11.675228688Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 6 02:38:11.695522 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 02:38:11.824744 kubelet[2287]: E0306 02:38:11.824493 2287 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 02:38:11.843902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 02:38:11.844194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 02:38:11.844681 systemd[1]: kubelet.service: Consumed 702ms CPU time, 113.1M memory peak.
Mar 6 02:38:15.814609 update_engine[1574]: I20260306 02:38:15.814247 1574 update_attempter.cc:509] Updating boot flags...
Mar 6 02:38:16.587428 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:38:16.587753 systemd[1]: kubelet.service: Consumed 702ms CPU time, 113.1M memory peak.
Mar 6 02:38:16.590395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:38:16.621019 systemd[1]: Reload requested from client PID 2343 ('systemctl') (unit session-9.scope)...
Mar 6 02:38:16.621051 systemd[1]: Reloading...
Mar 6 02:38:16.755068 zram_generator::config[2392]: No configuration found.
Mar 6 02:38:16.997318 systemd[1]: Reloading finished in 375 ms.
Mar 6 02:38:17.073524 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 6 02:38:17.073713 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 6 02:38:17.074205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:38:17.074267 systemd[1]: kubelet.service: Consumed 172ms CPU time, 98.2M memory peak.
Mar 6 02:38:17.076102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 02:38:17.250173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 02:38:17.266414 (kubelet)[2434]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 02:38:17.330009 kubelet[2434]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 02:38:17.330922 kubelet[2434]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 6 02:38:17.330922 kubelet[2434]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 02:38:17.330922 kubelet[2434]: I0306 02:38:17.330231 2434 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 6 02:38:17.484785 kubelet[2434]: I0306 02:38:17.484650 2434 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 6 02:38:17.484785 kubelet[2434]: I0306 02:38:17.484751 2434 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 02:38:17.485371 kubelet[2434]: I0306 02:38:17.485299 2434 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 6 02:38:17.537640 kubelet[2434]: E0306 02:38:17.537256 2434 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 6 02:38:17.540328 kubelet[2434]: I0306 02:38:17.540210 2434 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 02:38:17.560498 kubelet[2434]: I0306 02:38:17.560398 2434 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 6 02:38:17.572769 kubelet[2434]: I0306 02:38:17.572678 2434 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 6 02:38:17.573518 kubelet[2434]: I0306 02:38:17.573456 2434 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 6 02:38:17.573783 kubelet[2434]: I0306 02:38:17.573503 2434 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 02:38:17.574125 kubelet[2434]: I0306 02:38:17.573828 2434 topology_manager.go:138] "Creating topology manager with none policy"
Mar 6 02:38:17.574125 kubelet[2434]: I0306 02:38:17.573838 2434 container_manager_linux.go:303] "Creating device plugin manager"
Mar 6 02:38:17.574249 kubelet[2434]: I0306 02:38:17.574201 2434 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 02:38:17.580327 kubelet[2434]: I0306 02:38:17.580240 2434 kubelet.go:480] "Attempting to sync node with API server"
Mar 6 02:38:17.580327 kubelet[2434]: I0306 02:38:17.580329 2434 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 02:38:17.580630 kubelet[2434]: I0306 02:38:17.580516 2434 kubelet.go:386] "Adding apiserver pod source"
Mar 6 02:38:17.580772 kubelet[2434]: I0306 02:38:17.580674 2434 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 02:38:17.585513 kubelet[2434]: I0306 02:38:17.585443 2434 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 6 02:38:17.586891 kubelet[2434]: E0306 02:38:17.586211 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 02:38:17.586891 kubelet[2434]: E0306 02:38:17.586211 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 6 02:38:17.586891 kubelet[2434]: I0306 02:38:17.586360 2434 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 02:38:17.587698 kubelet[2434]: W0306 02:38:17.587616 2434 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 6 02:38:17.594719 kubelet[2434]: I0306 02:38:17.594669 2434 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 6 02:38:17.594827 kubelet[2434]: I0306 02:38:17.594800 2434 server.go:1289] "Started kubelet"
Mar 6 02:38:17.595795 kubelet[2434]: I0306 02:38:17.595044 2434 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 02:38:17.595795 kubelet[2434]: I0306 02:38:17.595617 2434 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 02:38:17.595923 kubelet[2434]: I0306 02:38:17.595782 2434 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 02:38:17.597298 kubelet[2434]: I0306 02:38:17.597280 2434 server.go:317] "Adding debug handlers to kubelet server"
Mar 6 02:38:17.597728 kubelet[2434]: I0306 02:38:17.597665 2434 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 6 02:38:17.598667 kubelet[2434]: I0306 02:38:17.598649 2434 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 02:38:17.599222 kubelet[2434]: E0306 02:38:17.598202 2434 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a2019967c6171 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 02:38:17.594716529 +0000 UTC m=+0.322864712,LastTimestamp:2026-03-06 02:38:17.594716529 +0000 UTC m=+0.322864712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 6 02:38:17.600361 kubelet[2434]: E0306 02:38:17.600310 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 6 02:38:17.600468 kubelet[2434]: I0306 02:38:17.600422 2434 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 6 02:38:17.600785 kubelet[2434]: I0306 02:38:17.600733 2434 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 6 02:38:17.601016 kubelet[2434]: I0306 02:38:17.600921 2434 reconciler.go:26] "Reconciler: start to sync state"
Mar 6 02:38:17.601525 kubelet[2434]: E0306 02:38:17.601504 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 6 02:38:17.603369 kubelet[2434]: E0306 02:38:17.602584 2434 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 02:38:17.603369 kubelet[2434]: E0306 02:38:17.602653 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="200ms"
Mar 6 02:38:17.603613 kubelet[2434]: I0306 02:38:17.603591 2434 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 02:38:17.604853 kubelet[2434]: I0306 02:38:17.604825 2434 factory.go:223] Registration of the containerd container factory successfully
Mar 6 02:38:17.604853 kubelet[2434]: I0306 02:38:17.604853 2434 factory.go:223] Registration of the systemd container factory successfully
Mar 6 02:38:17.609328 kubelet[2434]: I0306 02:38:17.609234 2434 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 6 02:38:17.623061 kubelet[2434]: I0306 02:38:17.622882 2434 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 6 02:38:17.623061 kubelet[2434]: I0306 02:38:17.622899 2434 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 6 02:38:17.623061 kubelet[2434]: I0306 02:38:17.622914 2434 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 02:38:17.687185 kubelet[2434]: I0306 02:38:17.687123 2434 policy_none.go:49] "None policy: Start"
Mar 6 02:38:17.687301 kubelet[2434]: I0306 02:38:17.687222 2434 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 6 02:38:17.687301 kubelet[2434]: I0306 02:38:17.687286 2434 state_mem.go:35] "Initializing new in-memory state store"
Mar 6 02:38:17.694924 kubelet[2434]: I0306 02:38:17.694892 2434 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 6 02:38:17.695543 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 6 02:38:17.696123 kubelet[2434]: I0306 02:38:17.696062 2434 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 6 02:38:17.696123 kubelet[2434]: I0306 02:38:17.696109 2434 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 6 02:38:17.696195 kubelet[2434]: I0306 02:38:17.696143 2434 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 6 02:38:17.696195 kubelet[2434]: E0306 02:38:17.696188 2434 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 02:38:17.696838 kubelet[2434]: E0306 02:38:17.696680 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 6 02:38:17.700478 kubelet[2434]: E0306 02:38:17.700441 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 6 02:38:17.707127 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 6 02:38:17.710856 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 6 02:38:17.730261 kubelet[2434]: E0306 02:38:17.730182 2434 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 6 02:38:17.730674 kubelet[2434]: I0306 02:38:17.730451 2434 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 6 02:38:17.730674 kubelet[2434]: I0306 02:38:17.730477 2434 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 6 02:38:17.730916 kubelet[2434]: I0306 02:38:17.730867 2434 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 6 02:38:17.732254 kubelet[2434]: E0306 02:38:17.732237 2434 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 6 02:38:17.732441 kubelet[2434]: E0306 02:38:17.732425 2434 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 6 02:38:17.803484 kubelet[2434]: E0306 02:38:17.803375 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="400ms"
Mar 6 02:38:17.831806 kubelet[2434]: I0306 02:38:17.831402 2434 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 02:38:17.831806 kubelet[2434]: E0306 02:38:17.831782 2434 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
Mar 6 02:38:17.837160 systemd[1]: Created slice kubepods-burstable-pod16481bc2bf787289a5f9718cd6e416e6.slice - libcontainer container kubepods-burstable-pod16481bc2bf787289a5f9718cd6e416e6.slice.
Mar 6 02:38:17.850154 kubelet[2434]: E0306 02:38:17.850071 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 02:38:17.852764 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice.
Mar 6 02:38:17.868845 kubelet[2434]: E0306 02:38:17.868781 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 02:38:17.872247 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice.
Mar 6 02:38:17.874648 kubelet[2434]: E0306 02:38:17.874614 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 02:38:17.903324 kubelet[2434]: I0306 02:38:17.903249 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16481bc2bf787289a5f9718cd6e416e6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"16481bc2bf787289a5f9718cd6e416e6\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 02:38:17.903324 kubelet[2434]: I0306 02:38:17.903292 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16481bc2bf787289a5f9718cd6e416e6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"16481bc2bf787289a5f9718cd6e416e6\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 02:38:17.903324 kubelet[2434]: I0306 02:38:17.903308 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16481bc2bf787289a5f9718cd6e416e6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"16481bc2bf787289a5f9718cd6e416e6\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 02:38:17.903324 kubelet[2434]: I0306 02:38:17.903322 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 02:38:17.903503 kubelet[2434]: I0306 02:38:17.903384 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 02:38:17.903503 kubelet[2434]: I0306 02:38:17.903401 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 02:38:17.903503 kubelet[2434]: I0306 02:38:17.903415 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 02:38:17.903503 kubelet[2434]: I0306 02:38:17.903434 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 02:38:17.903503 kubelet[2434]: I0306 02:38:17.903478 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 6 02:38:18.034438 kubelet[2434]: I0306 02:38:18.034406 2434 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 02:38:18.034808 kubelet[2434]: E0306 02:38:18.034738 2434 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost"
Mar 6 02:38:18.142267 kubelet[2434]: E0306 02:38:18.142021 2434 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a2019967c6171 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 02:38:17.594716529 +0000 UTC m=+0.322864712,LastTimestamp:2026-03-06 02:38:17.594716529 +0000 UTC m=+0.322864712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 6 02:38:18.151587 kubelet[2434]: E0306 02:38:18.151489 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:38:18.152803 containerd[1585]: time="2026-03-06T02:38:18.152719410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:16481bc2bf787289a5f9718cd6e416e6,Namespace:kube-system,Attempt:0,}"
Mar 6 02:38:18.170241 kubelet[2434]: E0306 02:38:18.170222 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:38:18.170911 containerd[1585]: time="2026-03-06T02:38:18.170674122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}"
Mar 6 02:38:18.175365 kubelet[2434]: E0306 02:38:18.175316 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:38:18.175821 containerd[1585]: time="2026-03-06T02:38:18.175763884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}"
Mar 6 02:38:18.193012 containerd[1585]: time="2026-03-06T02:38:18.192277400Z" level=info msg="connecting to shim b4ce465595f8895624bdd724aa9bf55cc8b19230c0d668978e350268f6c1e920" address="unix:///run/containerd/s/a8b06fd8ccc573158960c72f15a55df00b8f5834640ec3687477ca8f53725c8e" namespace=k8s.io protocol=ttrpc version=3
Mar 6 02:38:18.205036 kubelet[2434]: E0306 02:38:18.204942 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="800ms"
Mar 6 02:38:18.229721 containerd[1585]: time="2026-03-06T02:38:18.229626844Z" level=info msg="connecting to shim 052e23e4f7bee0cf495add3887be804af91f89a5a3d162361762654177154f24" address="unix:///run/containerd/s/efae7b547f516690dbc7595ab8d6c474c7ec85ac6a7e326de6b5a4eb0728f9f3" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:38:18.229844 containerd[1585]: time="2026-03-06T02:38:18.229659182Z" level=info msg="connecting to shim c09eb847970601bbbc806c400d304be8e3a32982816d7283052c358b3517b318" address="unix:///run/containerd/s/8f274b6cd1ada61386384a79c2f8b36b0572fb273619f7b49966bc3a29cb8cf7" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:38:18.403625 systemd[1]: Started cri-containerd-b4ce465595f8895624bdd724aa9bf55cc8b19230c0d668978e350268f6c1e920.scope - libcontainer container b4ce465595f8895624bdd724aa9bf55cc8b19230c0d668978e350268f6c1e920. Mar 6 02:38:18.437420 kubelet[2434]: I0306 02:38:18.436907 2434 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:38:18.437420 kubelet[2434]: E0306 02:38:18.437372 2434 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Mar 6 02:38:18.467142 systemd[1]: Started cri-containerd-052e23e4f7bee0cf495add3887be804af91f89a5a3d162361762654177154f24.scope - libcontainer container 052e23e4f7bee0cf495add3887be804af91f89a5a3d162361762654177154f24. Mar 6 02:38:18.472121 systemd[1]: Started cri-containerd-c09eb847970601bbbc806c400d304be8e3a32982816d7283052c358b3517b318.scope - libcontainer container c09eb847970601bbbc806c400d304be8e3a32982816d7283052c358b3517b318. 
Mar 6 02:38:18.532008 kubelet[2434]: E0306 02:38:18.531737 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 02:38:18.535363 kubelet[2434]: E0306 02:38:18.535306 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 02:38:18.559893 containerd[1585]: time="2026-03-06T02:38:18.559807062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:16481bc2bf787289a5f9718cd6e416e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4ce465595f8895624bdd724aa9bf55cc8b19230c0d668978e350268f6c1e920\"" Mar 6 02:38:18.561635 kubelet[2434]: E0306 02:38:18.561537 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:18.571128 containerd[1585]: time="2026-03-06T02:38:18.571057077Z" level=info msg="CreateContainer within sandbox \"b4ce465595f8895624bdd724aa9bf55cc8b19230c0d668978e350268f6c1e920\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 02:38:18.576488 kubelet[2434]: E0306 02:38:18.576466 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 
02:38:18.578610 containerd[1585]: time="2026-03-06T02:38:18.578511378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"052e23e4f7bee0cf495add3887be804af91f89a5a3d162361762654177154f24\"" Mar 6 02:38:18.579194 kubelet[2434]: E0306 02:38:18.579134 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:18.584306 containerd[1585]: time="2026-03-06T02:38:18.584257466Z" level=info msg="CreateContainer within sandbox \"052e23e4f7bee0cf495add3887be804af91f89a5a3d162361762654177154f24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 02:38:18.592606 containerd[1585]: time="2026-03-06T02:38:18.592432077Z" level=info msg="Container ed8c63033b3739b7535dbab6ed8653ac1a4171d6750fe2508216db120a0bb87c: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:38:18.592914 containerd[1585]: time="2026-03-06T02:38:18.592831540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"c09eb847970601bbbc806c400d304be8e3a32982816d7283052c358b3517b318\"" Mar 6 02:38:18.594005 kubelet[2434]: E0306 02:38:18.593792 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:18.599426 containerd[1585]: time="2026-03-06T02:38:18.599390892Z" level=info msg="CreateContainer within sandbox \"c09eb847970601bbbc806c400d304be8e3a32982816d7283052c358b3517b318\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 02:38:18.602380 containerd[1585]: time="2026-03-06T02:38:18.602343437Z" level=info msg="Container 
8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:38:18.607313 containerd[1585]: time="2026-03-06T02:38:18.607209079Z" level=info msg="CreateContainer within sandbox \"b4ce465595f8895624bdd724aa9bf55cc8b19230c0d668978e350268f6c1e920\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ed8c63033b3739b7535dbab6ed8653ac1a4171d6750fe2508216db120a0bb87c\"" Mar 6 02:38:18.609803 containerd[1585]: time="2026-03-06T02:38:18.609737768Z" level=info msg="StartContainer for \"ed8c63033b3739b7535dbab6ed8653ac1a4171d6750fe2508216db120a0bb87c\"" Mar 6 02:38:18.611442 containerd[1585]: time="2026-03-06T02:38:18.611346603Z" level=info msg="connecting to shim ed8c63033b3739b7535dbab6ed8653ac1a4171d6750fe2508216db120a0bb87c" address="unix:///run/containerd/s/a8b06fd8ccc573158960c72f15a55df00b8f5834640ec3687477ca8f53725c8e" protocol=ttrpc version=3 Mar 6 02:38:18.613312 containerd[1585]: time="2026-03-06T02:38:18.613241010Z" level=info msg="CreateContainer within sandbox \"052e23e4f7bee0cf495add3887be804af91f89a5a3d162361762654177154f24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5\"" Mar 6 02:38:18.613977 containerd[1585]: time="2026-03-06T02:38:18.613798848Z" level=info msg="StartContainer for \"8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5\"" Mar 6 02:38:18.615480 containerd[1585]: time="2026-03-06T02:38:18.615432911Z" level=info msg="connecting to shim 8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5" address="unix:///run/containerd/s/efae7b547f516690dbc7595ab8d6c474c7ec85ac6a7e326de6b5a4eb0728f9f3" protocol=ttrpc version=3 Mar 6 02:38:18.616912 containerd[1585]: time="2026-03-06T02:38:18.616866571Z" level=info msg="Container c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6: CDI devices from CRI Config.CDIDevices: []" Mar 6 
02:38:18.626068 containerd[1585]: time="2026-03-06T02:38:18.625667937Z" level=info msg="CreateContainer within sandbox \"c09eb847970601bbbc806c400d304be8e3a32982816d7283052c358b3517b318\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6\"" Mar 6 02:38:18.627767 containerd[1585]: time="2026-03-06T02:38:18.627487235Z" level=info msg="StartContainer for \"c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6\"" Mar 6 02:38:18.629037 containerd[1585]: time="2026-03-06T02:38:18.628757279Z" level=info msg="connecting to shim c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6" address="unix:///run/containerd/s/8f274b6cd1ada61386384a79c2f8b36b0572fb273619f7b49966bc3a29cb8cf7" protocol=ttrpc version=3 Mar 6 02:38:18.642169 systemd[1]: Started cri-containerd-ed8c63033b3739b7535dbab6ed8653ac1a4171d6750fe2508216db120a0bb87c.scope - libcontainer container ed8c63033b3739b7535dbab6ed8653ac1a4171d6750fe2508216db120a0bb87c. Mar 6 02:38:18.648148 systemd[1]: Started cri-containerd-8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5.scope - libcontainer container 8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5. Mar 6 02:38:18.661114 systemd[1]: Started cri-containerd-c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6.scope - libcontainer container c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6. 
Mar 6 02:38:18.697035 kubelet[2434]: E0306 02:38:18.696887 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 02:38:18.806456 containerd[1585]: time="2026-03-06T02:38:18.806385249Z" level=info msg="StartContainer for \"ed8c63033b3739b7535dbab6ed8653ac1a4171d6750fe2508216db120a0bb87c\" returns successfully" Mar 6 02:38:18.819365 containerd[1585]: time="2026-03-06T02:38:18.819251334Z" level=info msg="StartContainer for \"c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6\" returns successfully" Mar 6 02:38:18.819892 containerd[1585]: time="2026-03-06T02:38:18.819761413Z" level=info msg="StartContainer for \"8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5\" returns successfully" Mar 6 02:38:19.241193 kubelet[2434]: I0306 02:38:19.241137 2434 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:38:19.759237 kubelet[2434]: E0306 02:38:19.759168 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:38:19.759807 kubelet[2434]: E0306 02:38:19.759345 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:19.762732 kubelet[2434]: E0306 02:38:19.762670 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:38:19.762998 kubelet[2434]: E0306 02:38:19.762896 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:19.771121 kubelet[2434]: E0306 02:38:19.771049 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:38:19.771371 kubelet[2434]: E0306 02:38:19.771325 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:20.302511 kubelet[2434]: E0306 02:38:20.302397 2434 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 6 02:38:20.398666 kubelet[2434]: I0306 02:38:20.398531 2434 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 02:38:20.398666 kubelet[2434]: E0306 02:38:20.398601 2434 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 6 02:38:20.414301 kubelet[2434]: E0306 02:38:20.414170 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:38:20.515522 kubelet[2434]: E0306 02:38:20.515415 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:38:20.616872 kubelet[2434]: E0306 02:38:20.616647 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:38:20.717477 kubelet[2434]: E0306 02:38:20.717383 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:38:20.767693 kubelet[2434]: E0306 02:38:20.767532 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:38:20.768200 kubelet[2434]: E0306 
02:38:20.767729 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:38:20.768200 kubelet[2434]: E0306 02:38:20.767763 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:20.768200 kubelet[2434]: E0306 02:38:20.767866 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:20.768200 kubelet[2434]: E0306 02:38:20.768052 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 02:38:20.768200 kubelet[2434]: E0306 02:38:20.768195 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:20.817735 kubelet[2434]: E0306 02:38:20.817592 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:38:20.918509 kubelet[2434]: E0306 02:38:20.918292 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:38:21.019690 kubelet[2434]: E0306 02:38:21.019463 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:38:21.120060 kubelet[2434]: E0306 02:38:21.119892 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 02:38:21.203167 kubelet[2434]: I0306 02:38:21.202922 2434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 02:38:21.213523 kubelet[2434]: E0306 
02:38:21.213450 2434 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 6 02:38:21.213523 kubelet[2434]: I0306 02:38:21.213473 2434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 02:38:21.215629 kubelet[2434]: E0306 02:38:21.215511 2434 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 6 02:38:21.215629 kubelet[2434]: I0306 02:38:21.215615 2434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 02:38:21.217367 kubelet[2434]: E0306 02:38:21.217299 2434 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 6 02:38:21.582520 kubelet[2434]: I0306 02:38:21.582444 2434 apiserver.go:52] "Watching apiserver" Mar 6 02:38:21.602316 kubelet[2434]: I0306 02:38:21.602222 2434 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 02:38:21.770151 kubelet[2434]: I0306 02:38:21.769764 2434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 02:38:21.771147 kubelet[2434]: I0306 02:38:21.770904 2434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 02:38:21.786508 kubelet[2434]: E0306 02:38:21.786413 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:21.792868 kubelet[2434]: E0306 02:38:21.792774 2434 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:22.772631 kubelet[2434]: E0306 02:38:22.772352 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:22.772631 kubelet[2434]: E0306 02:38:22.772501 2434 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:23.226398 systemd[1]: Reload requested from client PID 2719 ('systemctl') (unit session-9.scope)... Mar 6 02:38:23.226467 systemd[1]: Reloading... Mar 6 02:38:23.338120 zram_generator::config[2758]: No configuration found. Mar 6 02:38:23.644170 systemd[1]: Reloading finished in 416 ms. Mar 6 02:38:23.709765 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:38:23.725470 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 02:38:23.726050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:38:23.726112 systemd[1]: kubelet.service: Consumed 1.027s CPU time, 134.4M memory peak. Mar 6 02:38:23.730425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 02:38:23.995753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 02:38:24.009524 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 02:38:24.117074 kubelet[2807]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 02:38:24.117074 kubelet[2807]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 02:38:24.117074 kubelet[2807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 02:38:24.117074 kubelet[2807]: I0306 02:38:24.116367 2807 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 02:38:24.136494 kubelet[2807]: I0306 02:38:24.136202 2807 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 02:38:24.136494 kubelet[2807]: I0306 02:38:24.136280 2807 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 02:38:24.136494 kubelet[2807]: I0306 02:38:24.136503 2807 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 02:38:24.138354 kubelet[2807]: I0306 02:38:24.138215 2807 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 02:38:24.143071 kubelet[2807]: I0306 02:38:24.142530 2807 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 02:38:24.160245 kubelet[2807]: I0306 02:38:24.160212 2807 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 6 02:38:24.190358 kubelet[2807]: I0306 02:38:24.190245 2807 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 02:38:24.191438 kubelet[2807]: I0306 02:38:24.191301 2807 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 02:38:24.191522 kubelet[2807]: I0306 02:38:24.191384 2807 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 02:38:24.191743 kubelet[2807]: I0306 02:38:24.191531 2807 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 02:38:24.191743 
kubelet[2807]: I0306 02:38:24.191602 2807 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 02:38:24.191743 kubelet[2807]: I0306 02:38:24.191682 2807 state_mem.go:36] "Initialized new in-memory state store" Mar 6 02:38:24.192886 kubelet[2807]: I0306 02:38:24.192229 2807 kubelet.go:480] "Attempting to sync node with API server" Mar 6 02:38:24.192886 kubelet[2807]: I0306 02:38:24.192254 2807 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 02:38:24.192886 kubelet[2807]: I0306 02:38:24.192288 2807 kubelet.go:386] "Adding apiserver pod source" Mar 6 02:38:24.192886 kubelet[2807]: I0306 02:38:24.192315 2807 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 02:38:24.206163 kubelet[2807]: I0306 02:38:24.206125 2807 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 6 02:38:24.209098 kubelet[2807]: I0306 02:38:24.206835 2807 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 02:38:24.223763 kubelet[2807]: I0306 02:38:24.223714 2807 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 02:38:24.223827 kubelet[2807]: I0306 02:38:24.223811 2807 server.go:1289] "Started kubelet" Mar 6 02:38:24.226409 kubelet[2807]: I0306 02:38:24.224160 2807 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 02:38:24.229092 kubelet[2807]: I0306 02:38:24.228640 2807 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 02:38:24.230116 kubelet[2807]: I0306 02:38:24.225238 2807 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 02:38:24.236064 kubelet[2807]: I0306 02:38:24.235366 2807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 02:38:24.236064 kubelet[2807]: I0306 
02:38:24.235697 2807 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 02:38:24.237446 kubelet[2807]: I0306 02:38:24.237338 2807 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 02:38:24.243305 kubelet[2807]: I0306 02:38:24.242698 2807 factory.go:223] Registration of the systemd container factory successfully Mar 6 02:38:24.243305 kubelet[2807]: I0306 02:38:24.242748 2807 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 02:38:24.243305 kubelet[2807]: I0306 02:38:24.243134 2807 reconciler.go:26] "Reconciler: start to sync state" Mar 6 02:38:24.243305 kubelet[2807]: I0306 02:38:24.243166 2807 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 02:38:24.254303 kubelet[2807]: I0306 02:38:24.254106 2807 server.go:317] "Adding debug handlers to kubelet server" Mar 6 02:38:24.254907 kubelet[2807]: E0306 02:38:24.254800 2807 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 02:38:24.258473 sudo[2826]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 6 02:38:24.260390 sudo[2826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 6 02:38:24.262365 kubelet[2807]: I0306 02:38:24.262220 2807 factory.go:223] Registration of the containerd container factory successfully Mar 6 02:38:24.341182 kubelet[2807]: I0306 02:38:24.340375 2807 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 02:38:24.352135 kubelet[2807]: I0306 02:38:24.351298 2807 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 6 02:38:24.352135 kubelet[2807]: I0306 02:38:24.351332 2807 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 02:38:24.352135 kubelet[2807]: I0306 02:38:24.351358 2807 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 02:38:24.352135 kubelet[2807]: I0306 02:38:24.351367 2807 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 02:38:24.352135 kubelet[2807]: E0306 02:38:24.351415 2807 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 02:38:24.394876 kubelet[2807]: I0306 02:38:24.394715 2807 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 02:38:24.395132 kubelet[2807]: I0306 02:38:24.394931 2807 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 02:38:24.395132 kubelet[2807]: I0306 02:38:24.395125 2807 state_mem.go:36] "Initialized new in-memory state store" Mar 6 02:38:24.395460 kubelet[2807]: I0306 02:38:24.395254 2807 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 02:38:24.395460 kubelet[2807]: I0306 02:38:24.395268 2807 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 02:38:24.395627 kubelet[2807]: I0306 02:38:24.395501 2807 policy_none.go:49] "None policy: Start" Mar 6 02:38:24.395627 kubelet[2807]: I0306 02:38:24.395528 2807 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 02:38:24.395627 kubelet[2807]: I0306 02:38:24.395621 2807 state_mem.go:35] "Initializing new in-memory state store" Mar 6 02:38:24.395744 kubelet[2807]: I0306 02:38:24.395729 2807 state_mem.go:75] "Updated machine memory state" Mar 6 02:38:24.407066 kubelet[2807]: E0306 02:38:24.406878 2807 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 02:38:24.407418 kubelet[2807]: I0306 02:38:24.407274 
2807 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 02:38:24.407418 kubelet[2807]: I0306 02:38:24.407361 2807 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 02:38:24.407850 kubelet[2807]: I0306 02:38:24.407717 2807 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 02:38:24.411477 kubelet[2807]: E0306 02:38:24.410775 2807 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 02:38:24.454113 kubelet[2807]: I0306 02:38:24.453804 2807 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 02:38:24.454599 kubelet[2807]: I0306 02:38:24.454404 2807 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 02:38:24.455023 kubelet[2807]: I0306 02:38:24.454881 2807 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 02:38:24.483384 kubelet[2807]: E0306 02:38:24.483266 2807 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 02:38:24.484936 kubelet[2807]: E0306 02:38:24.484816 2807 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 02:38:24.534772 kubelet[2807]: I0306 02:38:24.534494 2807 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 02:38:24.544432 kubelet[2807]: I0306 02:38:24.544302 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16481bc2bf787289a5f9718cd6e416e6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"16481bc2bf787289a5f9718cd6e416e6\") " 
pod="kube-system/kube-apiserver-localhost" Mar 6 02:38:24.544432 kubelet[2807]: I0306 02:38:24.544408 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16481bc2bf787289a5f9718cd6e416e6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"16481bc2bf787289a5f9718cd6e416e6\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:38:24.544692 kubelet[2807]: I0306 02:38:24.544444 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:38:24.544692 kubelet[2807]: I0306 02:38:24.544478 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:38:24.544692 kubelet[2807]: I0306 02:38:24.544504 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:38:24.544692 kubelet[2807]: I0306 02:38:24.544524 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16481bc2bf787289a5f9718cd6e416e6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"16481bc2bf787289a5f9718cd6e416e6\") " pod="kube-system/kube-apiserver-localhost" Mar 6 02:38:24.544692 kubelet[2807]: I0306 02:38:24.544635 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:38:24.544884 kubelet[2807]: I0306 02:38:24.544749 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 02:38:24.544884 kubelet[2807]: I0306 02:38:24.544784 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 02:38:24.552490 kubelet[2807]: I0306 02:38:24.552278 2807 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 6 02:38:24.552490 kubelet[2807]: I0306 02:38:24.552396 2807 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 02:38:24.772641 kubelet[2807]: E0306 02:38:24.772481 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:24.784446 kubelet[2807]: E0306 02:38:24.784178 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:24.786214 kubelet[2807]: E0306 02:38:24.785414 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:24.876229 sudo[2826]: pam_unix(sudo:session): session closed for user root Mar 6 02:38:25.194763 kubelet[2807]: I0306 02:38:25.194469 2807 apiserver.go:52] "Watching apiserver" Mar 6 02:38:25.244062 kubelet[2807]: I0306 02:38:25.243883 2807 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 02:38:25.390104 kubelet[2807]: I0306 02:38:25.389867 2807 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 02:38:25.390104 kubelet[2807]: E0306 02:38:25.389939 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:25.391198 kubelet[2807]: I0306 02:38:25.390855 2807 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 02:38:25.417108 kubelet[2807]: E0306 02:38:25.416915 2807 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 02:38:25.418239 kubelet[2807]: E0306 02:38:25.417605 2807 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 02:38:25.418375 kubelet[2807]: E0306 02:38:25.418318 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:25.421253 kubelet[2807]: E0306 02:38:25.420097 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:25.470327 kubelet[2807]: I0306 02:38:25.470188 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.470167921 podStartE2EDuration="4.470167921s" podCreationTimestamp="2026-03-06 02:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:38:25.444124281 +0000 UTC m=+1.422635297" watchObservedRunningTime="2026-03-06 02:38:25.470167921 +0000 UTC m=+1.448678928" Mar 6 02:38:25.472706 kubelet[2807]: I0306 02:38:25.472290 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.47228194 podStartE2EDuration="1.47228194s" podCreationTimestamp="2026-03-06 02:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:38:25.469709516 +0000 UTC m=+1.448220513" watchObservedRunningTime="2026-03-06 02:38:25.47228194 +0000 UTC m=+1.450792957" Mar 6 02:38:25.519624 kubelet[2807]: I0306 02:38:25.519393 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.519370523 podStartE2EDuration="4.519370523s" podCreationTimestamp="2026-03-06 02:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:38:25.498527262 +0000 UTC m=+1.477038289" watchObservedRunningTime="2026-03-06 02:38:25.519370523 +0000 UTC m=+1.497881540" Mar 6 02:38:26.400088 kubelet[2807]: E0306 02:38:26.399454 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 
02:38:26.401353 kubelet[2807]: E0306 02:38:26.399939 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:26.402209 kubelet[2807]: E0306 02:38:26.401178 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:28.174252 sudo[1810]: pam_unix(sudo:session): session closed for user root Mar 6 02:38:28.190226 sshd[1809]: Connection closed by 10.0.0.1 port 34564 Mar 6 02:38:28.192224 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Mar 6 02:38:28.209424 systemd[1]: sshd@8-10.0.0.110:22-10.0.0.1:34564.service: Deactivated successfully. Mar 6 02:38:28.218498 systemd[1]: session-9.scope: Deactivated successfully. Mar 6 02:38:28.224186 systemd[1]: session-9.scope: Consumed 10.489s CPU time, 278.6M memory peak. Mar 6 02:38:28.235293 systemd-logind[1568]: Session 9 logged out. Waiting for processes to exit. Mar 6 02:38:28.257812 systemd-logind[1568]: Removed session 9. Mar 6 02:38:28.606819 kubelet[2807]: I0306 02:38:28.602672 2807 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 02:38:28.616174 containerd[1585]: time="2026-03-06T02:38:28.616137662Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 6 02:38:28.619900 kubelet[2807]: I0306 02:38:28.617479 2807 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 02:38:29.829367 kubelet[2807]: I0306 02:38:29.828418 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cni-path\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.829367 kubelet[2807]: I0306 02:38:29.828782 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-etc-cni-netd\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.829367 kubelet[2807]: I0306 02:38:29.828812 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-host-proc-sys-net\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.829367 kubelet[2807]: I0306 02:38:29.828831 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9qc8\" (UniqueName: \"kubernetes.io/projected/04615d8d-d639-4265-8b38-27bf180e384c-kube-api-access-r9qc8\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.829367 kubelet[2807]: I0306 02:38:29.828854 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cilium-run\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " 
pod="kube-system/cilium-wg99k" Mar 6 02:38:29.829367 kubelet[2807]: I0306 02:38:29.828872 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-hostproc\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.831296 kubelet[2807]: I0306 02:38:29.828889 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-lib-modules\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.831296 kubelet[2807]: I0306 02:38:29.828909 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04615d8d-d639-4265-8b38-27bf180e384c-cilium-config-path\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.831296 kubelet[2807]: I0306 02:38:29.828930 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4b45bc89-23ac-4db1-93e2-1ab0f0beb05b-kube-proxy\") pod \"kube-proxy-qxrl6\" (UID: \"4b45bc89-23ac-4db1-93e2-1ab0f0beb05b\") " pod="kube-system/kube-proxy-qxrl6" Mar 6 02:38:29.831296 kubelet[2807]: I0306 02:38:29.829179 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cilium-cgroup\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.831296 kubelet[2807]: I0306 02:38:29.829206 2807 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04615d8d-d639-4265-8b38-27bf180e384c-clustermesh-secrets\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.831296 kubelet[2807]: I0306 02:38:29.829227 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b45bc89-23ac-4db1-93e2-1ab0f0beb05b-xtables-lock\") pod \"kube-proxy-qxrl6\" (UID: \"4b45bc89-23ac-4db1-93e2-1ab0f0beb05b\") " pod="kube-system/kube-proxy-qxrl6" Mar 6 02:38:29.831487 kubelet[2807]: I0306 02:38:29.829261 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87jgx\" (UniqueName: \"kubernetes.io/projected/4b45bc89-23ac-4db1-93e2-1ab0f0beb05b-kube-api-access-87jgx\") pod \"kube-proxy-qxrl6\" (UID: \"4b45bc89-23ac-4db1-93e2-1ab0f0beb05b\") " pod="kube-system/kube-proxy-qxrl6" Mar 6 02:38:29.831487 kubelet[2807]: I0306 02:38:29.829283 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-bpf-maps\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.831487 kubelet[2807]: I0306 02:38:29.829310 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-xtables-lock\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.831487 kubelet[2807]: I0306 02:38:29.829330 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-host-proc-sys-kernel\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.831487 kubelet[2807]: I0306 02:38:29.829359 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04615d8d-d639-4265-8b38-27bf180e384c-hubble-tls\") pod \"cilium-wg99k\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " pod="kube-system/cilium-wg99k" Mar 6 02:38:29.831487 kubelet[2807]: I0306 02:38:29.829387 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b45bc89-23ac-4db1-93e2-1ab0f0beb05b-lib-modules\") pod \"kube-proxy-qxrl6\" (UID: \"4b45bc89-23ac-4db1-93e2-1ab0f0beb05b\") " pod="kube-system/kube-proxy-qxrl6" Mar 6 02:38:29.878403 systemd[1]: Created slice kubepods-besteffort-pod4b45bc89_23ac_4db1_93e2_1ab0f0beb05b.slice - libcontainer container kubepods-besteffort-pod4b45bc89_23ac_4db1_93e2_1ab0f0beb05b.slice. Mar 6 02:38:29.928135 systemd[1]: Created slice kubepods-burstable-pod04615d8d_d639_4265_8b38_27bf180e384c.slice - libcontainer container kubepods-burstable-pod04615d8d_d639_4265_8b38_27bf180e384c.slice. Mar 6 02:38:29.994460 systemd[1]: Created slice kubepods-besteffort-pod26ca3e81_9f8d_4bee_808b_95f2420e0514.slice - libcontainer container kubepods-besteffort-pod26ca3e81_9f8d_4bee_808b_95f2420e0514.slice. 
Mar 6 02:38:30.035197 kubelet[2807]: I0306 02:38:30.034484 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26ca3e81-9f8d-4bee-808b-95f2420e0514-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wm4mc\" (UID: \"26ca3e81-9f8d-4bee-808b-95f2420e0514\") " pod="kube-system/cilium-operator-6c4d7847fc-wm4mc" Mar 6 02:38:30.035197 kubelet[2807]: I0306 02:38:30.034765 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8thdn\" (UniqueName: \"kubernetes.io/projected/26ca3e81-9f8d-4bee-808b-95f2420e0514-kube-api-access-8thdn\") pod \"cilium-operator-6c4d7847fc-wm4mc\" (UID: \"26ca3e81-9f8d-4bee-808b-95f2420e0514\") " pod="kube-system/cilium-operator-6c4d7847fc-wm4mc" Mar 6 02:38:30.138807 kubelet[2807]: E0306 02:38:30.138149 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:30.326425 kubelet[2807]: E0306 02:38:30.321688 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:30.326648 containerd[1585]: time="2026-03-06T02:38:30.323529320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wm4mc,Uid:26ca3e81-9f8d-4bee-808b-95f2420e0514,Namespace:kube-system,Attempt:0,}" Mar 6 02:38:30.431722 kubelet[2807]: E0306 02:38:30.431241 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:30.517093 kubelet[2807]: E0306 02:38:30.515311 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:30.517639 containerd[1585]: time="2026-03-06T02:38:30.517486564Z" level=info msg="connecting to shim 47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115" address="unix:///run/containerd/s/c9c64843c7712223719699d358911a8386ba0556da2650a3d34a0fc6c62ce0cd" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:38:30.518742 containerd[1585]: time="2026-03-06T02:38:30.518699526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxrl6,Uid:4b45bc89-23ac-4db1-93e2-1ab0f0beb05b,Namespace:kube-system,Attempt:0,}" Mar 6 02:38:30.562505 kubelet[2807]: E0306 02:38:30.562336 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:30.569327 containerd[1585]: time="2026-03-06T02:38:30.568395469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wg99k,Uid:04615d8d-d639-4265-8b38-27bf180e384c,Namespace:kube-system,Attempt:0,}" Mar 6 02:38:30.751451 containerd[1585]: time="2026-03-06T02:38:30.748943709Z" level=info msg="connecting to shim 8a8be9f429d1d95ebfe14a0451f36bca806a829703734d36efeb294330639943" address="unix:///run/containerd/s/0d2997ec4675fb082b45e4e20b23f63854ab120809ec5c96fbaf38575a44668e" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:38:30.926196 systemd[1]: Started cri-containerd-47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115.scope - libcontainer container 47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115. 
Mar 6 02:38:30.949925 containerd[1585]: time="2026-03-06T02:38:30.949763000Z" level=info msg="connecting to shim dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73" address="unix:///run/containerd/s/78faf9ba7f8c706d8ea5a94a2f499bae0ac815d0ebee46788c7e574f085f26b6" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:38:31.105871 systemd[1]: Started cri-containerd-8a8be9f429d1d95ebfe14a0451f36bca806a829703734d36efeb294330639943.scope - libcontainer container 8a8be9f429d1d95ebfe14a0451f36bca806a829703734d36efeb294330639943. Mar 6 02:38:31.365195 systemd[1]: Started cri-containerd-dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73.scope - libcontainer container dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73. Mar 6 02:38:31.457261 containerd[1585]: time="2026-03-06T02:38:31.453408230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wm4mc,Uid:26ca3e81-9f8d-4bee-808b-95f2420e0514,Namespace:kube-system,Attempt:0,} returns sandbox id \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\"" Mar 6 02:38:31.468502 kubelet[2807]: E0306 02:38:31.468470 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:31.481918 containerd[1585]: time="2026-03-06T02:38:31.481309248Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 6 02:38:31.484255 containerd[1585]: time="2026-03-06T02:38:31.482443684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxrl6,Uid:4b45bc89-23ac-4db1-93e2-1ab0f0beb05b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a8be9f429d1d95ebfe14a0451f36bca806a829703734d36efeb294330639943\"" Mar 6 02:38:31.489327 kubelet[2807]: E0306 02:38:31.488817 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:31.515151 containerd[1585]: time="2026-03-06T02:38:31.512346714Z" level=info msg="CreateContainer within sandbox \"8a8be9f429d1d95ebfe14a0451f36bca806a829703734d36efeb294330639943\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 02:38:31.595944 containerd[1585]: time="2026-03-06T02:38:31.592938827Z" level=info msg="Container c0d69b55a0e23f5005e4d9275fb6dc6d1a5a0a01074ce9a4d6d705929cc04dd4: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:38:31.705279 containerd[1585]: time="2026-03-06T02:38:31.700505694Z" level=info msg="CreateContainer within sandbox \"8a8be9f429d1d95ebfe14a0451f36bca806a829703734d36efeb294330639943\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c0d69b55a0e23f5005e4d9275fb6dc6d1a5a0a01074ce9a4d6d705929cc04dd4\"" Mar 6 02:38:31.705279 containerd[1585]: time="2026-03-06T02:38:31.705179928Z" level=info msg="StartContainer for \"c0d69b55a0e23f5005e4d9275fb6dc6d1a5a0a01074ce9a4d6d705929cc04dd4\"" Mar 6 02:38:31.709384 containerd[1585]: time="2026-03-06T02:38:31.707689347Z" level=info msg="connecting to shim c0d69b55a0e23f5005e4d9275fb6dc6d1a5a0a01074ce9a4d6d705929cc04dd4" address="unix:///run/containerd/s/0d2997ec4675fb082b45e4e20b23f63854ab120809ec5c96fbaf38575a44668e" protocol=ttrpc version=3 Mar 6 02:38:31.813454 containerd[1585]: time="2026-03-06T02:38:31.813411224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wg99k,Uid:04615d8d-d639-4265-8b38-27bf180e384c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\"" Mar 6 02:38:31.819812 kubelet[2807]: E0306 02:38:31.819783 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:31.845361 systemd[1]: Started 
cri-containerd-c0d69b55a0e23f5005e4d9275fb6dc6d1a5a0a01074ce9a4d6d705929cc04dd4.scope - libcontainer container c0d69b55a0e23f5005e4d9275fb6dc6d1a5a0a01074ce9a4d6d705929cc04dd4. Mar 6 02:38:32.237737 containerd[1585]: time="2026-03-06T02:38:32.237444584Z" level=info msg="StartContainer for \"c0d69b55a0e23f5005e4d9275fb6dc6d1a5a0a01074ce9a4d6d705929cc04dd4\" returns successfully" Mar 6 02:38:32.520437 kubelet[2807]: E0306 02:38:32.520202 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:32.674350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206156015.mount: Deactivated successfully. Mar 6 02:38:32.939450 kubelet[2807]: E0306 02:38:32.932486 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:33.093141 kubelet[2807]: I0306 02:38:33.090811 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qxrl6" podStartSLOduration=4.090789241 podStartE2EDuration="4.090789241s" podCreationTimestamp="2026-03-06 02:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:38:32.600523253 +0000 UTC m=+8.579034290" watchObservedRunningTime="2026-03-06 02:38:33.090789241 +0000 UTC m=+9.069300257" Mar 6 02:38:33.562168 kubelet[2807]: E0306 02:38:33.560280 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:35.572388 kubelet[2807]: E0306 02:38:35.571352 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 6 02:38:36.624293 containerd[1585]: time="2026-03-06T02:38:36.623736544Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:38:37.875315 containerd[1585]: time="2026-03-06T02:38:37.875196544Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 6 02:38:37.879438 containerd[1585]: time="2026-03-06T02:38:37.878699146Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:38:37.883506 containerd[1585]: time="2026-03-06T02:38:37.883372016Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.401920994s" Mar 6 02:38:37.883703 containerd[1585]: time="2026-03-06T02:38:37.883520814Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 6 02:38:37.889443 containerd[1585]: time="2026-03-06T02:38:37.889188185Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 6 02:38:37.899206 containerd[1585]: time="2026-03-06T02:38:37.898843835Z" level=info msg="CreateContainer within sandbox 
\"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 6 02:38:37.947264 containerd[1585]: time="2026-03-06T02:38:37.944836161Z" level=info msg="Container a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:38:37.974643 containerd[1585]: time="2026-03-06T02:38:37.974364936Z" level=info msg="CreateContainer within sandbox \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\"" Mar 6 02:38:37.978445 containerd[1585]: time="2026-03-06T02:38:37.976496976Z" level=info msg="StartContainer for \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\"" Mar 6 02:38:37.983842 containerd[1585]: time="2026-03-06T02:38:37.983277965Z" level=info msg="connecting to shim a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c" address="unix:///run/containerd/s/c9c64843c7712223719699d358911a8386ba0556da2650a3d34a0fc6c62ce0cd" protocol=ttrpc version=3 Mar 6 02:38:38.125909 systemd[1]: Started cri-containerd-a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c.scope - libcontainer container a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c. 
Mar 6 02:38:38.328864 containerd[1585]: time="2026-03-06T02:38:38.328822759Z" level=info msg="StartContainer for \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\" returns successfully" Mar 6 02:38:38.603915 kubelet[2807]: E0306 02:38:38.602839 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:39.621205 kubelet[2807]: E0306 02:38:39.619698 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:38:59.656432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901515612.mount: Deactivated successfully. Mar 6 02:39:11.026589 containerd[1585]: time="2026-03-06T02:39:11.025836182Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:39:11.029277 containerd[1585]: time="2026-03-06T02:39:11.029214218Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 6 02:39:11.033321 containerd[1585]: time="2026-03-06T02:39:11.032828084Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 02:39:11.035727 containerd[1585]: time="2026-03-06T02:39:11.034881867Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 33.145547318s" Mar 6 02:39:11.035727 containerd[1585]: time="2026-03-06T02:39:11.035544508Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 6 02:39:11.054184 containerd[1585]: time="2026-03-06T02:39:11.053421929Z" level=info msg="CreateContainer within sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 6 02:39:11.103848 containerd[1585]: time="2026-03-06T02:39:11.103517116Z" level=info msg="Container fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:39:11.106215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2558135620.mount: Deactivated successfully. 
Mar 6 02:39:11.135391 containerd[1585]: time="2026-03-06T02:39:11.134934894Z" level=info msg="CreateContainer within sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\"" Mar 6 02:39:11.137492 containerd[1585]: time="2026-03-06T02:39:11.137462510Z" level=info msg="StartContainer for \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\"" Mar 6 02:39:11.140925 containerd[1585]: time="2026-03-06T02:39:11.140591626Z" level=info msg="connecting to shim fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2" address="unix:///run/containerd/s/78faf9ba7f8c706d8ea5a94a2f499bae0ac815d0ebee46788c7e574f085f26b6" protocol=ttrpc version=3 Mar 6 02:39:11.193505 systemd[1]: Started cri-containerd-fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2.scope - libcontainer container fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2. Mar 6 02:39:11.419719 containerd[1585]: time="2026-03-06T02:39:11.419549236Z" level=info msg="StartContainer for \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\" returns successfully" Mar 6 02:39:11.575872 systemd[1]: cri-containerd-fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2.scope: Deactivated successfully. Mar 6 02:39:11.576791 systemd[1]: cri-containerd-fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2.scope: Consumed 244ms CPU time, 6.5M memory peak, 4K read from disk, 3.2M written to disk. 
Mar 6 02:39:11.717274 containerd[1585]: time="2026-03-06T02:39:11.716504625Z" level=info msg="received container exit event container_id:\"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\" id:\"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\" pid:3301 exited_at:{seconds:1772764751 nanos:584287830}" Mar 6 02:39:11.865236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2-rootfs.mount: Deactivated successfully. Mar 6 02:39:12.241214 kubelet[2807]: E0306 02:39:12.240400 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:12.258813 containerd[1585]: time="2026-03-06T02:39:12.258445179Z" level=info msg="CreateContainer within sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 6 02:39:12.301194 kubelet[2807]: I0306 02:39:12.298890 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wm4mc" podStartSLOduration=36.886885271 podStartE2EDuration="43.298873838s" podCreationTimestamp="2026-03-06 02:38:29 +0000 UTC" firstStartedPulling="2026-03-06 02:38:31.475444441 +0000 UTC m=+7.453955439" lastFinishedPulling="2026-03-06 02:38:37.887433009 +0000 UTC m=+13.865944006" observedRunningTime="2026-03-06 02:38:38.677302967 +0000 UTC m=+14.655813964" watchObservedRunningTime="2026-03-06 02:39:12.298873838 +0000 UTC m=+48.277384835" Mar 6 02:39:12.341445 containerd[1585]: time="2026-03-06T02:39:12.341203042Z" level=info msg="Container a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:39:12.341592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248905696.mount: Deactivated successfully. 
Mar 6 02:39:12.383185 containerd[1585]: time="2026-03-06T02:39:12.381788881Z" level=info msg="CreateContainer within sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\"" Mar 6 02:39:12.388491 containerd[1585]: time="2026-03-06T02:39:12.388452898Z" level=info msg="StartContainer for \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\"" Mar 6 02:39:12.400508 containerd[1585]: time="2026-03-06T02:39:12.399748298Z" level=info msg="connecting to shim a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462" address="unix:///run/containerd/s/78faf9ba7f8c706d8ea5a94a2f499bae0ac815d0ebee46788c7e574f085f26b6" protocol=ttrpc version=3 Mar 6 02:39:12.476804 systemd[1]: Started cri-containerd-a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462.scope - libcontainer container a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462. Mar 6 02:39:12.620765 containerd[1585]: time="2026-03-06T02:39:12.620710199Z" level=info msg="StartContainer for \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\" returns successfully" Mar 6 02:39:12.672356 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 6 02:39:12.674477 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 6 02:39:12.676916 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 6 02:39:12.681494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 02:39:12.686854 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 6 02:39:12.689855 systemd[1]: cri-containerd-a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462.scope: Deactivated successfully. 
Mar 6 02:39:12.696439 containerd[1585]: time="2026-03-06T02:39:12.695876480Z" level=info msg="received container exit event container_id:\"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\" id:\"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\" pid:3346 exited_at:{seconds:1772764752 nanos:692267232}" Mar 6 02:39:12.793270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 02:39:13.257937 kubelet[2807]: E0306 02:39:13.257742 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:13.281930 containerd[1585]: time="2026-03-06T02:39:13.281565007Z" level=info msg="CreateContainer within sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 6 02:39:13.295808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462-rootfs.mount: Deactivated successfully. 
Mar 6 02:39:13.349264 containerd[1585]: time="2026-03-06T02:39:13.349108446Z" level=info msg="Container 5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:39:13.368913 containerd[1585]: time="2026-03-06T02:39:13.368770843Z" level=info msg="CreateContainer within sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\"" Mar 6 02:39:13.370736 containerd[1585]: time="2026-03-06T02:39:13.370588629Z" level=info msg="StartContainer for \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\"" Mar 6 02:39:13.374412 containerd[1585]: time="2026-03-06T02:39:13.374342737Z" level=info msg="connecting to shim 5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65" address="unix:///run/containerd/s/78faf9ba7f8c706d8ea5a94a2f499bae0ac815d0ebee46788c7e574f085f26b6" protocol=ttrpc version=3 Mar 6 02:39:13.456794 systemd[1]: Started cri-containerd-5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65.scope - libcontainer container 5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65. Mar 6 02:39:13.715741 containerd[1585]: time="2026-03-06T02:39:13.715370689Z" level=info msg="StartContainer for \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\" returns successfully" Mar 6 02:39:13.723541 systemd[1]: cri-containerd-5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65.scope: Deactivated successfully. 
Mar 6 02:39:13.737439 containerd[1585]: time="2026-03-06T02:39:13.737396923Z" level=info msg="received container exit event container_id:\"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\" id:\"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\" pid:3394 exited_at:{seconds:1772764753 nanos:736384059}" Mar 6 02:39:14.271148 kubelet[2807]: E0306 02:39:14.270381 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:14.285270 containerd[1585]: time="2026-03-06T02:39:14.284578585Z" level=info msg="CreateContainer within sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 6 02:39:14.296488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65-rootfs.mount: Deactivated successfully. 
Mar 6 02:39:14.325434 containerd[1585]: time="2026-03-06T02:39:14.324815724Z" level=info msg="Container e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:39:14.353405 containerd[1585]: time="2026-03-06T02:39:14.352941954Z" level=info msg="CreateContainer within sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\"" Mar 6 02:39:14.358261 containerd[1585]: time="2026-03-06T02:39:14.356745636Z" level=info msg="StartContainer for \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\"" Mar 6 02:39:14.358261 containerd[1585]: time="2026-03-06T02:39:14.357876849Z" level=info msg="connecting to shim e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22" address="unix:///run/containerd/s/78faf9ba7f8c706d8ea5a94a2f499bae0ac815d0ebee46788c7e574f085f26b6" protocol=ttrpc version=3 Mar 6 02:39:14.482354 systemd[1]: Started cri-containerd-e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22.scope - libcontainer container e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22. Mar 6 02:39:14.586818 systemd[1]: cri-containerd-e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22.scope: Deactivated successfully. 
Mar 6 02:39:14.595059 containerd[1585]: time="2026-03-06T02:39:14.594537825Z" level=info msg="received container exit event container_id:\"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\" id:\"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\" pid:3434 exited_at:{seconds:1772764754 nanos:592927677}" Mar 6 02:39:14.607307 containerd[1585]: time="2026-03-06T02:39:14.606863181Z" level=info msg="StartContainer for \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\" returns successfully" Mar 6 02:39:14.741251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22-rootfs.mount: Deactivated successfully. Mar 6 02:39:15.288868 kubelet[2807]: E0306 02:39:15.288703 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:15.302272 containerd[1585]: time="2026-03-06T02:39:15.301543382Z" level=info msg="CreateContainer within sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 6 02:39:15.416388 containerd[1585]: time="2026-03-06T02:39:15.416241149Z" level=info msg="Container 7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:39:15.417257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount582745327.mount: Deactivated successfully. 
Mar 6 02:39:15.450565 containerd[1585]: time="2026-03-06T02:39:15.450298958Z" level=info msg="CreateContainer within sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\"" Mar 6 02:39:15.453176 containerd[1585]: time="2026-03-06T02:39:15.452456399Z" level=info msg="StartContainer for \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\"" Mar 6 02:39:15.456227 containerd[1585]: time="2026-03-06T02:39:15.456200020Z" level=info msg="connecting to shim 7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7" address="unix:///run/containerd/s/78faf9ba7f8c706d8ea5a94a2f499bae0ac815d0ebee46788c7e574f085f26b6" protocol=ttrpc version=3 Mar 6 02:39:15.530751 systemd[1]: Started cri-containerd-7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7.scope - libcontainer container 7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7. Mar 6 02:39:15.697793 containerd[1585]: time="2026-03-06T02:39:15.697757242Z" level=info msg="StartContainer for \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\" returns successfully" Mar 6 02:39:16.102384 kubelet[2807]: I0306 02:39:16.102351 2807 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 6 02:39:16.266922 systemd[1]: Created slice kubepods-burstable-pod005945dc_df09_41ca_aee6_f245261ff362.slice - libcontainer container kubepods-burstable-pod005945dc_df09_41ca_aee6_f245261ff362.slice. Mar 6 02:39:16.286386 systemd[1]: Created slice kubepods-burstable-podff8922bf_148f_4d3c_9f1c_cbf5cb046cf4.slice - libcontainer container kubepods-burstable-podff8922bf_148f_4d3c_9f1c_cbf5cb046cf4.slice. 
Mar 6 02:39:16.315514 kubelet[2807]: E0306 02:39:16.314870 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:16.359552 kubelet[2807]: I0306 02:39:16.357725 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx4rn\" (UniqueName: \"kubernetes.io/projected/ff8922bf-148f-4d3c-9f1c-cbf5cb046cf4-kube-api-access-fx4rn\") pod \"coredns-674b8bbfcf-944bs\" (UID: \"ff8922bf-148f-4d3c-9f1c-cbf5cb046cf4\") " pod="kube-system/coredns-674b8bbfcf-944bs" Mar 6 02:39:16.359552 kubelet[2807]: I0306 02:39:16.357878 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5pxd\" (UniqueName: \"kubernetes.io/projected/005945dc-df09-41ca-aee6-f245261ff362-kube-api-access-c5pxd\") pod \"coredns-674b8bbfcf-tbmsm\" (UID: \"005945dc-df09-41ca-aee6-f245261ff362\") " pod="kube-system/coredns-674b8bbfcf-tbmsm" Mar 6 02:39:16.359552 kubelet[2807]: I0306 02:39:16.357912 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff8922bf-148f-4d3c-9f1c-cbf5cb046cf4-config-volume\") pod \"coredns-674b8bbfcf-944bs\" (UID: \"ff8922bf-148f-4d3c-9f1c-cbf5cb046cf4\") " pod="kube-system/coredns-674b8bbfcf-944bs" Mar 6 02:39:16.362281 kubelet[2807]: I0306 02:39:16.357938 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/005945dc-df09-41ca-aee6-f245261ff362-config-volume\") pod \"coredns-674b8bbfcf-tbmsm\" (UID: \"005945dc-df09-41ca-aee6-f245261ff362\") " pod="kube-system/coredns-674b8bbfcf-tbmsm" Mar 6 02:39:16.366336 kubelet[2807]: I0306 02:39:16.365882 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wg99k" podStartSLOduration=8.162051845 podStartE2EDuration="47.365862312s" podCreationTimestamp="2026-03-06 02:38:29 +0000 UTC" firstStartedPulling="2026-03-06 02:38:31.835734812 +0000 UTC m=+7.814245809" lastFinishedPulling="2026-03-06 02:39:11.039545278 +0000 UTC m=+47.018056276" observedRunningTime="2026-03-06 02:39:16.365403669 +0000 UTC m=+52.343914666" watchObservedRunningTime="2026-03-06 02:39:16.365862312 +0000 UTC m=+52.344373309"
Mar 6 02:39:16.581527 kubelet[2807]: E0306 02:39:16.581437 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:16.591225 containerd[1585]: time="2026-03-06T02:39:16.590485571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tbmsm,Uid:005945dc-df09-41ca-aee6-f245261ff362,Namespace:kube-system,Attempt:0,}" Mar 6 02:39:16.603418 kubelet[2807]: E0306 02:39:16.603250 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:16.607251 containerd[1585]: time="2026-03-06T02:39:16.606583083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-944bs,Uid:ff8922bf-148f-4d3c-9f1c-cbf5cb046cf4,Namespace:kube-system,Attempt:0,}" Mar 6 02:39:17.352735 kubelet[2807]: E0306 02:39:17.352328 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:18.357876 kubelet[2807]: E0306 02:39:18.357204 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:19.331135 systemd-networkd[1447]: cilium_host: Link UP Mar 6 02:39:19.332782 systemd-networkd[1447]: cilium_net: Link UP Mar 6 02:39:19.333394 systemd-networkd[1447]: cilium_net: Gained carrier Mar 6 02:39:19.334163 systemd-networkd[1447]: cilium_host: Gained carrier Mar 6 02:39:19.816740 systemd-networkd[1447]: cilium_vxlan: Link UP Mar 6 02:39:19.816751 systemd-networkd[1447]: cilium_vxlan: Gained carrier Mar 6 02:39:20.138819 systemd-networkd[1447]: cilium_host: Gained IPv6LL Mar 6 02:39:20.201502 systemd-networkd[1447]: cilium_net: Gained IPv6LL Mar 6 02:39:20.394389 kernel: NET: Registered PF_ALG protocol family Mar 6 02:39:21.162269 systemd-networkd[1447]: cilium_vxlan: Gained IPv6LL Mar 6 02:39:22.521163 systemd[1]: Started sshd@9-10.0.0.110:22-10.0.0.1:47836.service - OpenSSH per-connection server daemon (10.0.0.1:47836). Mar 6 02:39:22.648267 sshd[3907]: Accepted publickey for core from 10.0.0.1 port 47836 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:39:22.651920 sshd-session[3907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:39:22.670778 systemd-logind[1568]: New session 10 of user core. Mar 6 02:39:22.686173 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 6 02:39:22.967903 systemd-networkd[1447]: lxc_health: Link UP Mar 6 02:39:22.973373 systemd-networkd[1447]: lxc_health: Gained carrier Mar 6 02:39:23.384260 sshd[3930]: Connection closed by 10.0.0.1 port 47836 Mar 6 02:39:23.383453 sshd-session[3907]: pam_unix(sshd:session): session closed for user core Mar 6 02:39:23.388798 systemd-networkd[1447]: lxc7678f347e9e1: Link UP Mar 6 02:39:23.407485 kernel: eth0: renamed from tmp10bbe Mar 6 02:39:23.439523 systemd[1]: sshd@9-10.0.0.110:22-10.0.0.1:47836.service: Deactivated successfully. Mar 6 02:39:23.447482 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 02:39:23.449940 systemd-networkd[1447]: lxc7678f347e9e1: Gained carrier Mar 6 02:39:23.461417 systemd-logind[1568]: Session 10 logged out. Waiting for processes to exit.
Mar 6 02:39:23.469840 systemd-logind[1568]: Removed session 10. Mar 6 02:39:23.846212 systemd-networkd[1447]: lxc48140fcbecc8: Link UP Mar 6 02:39:23.863181 kernel: eth0: renamed from tmp14425 Mar 6 02:39:23.881680 systemd-networkd[1447]: lxc48140fcbecc8: Gained carrier Mar 6 02:39:24.567473 kubelet[2807]: E0306 02:39:24.567303 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:24.873663 systemd-networkd[1447]: lxc_health: Gained IPv6LL Mar 6 02:39:24.876366 systemd-networkd[1447]: lxc7678f347e9e1: Gained IPv6LL Mar 6 02:39:25.324357 systemd-networkd[1447]: lxc48140fcbecc8: Gained IPv6LL Mar 6 02:39:25.418922 kubelet[2807]: E0306 02:39:25.418824 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:26.428839 kubelet[2807]: E0306 02:39:26.428403 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:28.414663 systemd[1]: Started sshd@10-10.0.0.110:22-10.0.0.1:47842.service - OpenSSH per-connection server daemon (10.0.0.1:47842). Mar 6 02:39:28.576334 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 47842 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:39:28.579284 sshd-session[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:39:28.594089 systemd-logind[1568]: New session 11 of user core. Mar 6 02:39:28.606766 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 6 02:39:28.990280 sshd[4006]: Connection closed by 10.0.0.1 port 47842 Mar 6 02:39:28.990374 sshd-session[4003]: pam_unix(sshd:session): session closed for user core Mar 6 02:39:28.996907 systemd[1]: sshd@10-10.0.0.110:22-10.0.0.1:47842.service: Deactivated successfully. Mar 6 02:39:29.005261 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 02:39:29.008279 systemd-logind[1568]: Session 11 logged out. Waiting for processes to exit. Mar 6 02:39:29.015724 systemd-logind[1568]: Removed session 11. Mar 6 02:39:30.599402 containerd[1585]: time="2026-03-06T02:39:30.598920092Z" level=info msg="connecting to shim 10bbe018f4e48ccd99aaa6127f7a380657e04f40b2ddf2ddd474e09458e7f9d5" address="unix:///run/containerd/s/4d87ac7cb7b367064f14ea5fd4c12c04c02e341bdb2e654ced03cb8a68192881" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:39:30.600777 containerd[1585]: time="2026-03-06T02:39:30.600380843Z" level=info msg="connecting to shim 14425121fbdd825b43ee73257432334f446957720d6b9e5069c5faf51c836bf4" address="unix:///run/containerd/s/2168d49147b859e53d252979de66c2779848e65accd62278aa14d7d219f900f1" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:39:30.704789 systemd[1]: Started cri-containerd-14425121fbdd825b43ee73257432334f446957720d6b9e5069c5faf51c836bf4.scope - libcontainer container 14425121fbdd825b43ee73257432334f446957720d6b9e5069c5faf51c836bf4. Mar 6 02:39:30.733564 systemd[1]: Started cri-containerd-10bbe018f4e48ccd99aaa6127f7a380657e04f40b2ddf2ddd474e09458e7f9d5.scope - libcontainer container 10bbe018f4e48ccd99aaa6127f7a380657e04f40b2ddf2ddd474e09458e7f9d5. 
Mar 6 02:39:30.795802 systemd-resolved[1448]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:39:30.820312 systemd-resolved[1448]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 02:39:30.982843 containerd[1585]: time="2026-03-06T02:39:30.982632077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tbmsm,Uid:005945dc-df09-41ca-aee6-f245261ff362,Namespace:kube-system,Attempt:0,} returns sandbox id \"10bbe018f4e48ccd99aaa6127f7a380657e04f40b2ddf2ddd474e09458e7f9d5\"" Mar 6 02:39:30.986213 containerd[1585]: time="2026-03-06T02:39:30.986186174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-944bs,Uid:ff8922bf-148f-4d3c-9f1c-cbf5cb046cf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"14425121fbdd825b43ee73257432334f446957720d6b9e5069c5faf51c836bf4\"" Mar 6 02:39:30.987698 kubelet[2807]: E0306 02:39:30.987665 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:30.989756 kubelet[2807]: E0306 02:39:30.989736 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:31.004257 containerd[1585]: time="2026-03-06T02:39:31.003805398Z" level=info msg="CreateContainer within sandbox \"10bbe018f4e48ccd99aaa6127f7a380657e04f40b2ddf2ddd474e09458e7f9d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 02:39:31.014881 containerd[1585]: time="2026-03-06T02:39:31.013858852Z" level=info msg="CreateContainer within sandbox \"14425121fbdd825b43ee73257432334f446957720d6b9e5069c5faf51c836bf4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 6 02:39:31.087183 containerd[1585]: time="2026-03-06T02:39:31.086594391Z" level=info msg="Container eb7e816bfcded70d0bd014ff8346d98f854fddb05a09a56fb5fc37a6cd6fc197: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:39:31.097310 containerd[1585]: time="2026-03-06T02:39:31.096652714Z" level=info msg="Container 737cdb7b3ca0a94b3ad8b737960f9310714e96a414db980e499977b0387c41f1: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:39:31.124835 containerd[1585]: time="2026-03-06T02:39:31.124649326Z" level=info msg="CreateContainer within sandbox \"10bbe018f4e48ccd99aaa6127f7a380657e04f40b2ddf2ddd474e09458e7f9d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb7e816bfcded70d0bd014ff8346d98f854fddb05a09a56fb5fc37a6cd6fc197\"" Mar 6 02:39:31.134713 containerd[1585]: time="2026-03-06T02:39:31.134254000Z" level=info msg="StartContainer for \"eb7e816bfcded70d0bd014ff8346d98f854fddb05a09a56fb5fc37a6cd6fc197\"" Mar 6 02:39:31.136186 containerd[1585]: time="2026-03-06T02:39:31.135917246Z" level=info msg="connecting to shim eb7e816bfcded70d0bd014ff8346d98f854fddb05a09a56fb5fc37a6cd6fc197" address="unix:///run/containerd/s/4d87ac7cb7b367064f14ea5fd4c12c04c02e341bdb2e654ced03cb8a68192881" protocol=ttrpc version=3 Mar 6 02:39:31.139843 containerd[1585]: time="2026-03-06T02:39:31.139217850Z" level=info msg="CreateContainer within sandbox \"14425121fbdd825b43ee73257432334f446957720d6b9e5069c5faf51c836bf4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"737cdb7b3ca0a94b3ad8b737960f9310714e96a414db980e499977b0387c41f1\"" Mar 6 02:39:31.151196 containerd[1585]: time="2026-03-06T02:39:31.150405191Z" level=info msg="StartContainer for \"737cdb7b3ca0a94b3ad8b737960f9310714e96a414db980e499977b0387c41f1\"" Mar 6 02:39:31.165091 containerd[1585]: time="2026-03-06T02:39:31.164733332Z" level=info msg="connecting to shim 737cdb7b3ca0a94b3ad8b737960f9310714e96a414db980e499977b0387c41f1" address="unix:///run/containerd/s/2168d49147b859e53d252979de66c2779848e65accd62278aa14d7d219f900f1" protocol=ttrpc version=3
Mar 6 02:39:31.231232 systemd[1]: Started cri-containerd-eb7e816bfcded70d0bd014ff8346d98f854fddb05a09a56fb5fc37a6cd6fc197.scope - libcontainer container eb7e816bfcded70d0bd014ff8346d98f854fddb05a09a56fb5fc37a6cd6fc197. Mar 6 02:39:31.250585 systemd[1]: Started cri-containerd-737cdb7b3ca0a94b3ad8b737960f9310714e96a414db980e499977b0387c41f1.scope - libcontainer container 737cdb7b3ca0a94b3ad8b737960f9310714e96a414db980e499977b0387c41f1. Mar 6 02:39:31.445285 containerd[1585]: time="2026-03-06T02:39:31.440816013Z" level=info msg="StartContainer for \"eb7e816bfcded70d0bd014ff8346d98f854fddb05a09a56fb5fc37a6cd6fc197\" returns successfully" Mar 6 02:39:31.485592 containerd[1585]: time="2026-03-06T02:39:31.484827408Z" level=info msg="StartContainer for \"737cdb7b3ca0a94b3ad8b737960f9310714e96a414db980e499977b0387c41f1\" returns successfully" Mar 6 02:39:31.510763 kubelet[2807]: E0306 02:39:31.504775 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:31.516914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167777264.mount: Deactivated successfully.
Mar 6 02:39:32.525225 kubelet[2807]: E0306 02:39:32.524819 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:32.527117 kubelet[2807]: E0306 02:39:32.526324 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:32.601162 kubelet[2807]: I0306 02:39:32.600847 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-944bs" podStartSLOduration=63.600819839 podStartE2EDuration="1m3.600819839s" podCreationTimestamp="2026-03-06 02:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:39:32.595363736 +0000 UTC m=+68.573874774" watchObservedRunningTime="2026-03-06 02:39:32.600819839 +0000 UTC m=+68.579330866" Mar 6 02:39:32.604156 kubelet[2807]: I0306 02:39:32.602564 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tbmsm" podStartSLOduration=63.602437011 podStartE2EDuration="1m3.602437011s" podCreationTimestamp="2026-03-06 02:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:39:31.574173094 +0000 UTC m=+67.552684101" watchObservedRunningTime="2026-03-06 02:39:32.602437011 +0000 UTC m=+68.580948008" Mar 6 02:39:33.528351 kubelet[2807]: E0306 02:39:33.527756 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 02:39:33.529180 kubelet[2807]: E0306 02:39:33.528576 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:34.017805 systemd[1]: Started sshd@11-10.0.0.110:22-10.0.0.1:37792.service - OpenSSH per-connection server daemon (10.0.0.1:37792). Mar 6 02:39:34.210382 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 37792 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:39:34.214366 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:39:34.248569 systemd-logind[1568]: New session 12 of user core. Mar 6 02:39:34.300839 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 02:39:34.539670 kubelet[2807]: E0306 02:39:34.538907 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:34.660164 sshd[4198]: Connection closed by 10.0.0.1 port 37792 Mar 6 02:39:34.661348 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Mar 6 02:39:34.671238 systemd[1]: sshd@11-10.0.0.110:22-10.0.0.1:37792.service: Deactivated successfully. Mar 6 02:39:34.675415 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 02:39:34.680151 systemd-logind[1568]: Session 12 logged out. Waiting for processes to exit. Mar 6 02:39:34.685333 systemd-logind[1568]: Removed session 12. Mar 6 02:39:39.685328 systemd[1]: Started sshd@12-10.0.0.110:22-10.0.0.1:37802.service - OpenSSH per-connection server daemon (10.0.0.1:37802). Mar 6 02:39:39.802892 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 37802 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:39:39.805924 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:39:39.822339 systemd-logind[1568]: New session 13 of user core. Mar 6 02:39:39.834582 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 6 02:39:40.179703 sshd[4220]: Connection closed by 10.0.0.1 port 37802 Mar 6 02:39:40.180614 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Mar 6 02:39:40.198603 systemd[1]: sshd@12-10.0.0.110:22-10.0.0.1:37802.service: Deactivated successfully. Mar 6 02:39:40.204665 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 02:39:40.209786 systemd-logind[1568]: Session 13 logged out. Waiting for processes to exit. Mar 6 02:39:40.215170 systemd[1]: Started sshd@13-10.0.0.110:22-10.0.0.1:38532.service - OpenSSH per-connection server daemon (10.0.0.1:38532). Mar 6 02:39:40.220844 systemd-logind[1568]: Removed session 13. Mar 6 02:39:40.336271 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 38532 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:39:40.339798 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:39:40.361205 systemd-logind[1568]: New session 14 of user core. Mar 6 02:39:40.380679 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 6 02:39:40.803942 sshd[4238]: Connection closed by 10.0.0.1 port 38532 Mar 6 02:39:40.804593 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Mar 6 02:39:40.838771 systemd[1]: sshd@13-10.0.0.110:22-10.0.0.1:38532.service: Deactivated successfully. Mar 6 02:39:40.849288 systemd[1]: session-14.scope: Deactivated successfully. Mar 6 02:39:40.855240 systemd-logind[1568]: Session 14 logged out. Waiting for processes to exit. Mar 6 02:39:40.866665 systemd[1]: Started sshd@14-10.0.0.110:22-10.0.0.1:38536.service - OpenSSH per-connection server daemon (10.0.0.1:38536). Mar 6 02:39:40.869918 systemd-logind[1568]: Removed session 14. 
Mar 6 02:39:41.038326 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 38536 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:39:41.042659 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:39:41.059364 systemd-logind[1568]: New session 15 of user core. Mar 6 02:39:41.075805 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 6 02:39:41.406862 sshd[4253]: Connection closed by 10.0.0.1 port 38536 Mar 6 02:39:41.407881 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Mar 6 02:39:41.423276 systemd[1]: sshd@14-10.0.0.110:22-10.0.0.1:38536.service: Deactivated successfully. Mar 6 02:39:41.434631 systemd[1]: session-15.scope: Deactivated successfully. Mar 6 02:39:41.440258 systemd-logind[1568]: Session 15 logged out. Waiting for processes to exit. Mar 6 02:39:41.446830 systemd-logind[1568]: Removed session 15. Mar 6 02:39:45.354249 kubelet[2807]: E0306 02:39:45.353902 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:47.841425 systemd[1]: Started sshd@15-10.0.0.110:22-10.0.0.1:38542.service - OpenSSH per-connection server daemon (10.0.0.1:38542). Mar 6 02:39:48.001375 kubelet[2807]: E0306 02:39:47.986367 2807 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.447s" Mar 6 02:39:48.264245 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 38542 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:39:48.564914 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:39:48.684583 systemd-logind[1568]: New session 16 of user core. Mar 6 02:39:48.790748 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 6 02:39:50.062617 kubelet[2807]: E0306 02:39:50.060928 2807 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.64s" Mar 6 02:39:53.593560 sshd[4271]: Connection closed by 10.0.0.1 port 38542 Mar 6 02:39:53.876898 kubelet[2807]: E0306 02:39:53.590810 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:39:53.594308 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Mar 6 02:39:54.114844 systemd[1]: sshd@15-10.0.0.110:22-10.0.0.1:38542.service: Deactivated successfully. Mar 6 02:39:56.423251 systemd[1]: session-16.scope: Deactivated successfully. Mar 6 02:39:56.575792 systemd[1]: session-16.scope: Consumed 2.184s CPU time, 18.2M memory peak. Mar 6 02:39:56.703707 systemd-logind[1568]: Session 16 logged out. Waiting for processes to exit. Mar 6 02:39:56.712857 systemd-logind[1568]: Removed session 16. Mar 6 02:39:59.353853 systemd[1]: Started sshd@16-10.0.0.110:22-10.0.0.1:46780.service - OpenSSH per-connection server daemon (10.0.0.1:46780). Mar 6 02:40:23.341435 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 46780 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:40:23.375798 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:40:23.416578 systemd[1]: cri-containerd-8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5.scope: Deactivated successfully. Mar 6 02:40:23.453684 systemd[1]: cri-containerd-8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5.scope: Consumed 16.045s CPU time, 64M memory peak, 9.1M read from disk. Mar 6 02:40:23.581867 systemd-logind[1568]: New session 17 of user core. Mar 6 02:40:23.598299 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 6 02:40:23.890696 systemd[1]: cri-containerd-c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6.scope: Deactivated successfully. Mar 6 02:40:23.896307 systemd[1]: cri-containerd-c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6.scope: Consumed 7.487s CPU time, 20.6M memory peak, 348K read from disk. Mar 6 02:40:24.321425 kubelet[2807]: E0306 02:40:24.320715 2807 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="33.249s" Mar 6 02:40:24.372370 containerd[1585]: time="2026-03-06T02:40:24.370146211Z" level=info msg="received container exit event container_id:\"c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6\" id:\"c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6\" pid:2663 exit_status:1 exited_at:{seconds:1772764824 nanos:42639031}" Mar 6 02:40:24.392811 containerd[1585]: time="2026-03-06T02:40:24.371769883Z" level=info msg="received container exit event container_id:\"8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5\" id:\"8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5\" pid:2656 exit_status:1 exited_at:{seconds:1772764824 nanos:186593060}" Mar 6 02:40:24.515717 kubelet[2807]: E0306 02:40:24.515676 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:24.519724 kubelet[2807]: E0306 02:40:24.519694 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:24.522203 kubelet[2807]: E0306 02:40:24.522178 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:24.548811 sshd[4294]: Connection closed by 10.0.0.1 port 46780 
Mar 6 02:40:24.548698 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Mar 6 02:40:24.575658 systemd[1]: sshd@16-10.0.0.110:22-10.0.0.1:46780.service: Deactivated successfully. Mar 6 02:40:24.580938 systemd[1]: sshd@16-10.0.0.110:22-10.0.0.1:46780.service: Consumed 3.687s CPU time, 3.4M memory peak. Mar 6 02:40:24.598765 systemd[1]: session-17.scope: Deactivated successfully. Mar 6 02:40:24.608236 systemd-logind[1568]: Session 17 logged out. Waiting for processes to exit. Mar 6 02:40:24.616752 kubelet[2807]: E0306 02:40:24.614653 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:24.619653 systemd-logind[1568]: Removed session 17. Mar 6 02:40:25.000569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5-rootfs.mount: Deactivated successfully. Mar 6 02:40:25.010275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6-rootfs.mount: Deactivated successfully. 
Mar 6 02:40:25.600781 kubelet[2807]: I0306 02:40:25.599617 2807 scope.go:117] "RemoveContainer" containerID="c981e34b151fd380a512b064cc48d2b7c245a97f37addf6fb3d7772093778ad6" Mar 6 02:40:25.602704 kubelet[2807]: E0306 02:40:25.602208 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:25.666180 kubelet[2807]: I0306 02:40:25.665623 2807 scope.go:117] "RemoveContainer" containerID="8a1d370a2c8f101591ff6da08c8403bd9dea23d0d76879d57b447ec0806754c5" Mar 6 02:40:25.666180 kubelet[2807]: E0306 02:40:25.665721 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:25.718345 containerd[1585]: time="2026-03-06T02:40:25.706817878Z" level=info msg="CreateContainer within sandbox \"c09eb847970601bbbc806c400d304be8e3a32982816d7283052c358b3517b318\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 6 02:40:25.721265 containerd[1585]: time="2026-03-06T02:40:25.709163092Z" level=info msg="CreateContainer within sandbox \"052e23e4f7bee0cf495add3887be804af91f89a5a3d162361762654177154f24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 6 02:40:25.834839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748301697.mount: Deactivated successfully. Mar 6 02:40:25.843597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126874849.mount: Deactivated successfully. 
Mar 6 02:40:25.854882 containerd[1585]: time="2026-03-06T02:40:25.853368120Z" level=info msg="Container 3eb10be721eec672b145eab269eee689267c891fc037e2c32d1a2d3e968a3ed4: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:40:25.854882 containerd[1585]: time="2026-03-06T02:40:25.854262213Z" level=info msg="Container fa443f44c7066f0d097d3195c7ad28606a595a0e15720374dde700a4882430c9: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:40:25.896268 containerd[1585]: time="2026-03-06T02:40:25.895769442Z" level=info msg="CreateContainer within sandbox \"c09eb847970601bbbc806c400d304be8e3a32982816d7283052c358b3517b318\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"fa443f44c7066f0d097d3195c7ad28606a595a0e15720374dde700a4882430c9\"" Mar 6 02:40:25.900865 containerd[1585]: time="2026-03-06T02:40:25.900818069Z" level=info msg="StartContainer for \"fa443f44c7066f0d097d3195c7ad28606a595a0e15720374dde700a4882430c9\"" Mar 6 02:40:25.920590 containerd[1585]: time="2026-03-06T02:40:25.920197320Z" level=info msg="CreateContainer within sandbox \"052e23e4f7bee0cf495add3887be804af91f89a5a3d162361762654177154f24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3eb10be721eec672b145eab269eee689267c891fc037e2c32d1a2d3e968a3ed4\"" Mar 6 02:40:25.927602 containerd[1585]: time="2026-03-06T02:40:25.923427351Z" level=info msg="StartContainer for \"3eb10be721eec672b145eab269eee689267c891fc037e2c32d1a2d3e968a3ed4\"" Mar 6 02:40:25.927602 containerd[1585]: time="2026-03-06T02:40:25.924237127Z" level=info msg="connecting to shim fa443f44c7066f0d097d3195c7ad28606a595a0e15720374dde700a4882430c9" address="unix:///run/containerd/s/8f274b6cd1ada61386384a79c2f8b36b0572fb273619f7b49966bc3a29cb8cf7" protocol=ttrpc version=3 Mar 6 02:40:25.936595 containerd[1585]: time="2026-03-06T02:40:25.936421427Z" level=info msg="connecting to shim 3eb10be721eec672b145eab269eee689267c891fc037e2c32d1a2d3e968a3ed4" 
address="unix:///run/containerd/s/efae7b547f516690dbc7595ab8d6c474c7ec85ac6a7e326de6b5a4eb0728f9f3" protocol=ttrpc version=3 Mar 6 02:40:26.026786 systemd[1]: Started cri-containerd-3eb10be721eec672b145eab269eee689267c891fc037e2c32d1a2d3e968a3ed4.scope - libcontainer container 3eb10be721eec672b145eab269eee689267c891fc037e2c32d1a2d3e968a3ed4. Mar 6 02:40:26.058893 systemd[1]: Started cri-containerd-fa443f44c7066f0d097d3195c7ad28606a595a0e15720374dde700a4882430c9.scope - libcontainer container fa443f44c7066f0d097d3195c7ad28606a595a0e15720374dde700a4882430c9. Mar 6 02:40:26.308823 containerd[1585]: time="2026-03-06T02:40:26.308780144Z" level=info msg="StartContainer for \"fa443f44c7066f0d097d3195c7ad28606a595a0e15720374dde700a4882430c9\" returns successfully" Mar 6 02:40:26.374213 containerd[1585]: time="2026-03-06T02:40:26.373199195Z" level=info msg="StartContainer for \"3eb10be721eec672b145eab269eee689267c891fc037e2c32d1a2d3e968a3ed4\" returns successfully" Mar 6 02:40:26.713215 kubelet[2807]: E0306 02:40:26.710265 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:26.741746 kubelet[2807]: E0306 02:40:26.740308 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:27.733352 kubelet[2807]: E0306 02:40:27.733209 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:28.749648 kubelet[2807]: E0306 02:40:28.749253 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:29.581650 systemd[1]: Started 
sshd@17-10.0.0.110:22-10.0.0.1:53120.service - OpenSSH per-connection server daemon (10.0.0.1:53120). Mar 6 02:40:29.752796 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 53120 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:40:29.755433 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:40:29.774624 systemd-logind[1568]: New session 18 of user core. Mar 6 02:40:29.783394 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 6 02:40:30.098304 kubelet[2807]: E0306 02:40:30.097419 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:30.171791 sshd[4409]: Connection closed by 10.0.0.1 port 53120 Mar 6 02:40:30.169180 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Mar 6 02:40:30.180760 systemd[1]: sshd@17-10.0.0.110:22-10.0.0.1:53120.service: Deactivated successfully. Mar 6 02:40:30.187228 systemd[1]: session-18.scope: Deactivated successfully. Mar 6 02:40:30.192337 systemd-logind[1568]: Session 18 logged out. Waiting for processes to exit. Mar 6 02:40:30.196918 systemd-logind[1568]: Removed session 18. Mar 6 02:40:34.359434 kubelet[2807]: E0306 02:40:34.358903 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:35.191233 systemd[1]: Started sshd@18-10.0.0.110:22-10.0.0.1:33402.service - OpenSSH per-connection server daemon (10.0.0.1:33402). Mar 6 02:40:35.322731 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 33402 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:40:35.324864 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:40:35.342658 systemd-logind[1568]: New session 19 of user core. 
Mar 6 02:40:35.356428 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 6 02:40:35.528255 kubelet[2807]: E0306 02:40:35.526589 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:35.706718 sshd[4428]: Connection closed by 10.0.0.1 port 33402 Mar 6 02:40:35.707758 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Mar 6 02:40:35.716722 systemd[1]: sshd@18-10.0.0.110:22-10.0.0.1:33402.service: Deactivated successfully. Mar 6 02:40:35.721441 systemd[1]: session-19.scope: Deactivated successfully. Mar 6 02:40:35.727724 systemd-logind[1568]: Session 19 logged out. Waiting for processes to exit. Mar 6 02:40:35.731912 systemd-logind[1568]: Removed session 19. Mar 6 02:40:37.355643 kubelet[2807]: E0306 02:40:37.355411 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:40.959227 systemd[1]: Started sshd@19-10.0.0.110:22-10.0.0.1:45868.service - OpenSSH per-connection server daemon (10.0.0.1:45868). 
Mar 6 02:40:40.964833 kubelet[2807]: E0306 02:40:40.961584 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:42.063848 kubelet[2807]: E0306 02:40:42.063802 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:42.229929 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 45868 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:40:42.236595 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:40:42.270898 systemd-logind[1568]: New session 20 of user core. Mar 6 02:40:42.290333 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 6 02:40:42.799443 sshd[4445]: Connection closed by 10.0.0.1 port 45868 Mar 6 02:40:42.800620 sshd-session[4441]: pam_unix(sshd:session): session closed for user core Mar 6 02:40:42.811749 systemd[1]: sshd@19-10.0.0.110:22-10.0.0.1:45868.service: Deactivated successfully. Mar 6 02:40:42.825432 systemd[1]: session-20.scope: Deactivated successfully. Mar 6 02:40:42.835824 systemd-logind[1568]: Session 20 logged out. Waiting for processes to exit. Mar 6 02:40:42.844267 systemd-logind[1568]: Removed session 20. 
Mar 6 02:40:44.764192 kubelet[2807]: E0306 02:40:44.763094 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:45.585311 kubelet[2807]: E0306 02:40:45.581374 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:47.901791 systemd[1]: Started sshd@20-10.0.0.110:22-10.0.0.1:45872.service - OpenSSH per-connection server daemon (10.0.0.1:45872). Mar 6 02:40:48.111903 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 45872 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:40:48.117398 sshd-session[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:40:48.153294 systemd-logind[1568]: New session 21 of user core. Mar 6 02:40:48.164615 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 6 02:40:48.880940 sshd[4461]: Connection closed by 10.0.0.1 port 45872 Mar 6 02:40:48.883904 sshd-session[4458]: pam_unix(sshd:session): session closed for user core Mar 6 02:40:48.904758 systemd[1]: sshd@20-10.0.0.110:22-10.0.0.1:45872.service: Deactivated successfully. Mar 6 02:40:48.910638 systemd[1]: session-21.scope: Deactivated successfully. Mar 6 02:40:48.920259 systemd-logind[1568]: Session 21 logged out. Waiting for processes to exit. Mar 6 02:40:48.944587 systemd[1]: Started sshd@21-10.0.0.110:22-10.0.0.1:45878.service - OpenSSH per-connection server daemon (10.0.0.1:45878). Mar 6 02:40:48.951439 systemd-logind[1568]: Removed session 21. 
Mar 6 02:40:49.114767 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 45878 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:40:49.118827 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:40:49.137174 systemd-logind[1568]: New session 22 of user core. Mar 6 02:40:49.161789 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 6 02:40:50.466812 sshd[4477]: Connection closed by 10.0.0.1 port 45878 Mar 6 02:40:50.468402 sshd-session[4474]: pam_unix(sshd:session): session closed for user core Mar 6 02:40:50.482282 systemd[1]: sshd@21-10.0.0.110:22-10.0.0.1:45878.service: Deactivated successfully. Mar 6 02:40:50.488701 systemd[1]: session-22.scope: Deactivated successfully. Mar 6 02:40:50.491619 systemd[1]: session-22.scope: Consumed 1.246s CPU time, 49.6M memory peak. Mar 6 02:40:50.494719 systemd-logind[1568]: Session 22 logged out. Waiting for processes to exit. Mar 6 02:40:50.502691 systemd[1]: Started sshd@22-10.0.0.110:22-10.0.0.1:45016.service - OpenSSH per-connection server daemon (10.0.0.1:45016). Mar 6 02:40:50.507243 systemd-logind[1568]: Removed session 22. Mar 6 02:40:50.956715 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 45016 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:40:50.978209 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:40:50.999319 systemd-logind[1568]: New session 23 of user core. Mar 6 02:40:51.014702 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 6 02:40:52.973156 sshd[4492]: Connection closed by 10.0.0.1 port 45016 Mar 6 02:40:52.973698 sshd-session[4489]: pam_unix(sshd:session): session closed for user core Mar 6 02:40:53.003160 systemd[1]: sshd@22-10.0.0.110:22-10.0.0.1:45016.service: Deactivated successfully. Mar 6 02:40:53.008374 systemd[1]: session-23.scope: Deactivated successfully. 
Mar 6 02:40:53.008851 systemd[1]: session-23.scope: Consumed 1.585s CPU time, 41.8M memory peak. Mar 6 02:40:53.011664 systemd-logind[1568]: Session 23 logged out. Waiting for processes to exit. Mar 6 02:40:53.024381 systemd[1]: Started sshd@23-10.0.0.110:22-10.0.0.1:45024.service - OpenSSH per-connection server daemon (10.0.0.1:45024). Mar 6 02:40:53.033269 systemd-logind[1568]: Removed session 23. Mar 6 02:40:53.155736 sshd[4512]: Accepted publickey for core from 10.0.0.1 port 45024 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:40:53.160246 sshd-session[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:40:53.177396 systemd-logind[1568]: New session 24 of user core. Mar 6 02:40:53.190681 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 6 02:40:53.946163 sshd[4518]: Connection closed by 10.0.0.1 port 45024 Mar 6 02:40:53.947651 sshd-session[4512]: pam_unix(sshd:session): session closed for user core Mar 6 02:40:53.961857 systemd[1]: sshd@23-10.0.0.110:22-10.0.0.1:45024.service: Deactivated successfully. Mar 6 02:40:53.969801 systemd[1]: session-24.scope: Deactivated successfully. Mar 6 02:40:53.974182 systemd-logind[1568]: Session 24 logged out. Waiting for processes to exit. Mar 6 02:40:53.986680 systemd[1]: Started sshd@24-10.0.0.110:22-10.0.0.1:45034.service - OpenSSH per-connection server daemon (10.0.0.1:45034). Mar 6 02:40:53.992431 systemd-logind[1568]: Removed session 24. Mar 6 02:40:54.096625 sshd[4529]: Accepted publickey for core from 10.0.0.1 port 45034 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:40:54.098717 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:40:54.114593 systemd-logind[1568]: New session 25 of user core. Mar 6 02:40:54.128613 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 6 02:40:54.428856 sshd[4532]: Connection closed by 10.0.0.1 port 45034 Mar 6 02:40:54.432436 sshd-session[4529]: pam_unix(sshd:session): session closed for user core Mar 6 02:40:54.442377 systemd[1]: sshd@24-10.0.0.110:22-10.0.0.1:45034.service: Deactivated successfully. Mar 6 02:40:54.451342 systemd[1]: session-25.scope: Deactivated successfully. Mar 6 02:40:54.460671 systemd-logind[1568]: Session 25 logged out. Waiting for processes to exit. Mar 6 02:40:54.468339 systemd-logind[1568]: Removed session 25. Mar 6 02:40:59.353903 kubelet[2807]: E0306 02:40:59.353639 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:40:59.466400 systemd[1]: Started sshd@25-10.0.0.110:22-10.0.0.1:45042.service - OpenSSH per-connection server daemon (10.0.0.1:45042). Mar 6 02:40:59.602072 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 45042 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:40:59.606830 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:40:59.623316 systemd-logind[1568]: New session 26 of user core. Mar 6 02:40:59.635723 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 6 02:40:59.889193 sshd[4550]: Connection closed by 10.0.0.1 port 45042 Mar 6 02:40:59.890229 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Mar 6 02:40:59.902363 systemd[1]: sshd@25-10.0.0.110:22-10.0.0.1:45042.service: Deactivated successfully. Mar 6 02:40:59.909823 systemd[1]: session-26.scope: Deactivated successfully. Mar 6 02:40:59.917209 systemd-logind[1568]: Session 26 logged out. Waiting for processes to exit. Mar 6 02:40:59.924373 systemd-logind[1568]: Removed session 26. Mar 6 02:41:04.913938 systemd[1]: Started sshd@26-10.0.0.110:22-10.0.0.1:56992.service - OpenSSH per-connection server daemon (10.0.0.1:56992). 
Mar 6 02:41:05.053906 sshd[4565]: Accepted publickey for core from 10.0.0.1 port 56992 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:41:05.057917 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:41:05.074202 systemd-logind[1568]: New session 27 of user core. Mar 6 02:41:05.097670 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 6 02:41:05.314686 sshd[4568]: Connection closed by 10.0.0.1 port 56992 Mar 6 02:41:05.315363 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Mar 6 02:41:05.320759 systemd[1]: sshd@26-10.0.0.110:22-10.0.0.1:56992.service: Deactivated successfully. Mar 6 02:41:05.325151 systemd[1]: session-27.scope: Deactivated successfully. Mar 6 02:41:05.328853 systemd-logind[1568]: Session 27 logged out. Waiting for processes to exit. Mar 6 02:41:05.331797 systemd-logind[1568]: Removed session 27. Mar 6 02:41:10.332900 systemd[1]: Started sshd@27-10.0.0.110:22-10.0.0.1:39052.service - OpenSSH per-connection server daemon (10.0.0.1:39052). Mar 6 02:41:10.392389 sshd[4582]: Accepted publickey for core from 10.0.0.1 port 39052 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:41:10.394433 sshd-session[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:41:10.401120 systemd-logind[1568]: New session 28 of user core. Mar 6 02:41:10.411289 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 6 02:41:10.495150 sshd[4585]: Connection closed by 10.0.0.1 port 39052 Mar 6 02:41:10.495630 sshd-session[4582]: pam_unix(sshd:session): session closed for user core Mar 6 02:41:10.508196 systemd[1]: sshd@27-10.0.0.110:22-10.0.0.1:39052.service: Deactivated successfully. Mar 6 02:41:10.510593 systemd[1]: session-28.scope: Deactivated successfully. Mar 6 02:41:10.512233 systemd-logind[1568]: Session 28 logged out. Waiting for processes to exit. 
Mar 6 02:41:10.515127 systemd[1]: Started sshd@28-10.0.0.110:22-10.0.0.1:39054.service - OpenSSH per-connection server daemon (10.0.0.1:39054). Mar 6 02:41:10.517079 systemd-logind[1568]: Removed session 28. Mar 6 02:41:10.570480 sshd[4598]: Accepted publickey for core from 10.0.0.1 port 39054 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:41:10.572428 sshd-session[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:41:10.579700 systemd-logind[1568]: New session 29 of user core. Mar 6 02:41:10.594283 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 6 02:41:12.104079 containerd[1585]: time="2026-03-06T02:41:12.103541982Z" level=info msg="StopContainer for \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\" with timeout 30 (s)" Mar 6 02:41:12.120216 containerd[1585]: time="2026-03-06T02:41:12.120157793Z" level=info msg="Stop container \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\" with signal terminated" Mar 6 02:41:12.147794 systemd[1]: cri-containerd-a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c.scope: Deactivated successfully. Mar 6 02:41:12.148417 systemd[1]: cri-containerd-a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c.scope: Consumed 3.955s CPU time, 29.7M memory peak, 1.3M read from disk, 4K written to disk. 
Mar 6 02:41:12.151846 containerd[1585]: time="2026-03-06T02:41:12.151709957Z" level=info msg="received container exit event container_id:\"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\" id:\"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\" pid:3232 exited_at:{seconds:1772764872 nanos:150658940}" Mar 6 02:41:12.161593 containerd[1585]: time="2026-03-06T02:41:12.161467676Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 02:41:12.174705 containerd[1585]: time="2026-03-06T02:41:12.174636927Z" level=info msg="StopContainer for \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\" with timeout 2 (s)" Mar 6 02:41:12.175228 containerd[1585]: time="2026-03-06T02:41:12.175195950Z" level=info msg="Stop container \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\" with signal terminated" Mar 6 02:41:12.191531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c-rootfs.mount: Deactivated successfully. 
Mar 6 02:41:12.198613 systemd-networkd[1447]: lxc_health: Link DOWN Mar 6 02:41:12.199540 systemd-networkd[1447]: lxc_health: Lost carrier Mar 6 02:41:12.212362 containerd[1585]: time="2026-03-06T02:41:12.212261706Z" level=info msg="StopContainer for \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\" returns successfully" Mar 6 02:41:12.213710 containerd[1585]: time="2026-03-06T02:41:12.213673209Z" level=info msg="StopPodSandbox for \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\"" Mar 6 02:41:12.213769 containerd[1585]: time="2026-03-06T02:41:12.213731828Z" level=info msg="Container to stop \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 6 02:41:12.224740 systemd[1]: cri-containerd-47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115.scope: Deactivated successfully. Mar 6 02:41:12.230116 systemd[1]: cri-containerd-7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7.scope: Deactivated successfully. Mar 6 02:41:12.230641 systemd[1]: cri-containerd-7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7.scope: Consumed 21.213s CPU time, 130.3M memory peak, 220K read from disk, 13.3M written to disk. 
Mar 6 02:41:12.233904 containerd[1585]: time="2026-03-06T02:41:12.233849183Z" level=info msg="received container exit event container_id:\"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\" id:\"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\" pid:3470 exited_at:{seconds:1772764872 nanos:233473782}" Mar 6 02:41:12.236747 containerd[1585]: time="2026-03-06T02:41:12.236674173Z" level=info msg="received sandbox exit event container_id:\"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" id:\"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" exit_status:137 exited_at:{seconds:1772764872 nanos:236332984}" monitor_name=podsandbox Mar 6 02:41:12.272066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7-rootfs.mount: Deactivated successfully. Mar 6 02:41:12.283289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115-rootfs.mount: Deactivated successfully. 
Mar 6 02:41:12.285002 containerd[1585]: time="2026-03-06T02:41:12.284680757Z" level=info msg="shim disconnected" id=47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115 namespace=k8s.io Mar 6 02:41:12.285002 containerd[1585]: time="2026-03-06T02:41:12.284716855Z" level=warning msg="cleaning up after shim disconnected" id=47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115 namespace=k8s.io Mar 6 02:41:12.285002 containerd[1585]: time="2026-03-06T02:41:12.284760035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 02:41:12.290117 containerd[1585]: time="2026-03-06T02:41:12.290093024Z" level=info msg="StopContainer for \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\" returns successfully" Mar 6 02:41:12.290841 containerd[1585]: time="2026-03-06T02:41:12.290806671Z" level=info msg="StopPodSandbox for \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\"" Mar 6 02:41:12.291038 containerd[1585]: time="2026-03-06T02:41:12.290883254Z" level=info msg="Container to stop \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 6 02:41:12.291038 containerd[1585]: time="2026-03-06T02:41:12.290900116Z" level=info msg="Container to stop \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 6 02:41:12.291038 containerd[1585]: time="2026-03-06T02:41:12.290912920Z" level=info msg="Container to stop \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 6 02:41:12.291038 containerd[1585]: time="2026-03-06T02:41:12.290925242Z" level=info msg="Container to stop \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 6 02:41:12.291038 containerd[1585]: 
time="2026-03-06T02:41:12.290937486Z" level=info msg="Container to stop \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 6 02:41:12.300370 systemd[1]: cri-containerd-dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73.scope: Deactivated successfully. Mar 6 02:41:12.303555 containerd[1585]: time="2026-03-06T02:41:12.303482388Z" level=info msg="received sandbox exit event container_id:\"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" id:\"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" exit_status:137 exited_at:{seconds:1772764872 nanos:303035983}" monitor_name=podsandbox Mar 6 02:41:12.316348 containerd[1585]: time="2026-03-06T02:41:12.316235861Z" level=info msg="TearDown network for sandbox \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" successfully" Mar 6 02:41:12.316348 containerd[1585]: time="2026-03-06T02:41:12.316283650Z" level=info msg="StopPodSandbox for \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" returns successfully" Mar 6 02:41:12.320224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115-shm.mount: Deactivated successfully. Mar 6 02:41:12.325073 containerd[1585]: time="2026-03-06T02:41:12.324744638Z" level=info msg="received sandbox container exit event sandbox_id:\"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" exit_status:137 exited_at:{seconds:1772764872 nanos:236332984}" monitor_name=criService Mar 6 02:41:12.337779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73-rootfs.mount: Deactivated successfully. 
Mar 6 02:41:12.346626 containerd[1585]: time="2026-03-06T02:41:12.346474084Z" level=info msg="shim disconnected" id=dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73 namespace=k8s.io Mar 6 02:41:12.346626 containerd[1585]: time="2026-03-06T02:41:12.346553492Z" level=warning msg="cleaning up after shim disconnected" id=dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73 namespace=k8s.io Mar 6 02:41:12.346626 containerd[1585]: time="2026-03-06T02:41:12.346563030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 02:41:12.368556 containerd[1585]: time="2026-03-06T02:41:12.368373402Z" level=info msg="received sandbox container exit event sandbox_id:\"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" exit_status:137 exited_at:{seconds:1772764872 nanos:303035983}" monitor_name=criService Mar 6 02:41:12.368728 containerd[1585]: time="2026-03-06T02:41:12.368689438Z" level=info msg="TearDown network for sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" successfully" Mar 6 02:41:12.368728 containerd[1585]: time="2026-03-06T02:41:12.368707451Z" level=info msg="StopPodSandbox for \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" returns successfully" Mar 6 02:41:12.432748 kubelet[2807]: I0306 02:41:12.432682 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-hostproc\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.432748 kubelet[2807]: I0306 02:41:12.432748 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-lib-modules\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.432748 kubelet[2807]: I0306 
02:41:12.432771 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-host-proc-sys-net\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.432748 kubelet[2807]: I0306 02:41:12.432785 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cilium-cgroup\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.432748 kubelet[2807]: I0306 02:41:12.432807 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04615d8d-d639-4265-8b38-27bf180e384c-clustermesh-secrets\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.432748 kubelet[2807]: I0306 02:41:12.432824 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-etc-cni-netd\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.434677 kubelet[2807]: I0306 02:41:12.432840 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-bpf-maps\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.434677 kubelet[2807]: I0306 02:41:12.432853 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-host-proc-sys-kernel\") pod 
\"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.434677 kubelet[2807]: I0306 02:41:12.432870 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04615d8d-d639-4265-8b38-27bf180e384c-hubble-tls\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.434677 kubelet[2807]: I0306 02:41:12.432885 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-xtables-lock\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.434677 kubelet[2807]: I0306 02:41:12.432901 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26ca3e81-9f8d-4bee-808b-95f2420e0514-cilium-config-path\") pod \"26ca3e81-9f8d-4bee-808b-95f2420e0514\" (UID: \"26ca3e81-9f8d-4bee-808b-95f2420e0514\") " Mar 6 02:41:12.434677 kubelet[2807]: I0306 02:41:12.432913 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cni-path\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.434811 kubelet[2807]: I0306 02:41:12.432927 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8thdn\" (UniqueName: \"kubernetes.io/projected/26ca3e81-9f8d-4bee-808b-95f2420e0514-kube-api-access-8thdn\") pod \"26ca3e81-9f8d-4bee-808b-95f2420e0514\" (UID: \"26ca3e81-9f8d-4bee-808b-95f2420e0514\") " Mar 6 02:41:12.434811 kubelet[2807]: I0306 02:41:12.432985 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:41:12.434811 kubelet[2807]: I0306 02:41:12.433050 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:41:12.434811 kubelet[2807]: I0306 02:41:12.432924 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-hostproc" (OuterVolumeSpecName: "hostproc") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:41:12.434811 kubelet[2807]: I0306 02:41:12.433013 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:41:12.434922 kubelet[2807]: I0306 02:41:12.433026 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:41:12.434922 kubelet[2807]: I0306 02:41:12.432924 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:41:12.434922 kubelet[2807]: I0306 02:41:12.433040 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:41:12.434922 kubelet[2807]: I0306 02:41:12.433103 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:41:12.434922 kubelet[2807]: I0306 02:41:12.433015 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9qc8\" (UniqueName: \"kubernetes.io/projected/04615d8d-d639-4265-8b38-27bf180e384c-kube-api-access-r9qc8\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.435147 kubelet[2807]: I0306 02:41:12.433139 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cilium-run\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.435147 kubelet[2807]: I0306 02:41:12.433157 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04615d8d-d639-4265-8b38-27bf180e384c-cilium-config-path\") pod \"04615d8d-d639-4265-8b38-27bf180e384c\" (UID: \"04615d8d-d639-4265-8b38-27bf180e384c\") " Mar 6 02:41:12.435147 kubelet[2807]: I0306 02:41:12.433211 2807 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.435147 kubelet[2807]: I0306 02:41:12.433222 2807 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.435147 kubelet[2807]: I0306 02:41:12.433232 2807 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.435147 kubelet[2807]: I0306 02:41:12.433240 2807 
reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.435147 kubelet[2807]: I0306 02:41:12.433248 2807 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.435147 kubelet[2807]: I0306 02:41:12.433256 2807 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.435323 kubelet[2807]: I0306 02:41:12.433264 2807 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.435323 kubelet[2807]: I0306 02:41:12.433271 2807 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.435323 kubelet[2807]: I0306 02:41:12.434201 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:41:12.435323 kubelet[2807]: I0306 02:41:12.434229 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cni-path" (OuterVolumeSpecName: "cni-path") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 6 02:41:12.439880 kubelet[2807]: I0306 02:41:12.439769 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04615d8d-d639-4265-8b38-27bf180e384c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 02:41:12.440720 kubelet[2807]: I0306 02:41:12.440692 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04615d8d-d639-4265-8b38-27bf180e384c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 02:41:12.441084 kubelet[2807]: I0306 02:41:12.440851 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04615d8d-d639-4265-8b38-27bf180e384c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 02:41:12.441664 kubelet[2807]: I0306 02:41:12.441617 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04615d8d-d639-4265-8b38-27bf180e384c-kube-api-access-r9qc8" (OuterVolumeSpecName: "kube-api-access-r9qc8") pod "04615d8d-d639-4265-8b38-27bf180e384c" (UID: "04615d8d-d639-4265-8b38-27bf180e384c"). InnerVolumeSpecName "kube-api-access-r9qc8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 02:41:12.441847 kubelet[2807]: I0306 02:41:12.441774 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ca3e81-9f8d-4bee-808b-95f2420e0514-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "26ca3e81-9f8d-4bee-808b-95f2420e0514" (UID: "26ca3e81-9f8d-4bee-808b-95f2420e0514"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 02:41:12.443027 kubelet[2807]: I0306 02:41:12.442911 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ca3e81-9f8d-4bee-808b-95f2420e0514-kube-api-access-8thdn" (OuterVolumeSpecName: "kube-api-access-8thdn") pod "26ca3e81-9f8d-4bee-808b-95f2420e0514" (UID: "26ca3e81-9f8d-4bee-808b-95f2420e0514"). InnerVolumeSpecName "kube-api-access-8thdn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 02:41:12.533998 kubelet[2807]: I0306 02:41:12.533882 2807 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26ca3e81-9f8d-4bee-808b-95f2420e0514-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.533998 kubelet[2807]: I0306 02:41:12.533938 2807 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.533998 kubelet[2807]: I0306 02:41:12.533999 2807 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8thdn\" (UniqueName: \"kubernetes.io/projected/26ca3e81-9f8d-4bee-808b-95f2420e0514-kube-api-access-8thdn\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.534194 kubelet[2807]: I0306 02:41:12.534013 2807 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r9qc8\" (UniqueName: \"kubernetes.io/projected/04615d8d-d639-4265-8b38-27bf180e384c-kube-api-access-r9qc8\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.534194 kubelet[2807]: I0306 02:41:12.534026 2807 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04615d8d-d639-4265-8b38-27bf180e384c-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.534194 kubelet[2807]: I0306 02:41:12.534038 2807 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04615d8d-d639-4265-8b38-27bf180e384c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.534194 kubelet[2807]: I0306 02:41:12.534051 2807 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04615d8d-d639-4265-8b38-27bf180e384c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:12.534194 
kubelet[2807]: I0306 02:41:12.534063 2807 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04615d8d-d639-4265-8b38-27bf180e384c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 6 02:41:13.035008 kubelet[2807]: I0306 02:41:13.034881 2807 scope.go:117] "RemoveContainer" containerID="a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c" Mar 6 02:41:13.036874 containerd[1585]: time="2026-03-06T02:41:13.036788315Z" level=info msg="RemoveContainer for \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\"" Mar 6 02:41:13.043225 systemd[1]: Removed slice kubepods-besteffort-pod26ca3e81_9f8d_4bee_808b_95f2420e0514.slice - libcontainer container kubepods-besteffort-pod26ca3e81_9f8d_4bee_808b_95f2420e0514.slice. Mar 6 02:41:13.043467 systemd[1]: kubepods-besteffort-pod26ca3e81_9f8d_4bee_808b_95f2420e0514.slice: Consumed 4.068s CPU time, 30M memory peak, 1.3M read from disk, 4K written to disk. Mar 6 02:41:13.062466 containerd[1585]: time="2026-03-06T02:41:13.062346460Z" level=info msg="RemoveContainer for \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\" returns successfully" Mar 6 02:41:13.062703 kubelet[2807]: I0306 02:41:13.062676 2807 scope.go:117] "RemoveContainer" containerID="a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c" Mar 6 02:41:13.064184 containerd[1585]: time="2026-03-06T02:41:13.064050289Z" level=error msg="ContainerStatus for \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\": not found" Mar 6 02:41:13.064411 kubelet[2807]: E0306 02:41:13.064350 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\": not found" containerID="a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c" Mar 6 02:41:13.064571 kubelet[2807]: I0306 02:41:13.064416 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c"} err="failed to get container status \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a962fed57fa6aa49ae53addc2ce8df8aeab7a5fb138351da899520a1d8cc848c\": not found" Mar 6 02:41:13.064571 kubelet[2807]: I0306 02:41:13.064550 2807 scope.go:117] "RemoveContainer" containerID="7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7" Mar 6 02:41:13.067032 containerd[1585]: time="2026-03-06T02:41:13.066573005Z" level=info msg="RemoveContainer for \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\"" Mar 6 02:41:13.066670 systemd[1]: Removed slice kubepods-burstable-pod04615d8d_d639_4265_8b38_27bf180e384c.slice - libcontainer container kubepods-burstable-pod04615d8d_d639_4265_8b38_27bf180e384c.slice. Mar 6 02:41:13.066775 systemd[1]: kubepods-burstable-pod04615d8d_d639_4265_8b38_27bf180e384c.slice: Consumed 21.895s CPU time, 130.6M memory peak, 284K read from disk, 16.6M written to disk. 
Mar 6 02:41:13.073434 containerd[1585]: time="2026-03-06T02:41:13.073364553Z" level=info msg="RemoveContainer for \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\" returns successfully" Mar 6 02:41:13.073691 kubelet[2807]: I0306 02:41:13.073629 2807 scope.go:117] "RemoveContainer" containerID="e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22" Mar 6 02:41:13.075729 containerd[1585]: time="2026-03-06T02:41:13.075682763Z" level=info msg="RemoveContainer for \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\"" Mar 6 02:41:13.081351 containerd[1585]: time="2026-03-06T02:41:13.081319289Z" level=info msg="RemoveContainer for \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\" returns successfully" Mar 6 02:41:13.081591 kubelet[2807]: I0306 02:41:13.081540 2807 scope.go:117] "RemoveContainer" containerID="5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65" Mar 6 02:41:13.087154 containerd[1585]: time="2026-03-06T02:41:13.087084646Z" level=info msg="RemoveContainer for \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\"" Mar 6 02:41:13.092689 containerd[1585]: time="2026-03-06T02:41:13.092594816Z" level=info msg="RemoveContainer for \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\" returns successfully" Mar 6 02:41:13.092940 kubelet[2807]: I0306 02:41:13.092897 2807 scope.go:117] "RemoveContainer" containerID="a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462" Mar 6 02:41:13.095003 containerd[1585]: time="2026-03-06T02:41:13.094614290Z" level=info msg="RemoveContainer for \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\"" Mar 6 02:41:13.103151 containerd[1585]: time="2026-03-06T02:41:13.103094857Z" level=info msg="RemoveContainer for \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\" returns successfully" Mar 6 02:41:13.103437 kubelet[2807]: I0306 02:41:13.103346 2807 scope.go:117] "RemoveContainer" 
containerID="fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2" Mar 6 02:41:13.105146 containerd[1585]: time="2026-03-06T02:41:13.105056039Z" level=info msg="RemoveContainer for \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\"" Mar 6 02:41:13.108936 containerd[1585]: time="2026-03-06T02:41:13.108838178Z" level=info msg="RemoveContainer for \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\" returns successfully" Mar 6 02:41:13.109147 kubelet[2807]: I0306 02:41:13.109093 2807 scope.go:117] "RemoveContainer" containerID="7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7" Mar 6 02:41:13.109366 containerd[1585]: time="2026-03-06T02:41:13.109333077Z" level=error msg="ContainerStatus for \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\": not found" Mar 6 02:41:13.109817 kubelet[2807]: E0306 02:41:13.109661 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\": not found" containerID="7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7" Mar 6 02:41:13.109817 kubelet[2807]: I0306 02:41:13.109690 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7"} err="failed to get container status \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\": rpc error: code = NotFound desc = an error occurred when try to find container \"7890e34947429ef1d43d614012310cb520a48d20430aab01b4bdc3c221955fd7\": not found" Mar 6 02:41:13.109817 kubelet[2807]: I0306 02:41:13.109709 2807 scope.go:117] "RemoveContainer" 
containerID="e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22" Mar 6 02:41:13.109939 containerd[1585]: time="2026-03-06T02:41:13.109842146Z" level=error msg="ContainerStatus for \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\": not found" Mar 6 02:41:13.110181 kubelet[2807]: E0306 02:41:13.110023 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\": not found" containerID="e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22" Mar 6 02:41:13.110181 kubelet[2807]: I0306 02:41:13.110042 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22"} err="failed to get container status \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\": rpc error: code = NotFound desc = an error occurred when try to find container \"e10f5f63eccb9f9437b4afb649add1e002f7f33091e7fb8054cd768462151e22\": not found" Mar 6 02:41:13.110181 kubelet[2807]: I0306 02:41:13.110057 2807 scope.go:117] "RemoveContainer" containerID="5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65" Mar 6 02:41:13.110279 containerd[1585]: time="2026-03-06T02:41:13.110256659Z" level=error msg="ContainerStatus for \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\": not found" Mar 6 02:41:13.110400 kubelet[2807]: E0306 02:41:13.110367 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\": not found" containerID="5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65" Mar 6 02:41:13.110400 kubelet[2807]: I0306 02:41:13.110385 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65"} err="failed to get container status \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e040e22b2052a724ae56c30af84f415d66542df77bee48f431329a4825c5a65\": not found" Mar 6 02:41:13.110400 kubelet[2807]: I0306 02:41:13.110397 2807 scope.go:117] "RemoveContainer" containerID="a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462" Mar 6 02:41:13.110604 containerd[1585]: time="2026-03-06T02:41:13.110564067Z" level=error msg="ContainerStatus for \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\": not found" Mar 6 02:41:13.110883 kubelet[2807]: E0306 02:41:13.110823 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\": not found" containerID="a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462" Mar 6 02:41:13.110935 kubelet[2807]: I0306 02:41:13.110887 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462"} err="failed to get container status \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"a60f26edb60984d593a4997faebc516477116e68ad676a5d8ea2bbcc60c84462\": not found" Mar 6 02:41:13.110935 kubelet[2807]: I0306 02:41:13.110904 2807 scope.go:117] "RemoveContainer" containerID="fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2" Mar 6 02:41:13.111204 containerd[1585]: time="2026-03-06T02:41:13.111177468Z" level=error msg="ContainerStatus for \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\": not found" Mar 6 02:41:13.111360 kubelet[2807]: E0306 02:41:13.111325 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\": not found" containerID="fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2" Mar 6 02:41:13.111401 kubelet[2807]: I0306 02:41:13.111362 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2"} err="failed to get container status \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb6d2be8ee3edc0d093775d2b96f52704972bd14af0be500b5ad2383f67f9de2\": not found" Mar 6 02:41:13.189365 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73-shm.mount: Deactivated successfully. Mar 6 02:41:13.189564 systemd[1]: var-lib-kubelet-pods-26ca3e81\x2d9f8d\x2d4bee\x2d808b\x2d95f2420e0514-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8thdn.mount: Deactivated successfully. 
Mar 6 02:41:13.189656 systemd[1]: var-lib-kubelet-pods-04615d8d\x2dd639\x2d4265\x2d8b38\x2d27bf180e384c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr9qc8.mount: Deactivated successfully. Mar 6 02:41:13.189761 systemd[1]: var-lib-kubelet-pods-04615d8d\x2dd639\x2d4265\x2d8b38\x2d27bf180e384c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 6 02:41:13.189849 systemd[1]: var-lib-kubelet-pods-04615d8d\x2dd639\x2d4265\x2d8b38\x2d27bf180e384c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 6 02:41:14.059176 sshd[4601]: Connection closed by 10.0.0.1 port 39054 Mar 6 02:41:14.060767 sshd-session[4598]: pam_unix(sshd:session): session closed for user core Mar 6 02:41:14.079270 systemd[1]: sshd@28-10.0.0.110:22-10.0.0.1:39054.service: Deactivated successfully. Mar 6 02:41:14.081468 systemd[1]: session-29.scope: Deactivated successfully. Mar 6 02:41:14.084911 systemd-logind[1568]: Session 29 logged out. Waiting for processes to exit. Mar 6 02:41:14.085328 systemd[1]: Started sshd@29-10.0.0.110:22-10.0.0.1:39056.service - OpenSSH per-connection server daemon (10.0.0.1:39056). Mar 6 02:41:14.106877 systemd-logind[1568]: Removed session 29. Mar 6 02:41:14.157304 sshd[4749]: Accepted publickey for core from 10.0.0.1 port 39056 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:41:14.159890 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:41:14.166546 systemd-logind[1568]: New session 30 of user core. Mar 6 02:41:14.186388 systemd[1]: Started session-30.scope - Session 30 of User core. 
Mar 6 02:41:14.355440 kubelet[2807]: I0306 02:41:14.355304 2807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04615d8d-d639-4265-8b38-27bf180e384c" path="/var/lib/kubelet/pods/04615d8d-d639-4265-8b38-27bf180e384c/volumes" Mar 6 02:41:14.356445 kubelet[2807]: I0306 02:41:14.356368 2807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ca3e81-9f8d-4bee-808b-95f2420e0514" path="/var/lib/kubelet/pods/26ca3e81-9f8d-4bee-808b-95f2420e0514/volumes" Mar 6 02:41:14.633172 sshd[4752]: Connection closed by 10.0.0.1 port 39056 Mar 6 02:41:14.634438 sshd-session[4749]: pam_unix(sshd:session): session closed for user core Mar 6 02:41:14.646018 systemd[1]: sshd@29-10.0.0.110:22-10.0.0.1:39056.service: Deactivated successfully. Mar 6 02:41:14.649677 systemd[1]: session-30.scope: Deactivated successfully. Mar 6 02:41:14.651632 systemd-logind[1568]: Session 30 logged out. Waiting for processes to exit. Mar 6 02:41:14.657248 systemd-logind[1568]: Removed session 30. Mar 6 02:41:14.661371 systemd[1]: Started sshd@30-10.0.0.110:22-10.0.0.1:39058.service - OpenSSH per-connection server daemon (10.0.0.1:39058). Mar 6 02:41:14.705488 systemd[1]: Created slice kubepods-burstable-pod2d423f2d_d9f8_407f_bb41_d13a5e03b9bd.slice - libcontainer container kubepods-burstable-pod2d423f2d_d9f8_407f_bb41_d13a5e03b9bd.slice. Mar 6 02:41:14.734895 sshd[4764]: Accepted publickey for core from 10.0.0.1 port 39058 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:41:14.736603 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:41:14.743614 systemd-logind[1568]: New session 31 of user core. Mar 6 02:41:14.757300 systemd[1]: Started session-31.scope - Session 31 of User core. 
Mar 6 02:41:14.761574 kubelet[2807]: E0306 02:41:14.761441 2807 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 6 02:41:14.773172 sshd[4767]: Connection closed by 10.0.0.1 port 39058 Mar 6 02:41:14.773637 sshd-session[4764]: pam_unix(sshd:session): session closed for user core Mar 6 02:41:14.786670 systemd[1]: sshd@30-10.0.0.110:22-10.0.0.1:39058.service: Deactivated successfully. Mar 6 02:41:14.789424 systemd[1]: session-31.scope: Deactivated successfully. Mar 6 02:41:14.790622 systemd-logind[1568]: Session 31 logged out. Waiting for processes to exit. Mar 6 02:41:14.794035 systemd[1]: Started sshd@31-10.0.0.110:22-10.0.0.1:39064.service - OpenSSH per-connection server daemon (10.0.0.1:39064). Mar 6 02:41:14.794874 systemd-logind[1568]: Removed session 31. Mar 6 02:41:14.853338 kubelet[2807]: I0306 02:41:14.853253 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-hostproc\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853338 kubelet[2807]: I0306 02:41:14.853327 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-cilium-config-path\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853463 kubelet[2807]: I0306 02:41:14.853352 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-hubble-tls\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " 
pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853463 kubelet[2807]: I0306 02:41:14.853367 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6tj9\" (UniqueName: \"kubernetes.io/projected/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-kube-api-access-m6tj9\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853463 kubelet[2807]: I0306 02:41:14.853385 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-cilium-cgroup\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853463 kubelet[2807]: I0306 02:41:14.853398 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-lib-modules\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853463 kubelet[2807]: I0306 02:41:14.853412 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-bpf-maps\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853463 kubelet[2807]: I0306 02:41:14.853444 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-etc-cni-netd\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853629 kubelet[2807]: I0306 02:41:14.853477 2807 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-xtables-lock\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853629 kubelet[2807]: I0306 02:41:14.853524 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-host-proc-sys-net\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853629 kubelet[2807]: I0306 02:41:14.853556 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-cni-path\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853629 kubelet[2807]: I0306 02:41:14.853603 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-cilium-run\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853758 kubelet[2807]: I0306 02:41:14.853645 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-host-proc-sys-kernel\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853758 kubelet[2807]: I0306 02:41:14.853660 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-clustermesh-secrets\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853758 kubelet[2807]: I0306 02:41:14.853689 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2d423f2d-d9f8-407f-bb41-d13a5e03b9bd-cilium-ipsec-secrets\") pod \"cilium-wq9rz\" (UID: \"2d423f2d-d9f8-407f-bb41-d13a5e03b9bd\") " pod="kube-system/cilium-wq9rz" Mar 6 02:41:14.853833 sshd[4774]: Accepted publickey for core from 10.0.0.1 port 39064 ssh2: RSA SHA256:gQgg0JALxZbqjBb2wrGCiVnsME27E+oGvBgjcLvBgwM Mar 6 02:41:14.855674 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 02:41:14.862429 systemd-logind[1568]: New session 32 of user core. Mar 6 02:41:14.871276 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 6 02:41:15.011715 kubelet[2807]: E0306 02:41:15.010551 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:41:15.011844 containerd[1585]: time="2026-03-06T02:41:15.011806974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wq9rz,Uid:2d423f2d-d9f8-407f-bb41-d13a5e03b9bd,Namespace:kube-system,Attempt:0,}" Mar 6 02:41:15.036249 containerd[1585]: time="2026-03-06T02:41:15.036176195Z" level=info msg="connecting to shim ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3" address="unix:///run/containerd/s/df6d9e2a9d3018398162ce92958c8e6b2b13be1eebcbbb2025a7eeb11186d865" namespace=k8s.io protocol=ttrpc version=3 Mar 6 02:41:15.082491 systemd[1]: Started cri-containerd-ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3.scope - libcontainer container ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3. 
Mar 6 02:41:15.127713 containerd[1585]: time="2026-03-06T02:41:15.127641538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wq9rz,Uid:2d423f2d-d9f8-407f-bb41-d13a5e03b9bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\"" Mar 6 02:41:15.128752 kubelet[2807]: E0306 02:41:15.128705 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:41:15.137009 containerd[1585]: time="2026-03-06T02:41:15.135484073Z" level=info msg="CreateContainer within sandbox \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 6 02:41:15.147805 containerd[1585]: time="2026-03-06T02:41:15.147698385Z" level=info msg="Container 9e54cb8c1e86f367c95c27f71b47705c35c19183c71879eaa88a2f174d572e7f: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:41:15.155752 containerd[1585]: time="2026-03-06T02:41:15.155692813Z" level=info msg="CreateContainer within sandbox \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e54cb8c1e86f367c95c27f71b47705c35c19183c71879eaa88a2f174d572e7f\"" Mar 6 02:41:15.156435 containerd[1585]: time="2026-03-06T02:41:15.156347810Z" level=info msg="StartContainer for \"9e54cb8c1e86f367c95c27f71b47705c35c19183c71879eaa88a2f174d572e7f\"" Mar 6 02:41:15.161667 containerd[1585]: time="2026-03-06T02:41:15.161612232Z" level=info msg="connecting to shim 9e54cb8c1e86f367c95c27f71b47705c35c19183c71879eaa88a2f174d572e7f" address="unix:///run/containerd/s/df6d9e2a9d3018398162ce92958c8e6b2b13be1eebcbbb2025a7eeb11186d865" protocol=ttrpc version=3 Mar 6 02:41:15.194229 systemd[1]: Started cri-containerd-9e54cb8c1e86f367c95c27f71b47705c35c19183c71879eaa88a2f174d572e7f.scope - libcontainer container 
9e54cb8c1e86f367c95c27f71b47705c35c19183c71879eaa88a2f174d572e7f. Mar 6 02:41:15.256925 containerd[1585]: time="2026-03-06T02:41:15.256813051Z" level=info msg="StartContainer for \"9e54cb8c1e86f367c95c27f71b47705c35c19183c71879eaa88a2f174d572e7f\" returns successfully" Mar 6 02:41:15.280832 systemd[1]: cri-containerd-9e54cb8c1e86f367c95c27f71b47705c35c19183c71879eaa88a2f174d572e7f.scope: Deactivated successfully. Mar 6 02:41:15.283367 containerd[1585]: time="2026-03-06T02:41:15.283278337Z" level=info msg="received container exit event container_id:\"9e54cb8c1e86f367c95c27f71b47705c35c19183c71879eaa88a2f174d572e7f\" id:\"9e54cb8c1e86f367c95c27f71b47705c35c19183c71879eaa88a2f174d572e7f\" pid:4847 exited_at:{seconds:1772764875 nanos:282585281}" Mar 6 02:41:15.337248 kubelet[2807]: I0306 02:41:15.337174 2807 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-06T02:41:15Z","lastTransitionTime":"2026-03-06T02:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 6 02:41:16.067288 kubelet[2807]: E0306 02:41:16.067036 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:41:16.071987 containerd[1585]: time="2026-03-06T02:41:16.071866348Z" level=info msg="CreateContainer within sandbox \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 6 02:41:16.085032 containerd[1585]: time="2026-03-06T02:41:16.084967300Z" level=info msg="Container 28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:41:16.095339 containerd[1585]: time="2026-03-06T02:41:16.095243750Z" 
level=info msg="CreateContainer within sandbox \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750\"" Mar 6 02:41:16.097047 containerd[1585]: time="2026-03-06T02:41:16.096264319Z" level=info msg="StartContainer for \"28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750\"" Mar 6 02:41:16.097864 containerd[1585]: time="2026-03-06T02:41:16.097791479Z" level=info msg="connecting to shim 28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750" address="unix:///run/containerd/s/df6d9e2a9d3018398162ce92958c8e6b2b13be1eebcbbb2025a7eeb11186d865" protocol=ttrpc version=3 Mar 6 02:41:16.133283 systemd[1]: Started cri-containerd-28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750.scope - libcontainer container 28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750. Mar 6 02:41:16.177910 containerd[1585]: time="2026-03-06T02:41:16.177850213Z" level=info msg="StartContainer for \"28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750\" returns successfully" Mar 6 02:41:16.190343 systemd[1]: cri-containerd-28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750.scope: Deactivated successfully. Mar 6 02:41:16.192591 containerd[1585]: time="2026-03-06T02:41:16.192539142Z" level=info msg="received container exit event container_id:\"28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750\" id:\"28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750\" pid:4892 exited_at:{seconds:1772764876 nanos:192103830}" Mar 6 02:41:16.223651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28faf1f8bf3101c8f1c04777dc0d16fa10f2eee2f066356c50c7879bd4198750-rootfs.mount: Deactivated successfully. 
Mar 6 02:41:17.071734 kubelet[2807]: E0306 02:41:17.071329 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:41:17.084056 containerd[1585]: time="2026-03-06T02:41:17.082347070Z" level=info msg="CreateContainer within sandbox \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 6 02:41:17.099414 containerd[1585]: time="2026-03-06T02:41:17.099318541Z" level=info msg="Container fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:41:17.109264 containerd[1585]: time="2026-03-06T02:41:17.109186878Z" level=info msg="CreateContainer within sandbox \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d\"" Mar 6 02:41:17.110084 containerd[1585]: time="2026-03-06T02:41:17.110043006Z" level=info msg="StartContainer for \"fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d\"" Mar 6 02:41:17.111738 containerd[1585]: time="2026-03-06T02:41:17.111687164Z" level=info msg="connecting to shim fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d" address="unix:///run/containerd/s/df6d9e2a9d3018398162ce92958c8e6b2b13be1eebcbbb2025a7eeb11186d865" protocol=ttrpc version=3 Mar 6 02:41:17.134218 systemd[1]: Started cri-containerd-fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d.scope - libcontainer container fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d. 
Mar 6 02:41:17.249332 containerd[1585]: time="2026-03-06T02:41:17.249286527Z" level=info msg="StartContainer for \"fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d\" returns successfully" Mar 6 02:41:17.250913 systemd[1]: cri-containerd-fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d.scope: Deactivated successfully. Mar 6 02:41:17.254102 containerd[1585]: time="2026-03-06T02:41:17.254068799Z" level=info msg="received container exit event container_id:\"fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d\" id:\"fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d\" pid:4936 exited_at:{seconds:1772764877 nanos:253801182}" Mar 6 02:41:17.282641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa12967480eafd4da3f8b068cb6e49c9c421f187c2438d5fe54625c9b995cc3d-rootfs.mount: Deactivated successfully. Mar 6 02:41:18.078663 kubelet[2807]: E0306 02:41:18.078579 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:41:18.083312 containerd[1585]: time="2026-03-06T02:41:18.083226221Z" level=info msg="CreateContainer within sandbox \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 6 02:41:18.095069 containerd[1585]: time="2026-03-06T02:41:18.094930803Z" level=info msg="Container f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:41:18.108303 containerd[1585]: time="2026-03-06T02:41:18.108220788Z" level=info msg="CreateContainer within sandbox \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43\"" Mar 6 02:41:18.109185 containerd[1585]: 
time="2026-03-06T02:41:18.109000416Z" level=info msg="StartContainer for \"f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43\"" Mar 6 02:41:18.110590 containerd[1585]: time="2026-03-06T02:41:18.110463106Z" level=info msg="connecting to shim f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43" address="unix:///run/containerd/s/df6d9e2a9d3018398162ce92958c8e6b2b13be1eebcbbb2025a7eeb11186d865" protocol=ttrpc version=3 Mar 6 02:41:18.140240 systemd[1]: Started cri-containerd-f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43.scope - libcontainer container f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43. Mar 6 02:41:18.183120 systemd[1]: cri-containerd-f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43.scope: Deactivated successfully. Mar 6 02:41:18.185940 containerd[1585]: time="2026-03-06T02:41:18.185876969Z" level=info msg="received container exit event container_id:\"f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43\" id:\"f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43\" pid:4975 exited_at:{seconds:1772764878 nanos:183908044}" Mar 6 02:41:18.187897 containerd[1585]: time="2026-03-06T02:41:18.187838146Z" level=info msg="StartContainer for \"f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43\" returns successfully" Mar 6 02:41:18.216441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7119906261a2546cc5373a21eb559fd556ffe90cd315f9b32d75d67b491fc43-rootfs.mount: Deactivated successfully. 
Mar 6 02:41:19.086125 kubelet[2807]: E0306 02:41:19.085927 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:41:19.093535 containerd[1585]: time="2026-03-06T02:41:19.093429702Z" level=info msg="CreateContainer within sandbox \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 6 02:41:19.111886 containerd[1585]: time="2026-03-06T02:41:19.111832647Z" level=info msg="Container d0e5f7816cbf3fb59b2e95e7a540197742b30a2a037692e29ffd25136191d896: CDI devices from CRI Config.CDIDevices: []" Mar 6 02:41:19.115399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859731987.mount: Deactivated successfully. Mar 6 02:41:19.122488 containerd[1585]: time="2026-03-06T02:41:19.122373195Z" level=info msg="CreateContainer within sandbox \"ca466d9f37091e05b0f862b7192843ec9ebf6145381c2ba4d4cabbb9fedce5a3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0e5f7816cbf3fb59b2e95e7a540197742b30a2a037692e29ffd25136191d896\"" Mar 6 02:41:19.123563 containerd[1585]: time="2026-03-06T02:41:19.123352151Z" level=info msg="StartContainer for \"d0e5f7816cbf3fb59b2e95e7a540197742b30a2a037692e29ffd25136191d896\"" Mar 6 02:41:19.124927 containerd[1585]: time="2026-03-06T02:41:19.124842962Z" level=info msg="connecting to shim d0e5f7816cbf3fb59b2e95e7a540197742b30a2a037692e29ffd25136191d896" address="unix:///run/containerd/s/df6d9e2a9d3018398162ce92958c8e6b2b13be1eebcbbb2025a7eeb11186d865" protocol=ttrpc version=3 Mar 6 02:41:19.161313 systemd[1]: Started cri-containerd-d0e5f7816cbf3fb59b2e95e7a540197742b30a2a037692e29ffd25136191d896.scope - libcontainer container d0e5f7816cbf3fb59b2e95e7a540197742b30a2a037692e29ffd25136191d896. 
Mar 6 02:41:19.237371 containerd[1585]: time="2026-03-06T02:41:19.237291976Z" level=info msg="StartContainer for \"d0e5f7816cbf3fb59b2e95e7a540197742b30a2a037692e29ffd25136191d896\" returns successfully" Mar 6 02:41:19.825044 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Mar 6 02:41:20.094373 kubelet[2807]: E0306 02:41:20.093491 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:41:20.113287 kubelet[2807]: I0306 02:41:20.113146 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wq9rz" podStartSLOduration=6.113110189 podStartE2EDuration="6.113110189s" podCreationTimestamp="2026-03-06 02:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 02:41:20.113103383 +0000 UTC m=+176.091614420" watchObservedRunningTime="2026-03-06 02:41:20.113110189 +0000 UTC m=+176.091621186" Mar 6 02:41:21.096904 kubelet[2807]: E0306 02:41:21.096387 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:41:23.416282 systemd-networkd[1447]: lxc_health: Link UP Mar 6 02:41:23.417438 systemd-networkd[1447]: lxc_health: Gained carrier Mar 6 02:41:24.580686 containerd[1585]: time="2026-03-06T02:41:24.580598467Z" level=info msg="StopPodSandbox for \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\"" Mar 6 02:41:24.581358 containerd[1585]: time="2026-03-06T02:41:24.580815332Z" level=info msg="TearDown network for sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" successfully" Mar 6 02:41:24.581358 containerd[1585]: time="2026-03-06T02:41:24.580835641Z" level=info msg="StopPodSandbox for 
\"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" returns successfully" Mar 6 02:41:24.581849 containerd[1585]: time="2026-03-06T02:41:24.581733997Z" level=info msg="RemovePodSandbox for \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\"" Mar 6 02:41:24.581849 containerd[1585]: time="2026-03-06T02:41:24.581805028Z" level=info msg="Forcibly stopping sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\"" Mar 6 02:41:24.582115 containerd[1585]: time="2026-03-06T02:41:24.582011263Z" level=info msg="TearDown network for sandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" successfully" Mar 6 02:41:24.584412 containerd[1585]: time="2026-03-06T02:41:24.584347402Z" level=info msg="Ensure that sandbox dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73 in task-service has been cleanup successfully" Mar 6 02:41:24.589990 containerd[1585]: time="2026-03-06T02:41:24.589704856Z" level=info msg="RemovePodSandbox \"dae3493f0d3e328ae4c66428a70c782340c9488f4186204f068dceff3d01da73\" returns successfully" Mar 6 02:41:24.590444 containerd[1585]: time="2026-03-06T02:41:24.590372517Z" level=info msg="StopPodSandbox for \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\"" Mar 6 02:41:24.590585 containerd[1585]: time="2026-03-06T02:41:24.590536002Z" level=info msg="TearDown network for sandbox \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" successfully" Mar 6 02:41:24.590585 containerd[1585]: time="2026-03-06T02:41:24.590549748Z" level=info msg="StopPodSandbox for \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" returns successfully" Mar 6 02:41:24.593005 containerd[1585]: time="2026-03-06T02:41:24.591069297Z" level=info msg="RemovePodSandbox for \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\"" Mar 6 02:41:24.593005 containerd[1585]: time="2026-03-06T02:41:24.591099293Z" level=info msg="Forcibly stopping sandbox 
\"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\"" Mar 6 02:41:24.593005 containerd[1585]: time="2026-03-06T02:41:24.591161469Z" level=info msg="TearDown network for sandbox \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" successfully" Mar 6 02:41:24.593005 containerd[1585]: time="2026-03-06T02:41:24.592910262Z" level=info msg="Ensure that sandbox 47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115 in task-service has been cleanup successfully" Mar 6 02:41:24.598443 containerd[1585]: time="2026-03-06T02:41:24.598245293Z" level=info msg="RemovePodSandbox \"47abb40d17ece9443498d3258ef918bc63afe378b642317a285c4198a6b1a115\" returns successfully" Mar 6 02:41:24.619111 systemd-networkd[1447]: lxc_health: Gained IPv6LL Mar 6 02:41:25.013678 kubelet[2807]: E0306 02:41:25.013249 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:41:25.112110 kubelet[2807]: E0306 02:41:25.111601 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 02:41:29.959318 sshd[4777]: Connection closed by 10.0.0.1 port 39064 Mar 6 02:41:29.960382 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Mar 6 02:41:29.968935 systemd[1]: sshd@31-10.0.0.110:22-10.0.0.1:39064.service: Deactivated successfully. Mar 6 02:41:29.974733 systemd[1]: session-32.scope: Deactivated successfully. Mar 6 02:41:29.976112 systemd-logind[1568]: Session 32 logged out. Waiting for processes to exit. Mar 6 02:41:29.978133 systemd-logind[1568]: Removed session 32.