Mar 13 00:46:10.071207 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026 Mar 13 00:46:10.071226 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d Mar 13 00:46:10.071237 kernel: BIOS-provided physical RAM map: Mar 13 00:46:10.071243 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 13 00:46:10.071248 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 13 00:46:10.071254 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 13 00:46:10.071260 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 13 00:46:10.071266 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 13 00:46:10.071272 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 13 00:46:10.071277 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 13 00:46:10.071283 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 13 00:46:10.071291 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 13 00:46:10.071297 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 13 00:46:10.071302 kernel: NX (Execute Disable) protection: active Mar 13 00:46:10.071309 kernel: APIC: Static calls initialized Mar 13 00:46:10.071315 kernel: SMBIOS 2.8 present. 
Mar 13 00:46:10.071324 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 13 00:46:10.071330 kernel: DMI: Memory slots populated: 1/1 Mar 13 00:46:10.071335 kernel: Hypervisor detected: KVM Mar 13 00:46:10.071341 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 13 00:46:10.071347 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 13 00:46:10.071353 kernel: kvm-clock: using sched offset of 5480794539 cycles Mar 13 00:46:10.071359 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 13 00:46:10.071366 kernel: tsc: Detected 2445.426 MHz processor Mar 13 00:46:10.071372 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 13 00:46:10.071379 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 13 00:46:10.071387 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 13 00:46:10.071393 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 13 00:46:10.071399 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 13 00:46:10.071405 kernel: Using GB pages for direct mapping Mar 13 00:46:10.071412 kernel: ACPI: Early table checksum verification disabled Mar 13 00:46:10.071418 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 13 00:46:10.071424 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 13 00:46:10.071430 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 13 00:46:10.071436 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 13 00:46:10.071445 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 13 00:46:10.071451 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 13 00:46:10.071457 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 13 00:46:10.071463 kernel: ACPI: MCFG 
0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 13 00:46:10.071469 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 13 00:46:10.071479 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 13 00:46:10.071485 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 13 00:46:10.071494 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 13 00:46:10.071500 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 13 00:46:10.071507 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 13 00:46:10.071513 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 13 00:46:10.071519 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 13 00:46:10.071525 kernel: No NUMA configuration found Mar 13 00:46:10.071532 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 13 00:46:10.071540 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Mar 13 00:46:10.071547 kernel: Zone ranges: Mar 13 00:46:10.071553 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 13 00:46:10.071560 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 13 00:46:10.071566 kernel: Normal empty Mar 13 00:46:10.071572 kernel: Device empty Mar 13 00:46:10.071578 kernel: Movable zone start for each node Mar 13 00:46:10.071585 kernel: Early memory node ranges Mar 13 00:46:10.071626 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 13 00:46:10.071634 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 13 00:46:10.071643 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Mar 13 00:46:10.071649 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 13 00:46:10.071656 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 13 00:46:10.071662 kernel: On node 0, zone DMA32: 12324 pages in unavailable 
ranges Mar 13 00:46:10.071669 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 13 00:46:10.071675 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 13 00:46:10.071681 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 13 00:46:10.071688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 13 00:46:10.071694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 13 00:46:10.071702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 13 00:46:10.071709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 13 00:46:10.071715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 13 00:46:10.071722 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 13 00:46:10.071728 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 13 00:46:10.071734 kernel: TSC deadline timer available Mar 13 00:46:10.071741 kernel: CPU topo: Max. logical packages: 1 Mar 13 00:46:10.071747 kernel: CPU topo: Max. logical dies: 1 Mar 13 00:46:10.071753 kernel: CPU topo: Max. dies per package: 1 Mar 13 00:46:10.071759 kernel: CPU topo: Max. threads per core: 1 Mar 13 00:46:10.071768 kernel: CPU topo: Num. cores per package: 4 Mar 13 00:46:10.071774 kernel: CPU topo: Num. 
threads per package: 4 Mar 13 00:46:10.071781 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Mar 13 00:46:10.071787 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 13 00:46:10.071793 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 13 00:46:10.071799 kernel: kvm-guest: setup PV sched yield Mar 13 00:46:10.071806 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 13 00:46:10.071812 kernel: Booting paravirtualized kernel on KVM Mar 13 00:46:10.071819 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 13 00:46:10.071827 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 13 00:46:10.071834 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Mar 13 00:46:10.071840 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Mar 13 00:46:10.071846 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 13 00:46:10.071853 kernel: kvm-guest: PV spinlocks enabled Mar 13 00:46:10.071859 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 13 00:46:10.071866 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d Mar 13 00:46:10.071873 kernel: random: crng init done Mar 13 00:46:10.071879 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 13 00:46:10.071888 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 13 00:46:10.071894 kernel: Fallback order for Node 0: 0 Mar 13 00:46:10.071901 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 642938 Mar 13 00:46:10.071907 kernel: Policy zone: DMA32 Mar 13 00:46:10.071913 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 13 00:46:10.071920 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 13 00:46:10.071926 kernel: ftrace: allocating 40099 entries in 157 pages Mar 13 00:46:10.071933 kernel: ftrace: allocated 157 pages with 5 groups Mar 13 00:46:10.071939 kernel: Dynamic Preempt: voluntary Mar 13 00:46:10.071947 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 13 00:46:10.071954 kernel: rcu: RCU event tracing is enabled. Mar 13 00:46:10.071961 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 13 00:46:10.071968 kernel: Trampoline variant of Tasks RCU enabled. Mar 13 00:46:10.071974 kernel: Rude variant of Tasks RCU enabled. Mar 13 00:46:10.071980 kernel: Tracing variant of Tasks RCU enabled. Mar 13 00:46:10.071987 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 13 00:46:10.071993 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 13 00:46:10.072000 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 13 00:46:10.072008 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 13 00:46:10.072015 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 13 00:46:10.072022 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 13 00:46:10.072080 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 13 00:46:10.072095 kernel: Console: colour VGA+ 80x25 Mar 13 00:46:10.072104 kernel: printk: legacy console [ttyS0] enabled Mar 13 00:46:10.072111 kernel: ACPI: Core revision 20240827 Mar 13 00:46:10.072118 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 13 00:46:10.072125 kernel: APIC: Switch to symmetric I/O mode setup Mar 13 00:46:10.072131 kernel: x2apic enabled Mar 13 00:46:10.072138 kernel: APIC: Switched APIC routing to: physical x2apic Mar 13 00:46:10.072145 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 13 00:46:10.072154 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 13 00:46:10.072160 kernel: kvm-guest: setup PV IPIs Mar 13 00:46:10.072167 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 13 00:46:10.072174 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Mar 13 00:46:10.072180 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 13 00:46:10.072189 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 13 00:46:10.072196 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 13 00:46:10.072203 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 13 00:46:10.072209 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 13 00:46:10.072216 kernel: Spectre V2 : Mitigation: Retpolines Mar 13 00:46:10.072223 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 13 00:46:10.072229 kernel: Speculative Store Bypass: Vulnerable Mar 13 00:46:10.072236 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 13 00:46:10.072246 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 13 00:46:10.072252 kernel: active return thunk: srso_alias_return_thunk Mar 13 00:46:10.072259 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 13 00:46:10.072266 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 13 00:46:10.072272 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 13 00:46:10.072279 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 13 00:46:10.072286 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 13 00:46:10.072292 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 13 00:46:10.072324 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 13 00:46:10.072335 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 13 00:46:10.072342 kernel: Freeing SMP alternatives memory: 32K Mar 13 00:46:10.072348 kernel: pid_max: default: 32768 minimum: 301 Mar 13 00:46:10.072355 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Mar 13 00:46:10.072362 kernel: landlock: Up and running. Mar 13 00:46:10.072368 kernel: SELinux: Initializing. Mar 13 00:46:10.072375 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 13 00:46:10.072382 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 13 00:46:10.072389 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 13 00:46:10.072397 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 13 00:46:10.072429 kernel: signal: max sigframe size: 1776 Mar 13 00:46:10.072436 kernel: rcu: Hierarchical SRCU implementation. Mar 13 00:46:10.072443 kernel: rcu: Max phase no-delay instances is 400. 
Mar 13 00:46:10.072450 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Mar 13 00:46:10.072456 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 13 00:46:10.072463 kernel: smp: Bringing up secondary CPUs ... Mar 13 00:46:10.072470 kernel: smpboot: x86: Booting SMP configuration: Mar 13 00:46:10.072476 kernel: .... node #0, CPUs: #1 #2 #3 Mar 13 00:46:10.072486 kernel: smp: Brought up 1 node, 4 CPUs Mar 13 00:46:10.072492 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 13 00:46:10.072500 kernel: Memory: 2420724K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved) Mar 13 00:46:10.072506 kernel: devtmpfs: initialized Mar 13 00:46:10.072513 kernel: x86/mm: Memory block size: 128MB Mar 13 00:46:10.072543 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 13 00:46:10.072550 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 13 00:46:10.072557 kernel: pinctrl core: initialized pinctrl subsystem Mar 13 00:46:10.072563 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 13 00:46:10.072572 kernel: audit: initializing netlink subsys (disabled) Mar 13 00:46:10.072579 kernel: audit: type=2000 audit(1773362765.831:1): state=initialized audit_enabled=0 res=1 Mar 13 00:46:10.072586 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 13 00:46:10.072621 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 13 00:46:10.072629 kernel: cpuidle: using governor menu Mar 13 00:46:10.072636 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 13 00:46:10.072642 kernel: dca service started, version 1.12.1 Mar 13 00:46:10.072649 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Mar 13 00:46:10.072656 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry 
Mar 13 00:46:10.072665 kernel: PCI: Using configuration type 1 for base access Mar 13 00:46:10.072672 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 13 00:46:10.072679 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 13 00:46:10.072685 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 13 00:46:10.072692 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 13 00:46:10.072699 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 13 00:46:10.072705 kernel: ACPI: Added _OSI(Module Device) Mar 13 00:46:10.072712 kernel: ACPI: Added _OSI(Processor Device) Mar 13 00:46:10.072719 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 13 00:46:10.072727 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 13 00:46:10.072734 kernel: ACPI: Interpreter enabled Mar 13 00:46:10.072741 kernel: ACPI: PM: (supports S0 S3 S5) Mar 13 00:46:10.072747 kernel: ACPI: Using IOAPIC for interrupt routing Mar 13 00:46:10.072754 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 13 00:46:10.072761 kernel: PCI: Using E820 reservations for host bridge windows Mar 13 00:46:10.072767 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 13 00:46:10.072774 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 13 00:46:10.072958 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 13 00:46:10.073153 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 13 00:46:10.073276 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 13 00:46:10.073286 kernel: PCI host bridge to bus 0000:00 Mar 13 00:46:10.073407 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 13 00:46:10.073516 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 13 00:46:10.073668 kernel: 
pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 13 00:46:10.073783 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 13 00:46:10.073890 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 13 00:46:10.073997 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 13 00:46:10.074190 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 13 00:46:10.074325 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Mar 13 00:46:10.074450 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Mar 13 00:46:10.074572 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Mar 13 00:46:10.074733 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Mar 13 00:46:10.074850 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Mar 13 00:46:10.074965 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 13 00:46:10.075155 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Mar 13 00:46:10.075276 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Mar 13 00:46:10.075390 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Mar 13 00:46:10.075510 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Mar 13 00:46:10.075676 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Mar 13 00:46:10.075795 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Mar 13 00:46:10.075911 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Mar 13 00:46:10.076209 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Mar 13 00:46:10.076337 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Mar 13 00:46:10.076454 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Mar 13 00:46:10.076574 kernel: pci 0000:00:04.0: BAR 
1 [mem 0xfebd3000-0xfebd3fff] Mar 13 00:46:10.076733 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 13 00:46:10.076855 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Mar 13 00:46:10.077013 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Mar 13 00:46:10.077237 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 13 00:46:10.077372 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Mar 13 00:46:10.077493 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Mar 13 00:46:10.077648 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Mar 13 00:46:10.077776 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Mar 13 00:46:10.077890 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Mar 13 00:46:10.077899 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 13 00:46:10.077907 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 13 00:46:10.077913 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 13 00:46:10.077920 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 13 00:46:10.077930 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 13 00:46:10.077937 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 13 00:46:10.077943 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 13 00:46:10.077950 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 13 00:46:10.077957 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 13 00:46:10.077964 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 13 00:46:10.077970 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 13 00:46:10.077977 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 13 00:46:10.077984 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 13 00:46:10.077992 kernel: 
ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 13 00:46:10.077999 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 13 00:46:10.078006 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 13 00:46:10.078012 kernel: iommu: Default domain type: Translated Mar 13 00:46:10.078019 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 13 00:46:10.078085 kernel: PCI: Using ACPI for IRQ routing Mar 13 00:46:10.078093 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 13 00:46:10.078099 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 13 00:46:10.078106 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 13 00:46:10.078232 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 13 00:46:10.078347 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 13 00:46:10.078460 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 13 00:46:10.078469 kernel: vgaarb: loaded Mar 13 00:46:10.078476 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 13 00:46:10.078483 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 13 00:46:10.078490 kernel: clocksource: Switched to clocksource kvm-clock Mar 13 00:46:10.078496 kernel: VFS: Disk quotas dquot_6.6.0 Mar 13 00:46:10.078506 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 13 00:46:10.078513 kernel: pnp: PnP ACPI init Mar 13 00:46:10.078678 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 13 00:46:10.078689 kernel: pnp: PnP ACPI: found 6 devices Mar 13 00:46:10.078696 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 13 00:46:10.078703 kernel: NET: Registered PF_INET protocol family Mar 13 00:46:10.078710 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 13 00:46:10.078716 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) 
Mar 13 00:46:10.078726 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 13 00:46:10.078733 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 13 00:46:10.078740 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 13 00:46:10.078747 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 13 00:46:10.078754 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 00:46:10.078760 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 00:46:10.078767 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 13 00:46:10.078774 kernel: NET: Registered PF_XDP protocol family Mar 13 00:46:10.078881 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 13 00:46:10.078991 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 13 00:46:10.079161 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 13 00:46:10.079270 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 13 00:46:10.079376 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 13 00:46:10.079482 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 13 00:46:10.079491 kernel: PCI: CLS 0 bytes, default 64 Mar 13 00:46:10.079498 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Mar 13 00:46:10.079505 kernel: Initialise system trusted keyrings Mar 13 00:46:10.079511 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 13 00:46:10.079522 kernel: Key type asymmetric registered Mar 13 00:46:10.079528 kernel: Asymmetric key parser 'x509' registered Mar 13 00:46:10.079535 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 13 00:46:10.079542 kernel: io scheduler mq-deadline registered Mar 13 00:46:10.079549 kernel: io scheduler kyber registered Mar 13 
00:46:10.079555 kernel: io scheduler bfq registered Mar 13 00:46:10.079562 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 13 00:46:10.079569 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 13 00:46:10.079576 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 13 00:46:10.079585 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 13 00:46:10.079627 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 13 00:46:10.079635 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 13 00:46:10.079642 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 13 00:46:10.079648 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 13 00:46:10.079655 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 13 00:46:10.079662 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 13 00:46:10.079788 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 13 00:46:10.079912 kernel: rtc_cmos 00:04: registered as rtc0 Mar 13 00:46:10.080082 kernel: rtc_cmos 00:04: setting system clock to 2026-03-13T00:46:09 UTC (1773362769) Mar 13 00:46:10.080206 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 13 00:46:10.080216 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 13 00:46:10.080223 kernel: NET: Registered PF_INET6 protocol family Mar 13 00:46:10.080230 kernel: Segment Routing with IPv6 Mar 13 00:46:10.080237 kernel: In-situ OAM (IOAM) with IPv6 Mar 13 00:46:10.080243 kernel: NET: Registered PF_PACKET protocol family Mar 13 00:46:10.080250 kernel: Key type dns_resolver registered Mar 13 00:46:10.080260 kernel: IPI shorthand broadcast: enabled Mar 13 00:46:10.080267 kernel: sched_clock: Marking stable (3178023082, 367963665)->(3667122829, -121136082) Mar 13 00:46:10.080274 kernel: registered taskstats version 1 Mar 13 00:46:10.080281 kernel: Loading compiled-in X.509 certificates Mar 13 00:46:10.080288 kernel: Loaded 
X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8' Mar 13 00:46:10.080294 kernel: Demotion targets for Node 0: null Mar 13 00:46:10.080301 kernel: Key type .fscrypt registered Mar 13 00:46:10.080307 kernel: Key type fscrypt-provisioning registered Mar 13 00:46:10.080314 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 13 00:46:10.080323 kernel: ima: Allocated hash algorithm: sha1 Mar 13 00:46:10.080329 kernel: ima: No architecture policies found Mar 13 00:46:10.080336 kernel: clk: Disabling unused clocks Mar 13 00:46:10.080343 kernel: Warning: unable to open an initial console. Mar 13 00:46:10.080350 kernel: Freeing unused kernel image (initmem) memory: 46200K Mar 13 00:46:10.080357 kernel: Write protecting the kernel read-only data: 40960k Mar 13 00:46:10.080363 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 13 00:46:10.080397 kernel: Run /init as init process Mar 13 00:46:10.080407 kernel: with arguments: Mar 13 00:46:10.080414 kernel: /init Mar 13 00:46:10.080420 kernel: with environment: Mar 13 00:46:10.080427 kernel: HOME=/ Mar 13 00:46:10.080433 kernel: TERM=linux Mar 13 00:46:10.080441 systemd[1]: Successfully made /usr/ read-only. Mar 13 00:46:10.080451 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:46:10.080458 systemd[1]: Detected virtualization kvm. Mar 13 00:46:10.080467 systemd[1]: Detected architecture x86-64. Mar 13 00:46:10.080498 systemd[1]: Running in initrd. Mar 13 00:46:10.080505 systemd[1]: No hostname configured, using default hostname. Mar 13 00:46:10.080513 systemd[1]: Hostname set to . 
Mar 13 00:46:10.080520 systemd[1]: Initializing machine ID from VM UUID. Mar 13 00:46:10.080527 systemd[1]: Queued start job for default target initrd.target. Mar 13 00:46:10.080534 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:46:10.080574 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:46:10.080652 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 13 00:46:10.080661 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:46:10.080668 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 13 00:46:10.080676 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 13 00:46:10.080685 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 13 00:46:10.080718 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 13 00:46:10.080726 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:46:10.080755 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:46:10.080795 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:46:10.080823 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:46:10.080831 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:46:10.080838 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:46:10.080845 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:46:10.080855 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:46:10.080862 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Mar 13 00:46:10.080870 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:46:10.080877 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:46:10.080885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:46:10.080892 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:46:10.080899 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:46:10.080906 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:46:10.080914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:46:10.080923 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:46:10.080931 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:46:10.080938 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:46:10.080945 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:46:10.080952 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:46:10.080960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:46:10.080968 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:46:10.081075 systemd-journald[203]: Collecting audit messages is disabled.
Mar 13 00:46:10.081123 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:46:10.081132 systemd-journald[203]: Journal started
Mar 13 00:46:10.081153 systemd-journald[203]: Runtime Journal (/run/log/journal/632a474ca1794d83ab8074b1b9d201e2) is 6M, max 48.3M, 42.2M free.
Mar 13 00:46:10.081130 systemd-modules-load[204]: Inserted module 'overlay'
Mar 13 00:46:10.091758 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:46:10.091775 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:46:10.097549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:46:10.103389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:46:10.124117 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:46:10.126178 kernel: Bridge firewalling registered
Mar 13 00:46:10.126097 systemd-modules-load[204]: Inserted module 'br_netfilter'
Mar 13 00:46:10.128644 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:46:10.134634 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:46:10.311303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:46:10.323443 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:46:10.333113 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:46:10.342677 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:46:10.357451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:46:10.367759 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:46:10.383886 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:46:10.388865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:46:10.392996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:46:10.405239 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:46:10.414888 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:46:10.456530 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:46:10.472132 systemd-resolved[243]: Positive Trust Anchors:
Mar 13 00:46:10.472141 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:46:10.472166 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:46:10.474532 systemd-resolved[243]: Defaulting to hostname 'linux'.
Mar 13 00:46:10.475718 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:46:10.476352 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:46:10.627127 kernel: SCSI subsystem initialized
Mar 13 00:46:10.637215 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:46:10.649140 kernel: iscsi: registered transport (tcp)
Mar 13 00:46:10.672069 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:46:10.672107 kernel: QLogic iSCSI HBA Driver
Mar 13 00:46:10.697928 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:46:10.718465 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:46:10.722972 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:46:10.793314 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:46:10.798661 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:46:10.866143 kernel: raid6: avx2x4 gen() 34700 MB/s
Mar 13 00:46:10.884142 kernel: raid6: avx2x2 gen() 31727 MB/s
Mar 13 00:46:10.904548 kernel: raid6: avx2x1 gen() 22103 MB/s
Mar 13 00:46:10.904649 kernel: raid6: using algorithm avx2x4 gen() 34700 MB/s
Mar 13 00:46:10.925010 kernel: raid6: .... xor() 4926 MB/s, rmw enabled
Mar 13 00:46:10.925085 kernel: raid6: using avx2x2 recovery algorithm
Mar 13 00:46:10.946123 kernel: xor: automatically using best checksumming function avx
Mar 13 00:46:11.108118 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:46:11.118294 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:46:11.123808 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:46:11.161869 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Mar 13 00:46:11.167929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:46:11.170282 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:46:11.205663 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Mar 13 00:46:11.250686 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:46:11.260215 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:46:11.345893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:46:11.355261 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:46:11.396126 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 13 00:46:11.439181 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 13 00:46:11.441008 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 13 00:46:11.443334 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:46:11.461288 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:46:11.461308 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:46:11.461322 kernel: GPT:9289727 != 19775487
Mar 13 00:46:11.461334 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:46:11.461346 kernel: GPT:9289727 != 19775487
Mar 13 00:46:11.444168 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:46:11.470898 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:46:11.470975 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:46:11.471850 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:46:11.481831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:46:11.489357 kernel: libata version 3.00 loaded.
Mar 13 00:46:11.494295 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:46:11.503178 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:46:11.503200 kernel: ahci 0000:00:1f.2: version 3.0
Mar 13 00:46:11.503380 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 13 00:46:11.515578 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 13 00:46:11.515797 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 13 00:46:11.515941 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 13 00:46:11.527529 kernel: scsi host0: ahci
Mar 13 00:46:11.527803 kernel: scsi host1: ahci
Mar 13 00:46:11.534732 kernel: scsi host2: ahci
Mar 13 00:46:11.540090 kernel: scsi host3: ahci
Mar 13 00:46:11.547081 kernel: scsi host4: ahci
Mar 13 00:46:11.552126 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 13 00:46:11.574737 kernel: scsi host5: ahci
Mar 13 00:46:11.574948 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Mar 13 00:46:11.574960 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Mar 13 00:46:11.574976 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Mar 13 00:46:11.574985 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Mar 13 00:46:11.574995 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Mar 13 00:46:11.575004 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Mar 13 00:46:11.580512 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 13 00:46:11.771997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:46:11.786592 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 13 00:46:11.797768 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 13 00:46:11.805930 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 00:46:11.808725 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:46:11.842281 disk-uuid[621]: Primary Header is updated.
Mar 13 00:46:11.842281 disk-uuid[621]: Secondary Entries is updated.
Mar 13 00:46:11.842281 disk-uuid[621]: Secondary Header is updated.
Mar 13 00:46:11.852910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:46:11.876072 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 13 00:46:11.876108 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 13 00:46:11.880101 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 13 00:46:11.885221 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 13 00:46:11.885244 kernel: ata3.00: LPM support broken, forcing max_power
Mar 13 00:46:11.890442 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 13 00:46:11.890465 kernel: ata3.00: applying bridge limits
Mar 13 00:46:11.893126 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 13 00:46:11.896161 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 13 00:46:11.903967 kernel: ata3.00: LPM support broken, forcing max_power
Mar 13 00:46:11.904360 kernel: ata3.00: configured for UDMA/100
Mar 13 00:46:11.909257 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 13 00:46:11.961262 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 13 00:46:11.961521 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 13 00:46:11.974323 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 13 00:46:12.306324 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:46:12.311123 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:46:12.321572 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:46:12.325872 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:46:12.334281 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:46:12.369390 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:46:12.867107 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:46:12.867588 disk-uuid[622]: The operation has completed successfully.
Mar 13 00:46:12.905190 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:46:12.908334 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:46:12.945594 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:46:12.978782 sh[650]: Success
Mar 13 00:46:13.005194 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:46:13.005229 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:46:13.009193 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:46:13.023078 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 13 00:46:13.060212 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:46:13.062703 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:46:13.084197 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:46:13.105221 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (662)
Mar 13 00:46:13.105239 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:46:13.105250 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:46:13.116654 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:46:13.116674 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:46:13.118496 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:46:13.122105 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:46:13.124524 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:46:13.125506 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:46:13.132202 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:46:13.175135 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (693)
Mar 13 00:46:13.182685 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:46:13.182708 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:46:13.190925 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:46:13.190948 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:46:13.200201 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:46:13.203788 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:46:13.211696 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:46:13.300544 ignition[748]: Ignition 2.22.0
Mar 13 00:46:13.300598 ignition[748]: Stage: fetch-offline
Mar 13 00:46:13.302818 ignition[748]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:46:13.302833 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:46:13.302962 ignition[748]: parsed url from cmdline: ""
Mar 13 00:46:13.302967 ignition[748]: no config URL provided
Mar 13 00:46:13.302973 ignition[748]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:46:13.302982 ignition[748]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:46:13.303131 ignition[748]: op(1): [started] loading QEMU firmware config module
Mar 13 00:46:13.303137 ignition[748]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 13 00:46:13.327768 ignition[748]: op(1): [finished] loading QEMU firmware config module
Mar 13 00:46:13.342853 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:46:13.349356 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:46:13.396472 systemd-networkd[839]: lo: Link UP
Mar 13 00:46:13.396510 systemd-networkd[839]: lo: Gained carrier
Mar 13 00:46:13.398237 systemd-networkd[839]: Enumeration completed
Mar 13 00:46:13.398347 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:46:13.399237 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:46:13.399242 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:46:13.401287 systemd-networkd[839]: eth0: Link UP
Mar 13 00:46:13.401428 systemd-networkd[839]: eth0: Gained carrier
Mar 13 00:46:13.401438 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:46:13.409713 systemd[1]: Reached target network.target - Network.
Mar 13 00:46:13.468105 systemd-networkd[839]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 13 00:46:13.632228 ignition[748]: parsing config with SHA512: 56976ae1ba76e2a8d54b760cf2d5f2f649115db545987c50350805cd9976997e166f2958f91d2c5e15e11b1557ace6d7e5735877c94793ae6e3913de945444aa
Mar 13 00:46:13.636729 unknown[748]: fetched base config from "system"
Mar 13 00:46:13.636761 unknown[748]: fetched user config from "qemu"
Mar 13 00:46:13.637210 ignition[748]: fetch-offline: fetch-offline passed
Mar 13 00:46:13.637270 ignition[748]: Ignition finished successfully
Mar 13 00:46:13.643724 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:46:13.655586 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 13 00:46:13.656730 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:46:13.703999 ignition[844]: Ignition 2.22.0
Mar 13 00:46:13.704100 ignition[844]: Stage: kargs
Mar 13 00:46:13.704219 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:46:13.704230 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:46:13.704970 ignition[844]: kargs: kargs passed
Mar 13 00:46:13.712944 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:46:13.705011 ignition[844]: Ignition finished successfully
Mar 13 00:46:13.722155 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:46:13.770014 ignition[853]: Ignition 2.22.0
Mar 13 00:46:13.770189 ignition[853]: Stage: disks
Mar 13 00:46:13.770370 ignition[853]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:46:13.770387 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:46:13.771529 ignition[853]: disks: disks passed
Mar 13 00:46:13.771572 ignition[853]: Ignition finished successfully
Mar 13 00:46:13.787880 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:46:13.791777 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:46:13.794696 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:46:13.809755 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:46:13.809926 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:46:13.822449 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:46:13.824241 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:46:13.870750 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 13 00:46:13.876618 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:46:13.878132 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:46:14.036154 kernel: EXT4-fs (vda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:46:14.037243 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:46:14.043630 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:46:14.052467 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:46:14.077019 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:46:14.083486 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:46:14.104257 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871)
Mar 13 00:46:14.104280 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:46:14.104291 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:46:14.083567 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:46:14.083596 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:46:14.094488 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:46:14.108213 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:46:14.132493 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:46:14.132511 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:46:14.134196 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:46:14.176137 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:46:14.182533 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:46:14.191736 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:46:14.197094 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:46:14.321990 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:46:14.327115 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:46:14.350732 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:46:14.361345 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:46:14.367783 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:46:14.386632 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:46:14.401693 ignition[986]: INFO : Ignition 2.22.0
Mar 13 00:46:14.401693 ignition[986]: INFO : Stage: mount
Mar 13 00:46:14.406467 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:46:14.406467 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:46:14.406467 ignition[986]: INFO : mount: mount passed
Mar 13 00:46:14.406467 ignition[986]: INFO : Ignition finished successfully
Mar 13 00:46:14.421233 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:46:14.428475 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:46:14.452836 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:46:14.488515 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998)
Mar 13 00:46:14.488542 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:46:14.491756 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:46:14.500163 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:46:14.500184 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:46:14.502597 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:46:14.555475 ignition[1015]: INFO : Ignition 2.22.0
Mar 13 00:46:14.555475 ignition[1015]: INFO : Stage: files
Mar 13 00:46:14.560356 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:46:14.560356 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:46:14.560356 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:46:14.571209 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:46:14.571209 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:46:14.580933 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:46:14.580933 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:46:14.580933 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:46:14.580933 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:46:14.580933 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:46:14.575227 unknown[1015]: wrote ssh authorized keys file for user: core
Mar 13 00:46:14.633365 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:46:14.761569 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:46:14.761569 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 00:46:14.761569 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 13 00:46:14.935463 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 13 00:46:15.056388 systemd-networkd[839]: eth0: Gained IPv6LL
Mar 13 00:46:15.090783 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 00:46:15.090783 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:46:15.103201 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 13 00:46:15.324576 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 13 00:46:16.078511 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:46:16.078511 ignition[1015]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 13 00:46:16.091334 ignition[1015]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:46:16.091334 ignition[1015]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:46:16.091334 ignition[1015]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 13 00:46:16.091334 ignition[1015]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 13 00:46:16.091334 ignition[1015]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 13 00:46:16.091334 ignition[1015]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 13 00:46:16.091334 ignition[1015]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 13 00:46:16.091334 ignition[1015]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 13 00:46:16.143577 ignition[1015]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 13 00:46:16.143577 ignition[1015]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 13 00:46:16.143577 ignition[1015]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 13 00:46:16.143577 ignition[1015]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:46:16.143577 ignition[1015]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:46:16.143577 ignition[1015]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:46:16.143577 ignition[1015]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:46:16.143577 ignition[1015]: INFO : files: files passed
Mar 13 00:46:16.143577 ignition[1015]: INFO : Ignition finished successfully
Mar 13 00:46:16.117901 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:46:16.125542 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:46:16.155759 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:46:16.163456 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:46:16.208252 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 13 00:46:16.163582 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:46:16.216219 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:46:16.221175 initrd-setup-root-after-ignition[1050]: grep:
Mar 13 00:46:16.223644 initrd-setup-root-after-ignition[1050]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:46:16.228326 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:46:16.233881 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:46:16.242240 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:46:16.249740 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:46:16.349531 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:46:16.349827 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:46:16.353305 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:46:16.367607 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:46:16.367892 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:46:16.369336 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:46:16.423954 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:46:16.425779 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:46:16.466285 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:46:16.466539 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:46:16.474288 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:46:16.481816 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:46:16.481952 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:46:16.498468 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:46:16.502753 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:46:16.509173 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:46:16.512196 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:46:16.519777 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:46:16.526614 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:46:16.533818 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:46:16.540884 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:46:16.555269 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:46:16.559237 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:46:16.562747 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:46:16.571610 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:46:16.571795 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:46:16.584147 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:46:16.584362 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:46:16.591360 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:46:16.599342 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:46:16.603380 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:46:16.603512 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:46:16.618416 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 00:46:16.618554 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:46:16.621871 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 00:46:16.628726 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 00:46:16.635210 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:46:16.638973 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 00:46:16.655538 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 00:46:16.655891 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 00:46:16.656023 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:46:16.667488 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 00:46:16.667658 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:46:16.670643 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 00:46:16.670812 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:46:16.676543 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 00:46:16.676636 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 00:46:16.691968 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 00:46:16.708157 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 00:46:16.708303 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 00:46:16.708440 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:46:16.714484 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 00:46:16.714620 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:46:16.737467 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 00:46:16.737718 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 00:46:16.756867 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 00:46:16.770703 ignition[1070]: INFO : Ignition 2.22.0
Mar 13 00:46:16.770703 ignition[1070]: INFO : Stage: umount
Mar 13 00:46:16.777145 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:46:16.777145 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:46:16.777145 ignition[1070]: INFO : umount: umount passed
Mar 13 00:46:16.777145 ignition[1070]: INFO : Ignition finished successfully
Mar 13 00:46:16.783001 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 00:46:16.783255 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 00:46:16.786627 systemd[1]: Stopped target network.target - Network.
Mar 13 00:46:16.799609 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 00:46:16.799751 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 00:46:16.804473 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 00:46:16.804528 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 00:46:16.811190 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 00:46:16.811250 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 00:46:16.818130 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 00:46:16.818190 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 00:46:16.826241 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 00:46:16.835434 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 00:46:16.856289 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 00:46:16.856508 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 00:46:16.867178 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 00:46:16.867382 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 00:46:16.889302 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 00:46:16.889605 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 00:46:16.889860 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 00:46:16.907719 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:46:16.909259 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:46:16.919553 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:46:16.919647 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:46:16.919813 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 00:46:16.919864 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 00:46:16.929193 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:46:16.934202 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:46:16.934256 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:46:16.947986 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:46:16.948107 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:46:16.970144 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:46:16.970252 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:46:16.977523 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 00:46:16.977610 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:46:16.993530 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:46:17.004280 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:46:17.004372 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:46:17.024725 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:46:17.024975 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:46:17.033618 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:46:17.033735 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:46:17.038209 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:46:17.038250 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:46:17.049409 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:46:17.049463 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:46:17.060310 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:46:17.060363 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:46:17.070388 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:46:17.070444 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:46:17.081905 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:46:17.084648 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:46:17.084735 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:46:17.109238 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:46:17.109290 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:46:17.126335 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 13 00:46:17.126383 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:46:17.139568 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 00:46:17.139650 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:46:17.147182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:46:17.147229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:46:17.163610 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 00:46:17.163745 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 13 00:46:17.163794 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 00:46:17.163842 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:46:17.164326 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:46:17.164460 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:46:17.172250 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:46:17.172371 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:46:17.178010 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:46:17.186828 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:46:17.234901 systemd[1]: Switching root.
Mar 13 00:46:17.271901 systemd-journald[203]: Journal stopped
Mar 13 00:46:18.822206 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:46:18.822269 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:46:18.822287 kernel: SELinux: policy capability open_perms=1
Mar 13 00:46:18.822297 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:46:18.822307 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:46:18.822323 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:46:18.822339 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:46:18.822353 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:46:18.822363 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:46:18.822376 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:46:18.822387 kernel: audit: type=1403 audit(1773362777.501:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:46:18.822399 systemd[1]: Successfully loaded SELinux policy in 78.748ms.
Mar 13 00:46:18.822417 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.292ms.
Mar 13 00:46:18.822429 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:46:18.822440 systemd[1]: Detected virtualization kvm.
Mar 13 00:46:18.822451 systemd[1]: Detected architecture x86-64.
Mar 13 00:46:18.822461 systemd[1]: Detected first boot.
Mar 13 00:46:18.822472 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:46:18.822485 zram_generator::config[1118]: No configuration found.
Mar 13 00:46:18.822498 kernel: Guest personality initialized and is inactive
Mar 13 00:46:18.822509 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 00:46:18.822524 kernel: Initialized host personality
Mar 13 00:46:18.822539 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:46:18.822549 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:46:18.822560 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:46:18.822572 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:46:18.822585 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:46:18.822595 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:46:18.822606 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:46:18.822617 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:46:18.822627 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:46:18.822638 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:46:18.822650 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:46:18.822661 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:46:18.822674 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:46:18.822684 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:46:18.822734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:46:18.822746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:46:18.822757 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:46:18.822768 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:46:18.822778 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:46:18.822789 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:46:18.822803 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:46:18.822814 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:46:18.822825 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:46:18.822835 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:46:18.822846 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:46:18.822857 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:46:18.822868 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:46:18.822878 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:46:18.822889 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:46:18.822902 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:46:18.822913 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:46:18.822925 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:46:18.822936 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:46:18.822946 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:46:18.822957 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:46:18.822968 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:46:18.822978 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:46:18.822989 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:46:18.823002 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:46:18.823013 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:46:18.823079 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:46:18.823092 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:46:18.823103 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:46:18.823114 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:46:18.823125 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:46:18.823136 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:46:18.823147 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:46:18.823160 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:46:18.823171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:46:18.823182 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:46:18.823193 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:46:18.823204 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:46:18.823216 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:46:18.823226 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:46:18.823237 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:46:18.823250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:46:18.823261 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:46:18.823271 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:46:18.823282 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:46:18.823292 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:46:18.823304 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:46:18.823315 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:46:18.823326 kernel: fuse: init (API version 7.41)
Mar 13 00:46:18.823336 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:46:18.823349 kernel: ACPI: bus type drm_connector registered
Mar 13 00:46:18.823360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:46:18.823370 kernel: loop: module loaded
Mar 13 00:46:18.823381 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:46:18.823392 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:46:18.823402 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:46:18.823435 systemd-journald[1203]: Collecting audit messages is disabled.
Mar 13 00:46:18.823459 systemd-journald[1203]: Journal started
Mar 13 00:46:18.823482 systemd-journald[1203]: Runtime Journal (/run/log/journal/632a474ca1794d83ab8074b1b9d201e2) is 6M, max 48.3M, 42.2M free.
Mar 13 00:46:18.227439 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:46:18.254819 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 13 00:46:18.255538 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:46:18.829099 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:46:18.838202 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:46:18.838235 systemd[1]: Stopped verity-setup.service.
Mar 13 00:46:18.848109 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:46:18.855129 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:46:18.858775 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:46:18.862384 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:46:18.866268 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:46:18.869656 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:46:18.873466 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:46:18.877361 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:46:18.880964 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:46:18.885567 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:46:18.890514 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:46:18.890864 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:46:18.896880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:46:18.897231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:46:18.903380 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:46:18.903682 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:46:18.907876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:46:18.908200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:46:18.913507 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:46:18.913842 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:46:18.918500 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:46:18.918819 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:46:18.924319 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:46:18.929143 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:46:18.934786 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:46:18.940309 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:46:18.945622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:46:18.964180 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:46:18.971221 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:46:18.983800 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:46:18.988589 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:46:18.988661 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:46:18.993837 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:46:19.015978 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:46:19.021495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:46:19.023400 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:46:19.029921 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:46:19.034647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:46:19.036275 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:46:19.040287 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:46:19.042186 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:46:19.049206 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:46:19.059011 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:46:19.066977 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:46:19.071488 systemd-journald[1203]: Time spent on flushing to /var/log/journal/632a474ca1794d83ab8074b1b9d201e2 is 30.911ms for 983 entries.
Mar 13 00:46:19.071488 systemd-journald[1203]: System Journal (/var/log/journal/632a474ca1794d83ab8074b1b9d201e2) is 8M, max 195.6M, 187.6M free.
Mar 13 00:46:19.114188 systemd-journald[1203]: Received client request to flush runtime journal.
Mar 13 00:46:19.114244 kernel: loop0: detected capacity change from 0 to 110984
Mar 13 00:46:19.077281 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:46:19.099450 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:46:19.105457 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:46:19.114394 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:46:19.119907 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:46:19.131404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:46:19.148382 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Mar 13 00:46:19.148400 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Mar 13 00:46:19.155903 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:46:19.157482 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:46:19.164471 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:46:19.183519 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:46:19.201120 kernel: loop1: detected capacity change from 0 to 219192
Mar 13 00:46:19.225749 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:46:19.233288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:46:19.249114 kernel: loop2: detected capacity change from 0 to 128560
Mar 13 00:46:19.257564 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:46:19.264804 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 13 00:46:19.265212 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 13 00:46:19.270936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:46:19.291160 kernel: loop3: detected capacity change from 0 to 110984
Mar 13 00:46:19.312112 kernel: loop4: detected capacity change from 0 to 219192
Mar 13 00:46:19.333102 kernel: loop5: detected capacity change from 0 to 128560
Mar 13 00:46:19.349308 (sd-merge)[1264]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 13 00:46:19.349958 (sd-merge)[1264]: Merged extensions into '/usr'.
Mar 13 00:46:19.354928 systemd[1]: Reload requested from client PID 1238 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:46:19.354973 systemd[1]: Reloading...
Mar 13 00:46:19.415151 zram_generator::config[1286]: No configuration found.
Mar 13 00:46:19.464825 ldconfig[1233]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:46:19.610561 systemd[1]: Reloading finished in 255 ms.
Mar 13 00:46:19.647868 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:46:19.652637 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:46:19.657501 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:46:19.692408 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:46:19.696662 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:46:19.703694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:46:19.717519 systemd[1]: Reload requested from client PID 1328 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:46:19.717534 systemd[1]: Reloading...
Mar 13 00:46:19.728792 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:46:19.728827 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 13 00:46:19.729571 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 13 00:46:19.729983 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 13 00:46:19.731136 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 13 00:46:19.731445 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Mar 13 00:46:19.731512 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Mar 13 00:46:19.736163 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:46:19.736257 systemd-tmpfiles[1329]: Skipping /boot Mar 13 00:46:19.738318 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Mar 13 00:46:19.747926 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:46:19.748094 systemd-tmpfiles[1329]: Skipping /boot Mar 13 00:46:19.794876 zram_generator::config[1357]: No configuration found. Mar 13 00:46:19.937118 kernel: mousedev: PS/2 mouse device common for all mice Mar 13 00:46:19.951102 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 13 00:46:19.958173 kernel: ACPI: button: Power Button [PWRF] Mar 13 00:46:20.030934 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 13 00:46:20.031290 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 13 00:46:20.057431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 13 00:46:20.061872 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 13 00:46:20.062167 systemd[1]: Reloading finished in 344 ms. 
Mar 13 00:46:20.073241 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:46:20.088370 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:46:20.212137 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:46:20.216133 kernel: kvm_amd: TSC scaling supported Mar 13 00:46:20.216175 kernel: kvm_amd: Nested Virtualization enabled Mar 13 00:46:20.216200 kernel: kvm_amd: Nested Paging enabled Mar 13 00:46:20.216251 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:46:20.218268 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 13 00:46:20.218382 kernel: kvm_amd: PMU virtualization is disabled Mar 13 00:46:20.239580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 13 00:46:20.243673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:46:20.249503 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:46:20.258781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:46:20.266921 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:46:20.270975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:46:20.274282 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 13 00:46:20.278899 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Mar 13 00:46:20.286242 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 13 00:46:20.301339 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 00:46:20.313476 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 00:46:20.320409 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 13 00:46:20.333519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:46:20.338566 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:46:20.340358 augenrules[1479]: No rules Mar 13 00:46:20.343217 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:46:20.344916 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:46:20.355272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:46:20.355958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:46:20.360913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:46:20.361409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:46:20.368165 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:46:20.368383 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:46:20.373535 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 13 00:46:20.380430 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 13 00:46:20.386165 kernel: EDAC MC: Ver: 3.0.0 Mar 13 00:46:20.400365 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Mar 13 00:46:20.404448 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 13 00:46:20.411635 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:46:20.413278 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:46:20.413547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:46:20.414798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:46:20.419181 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:46:20.430210 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:46:20.431845 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:46:20.432210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:46:20.432273 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:46:20.437325 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 13 00:46:20.439864 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 13 00:46:20.440598 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 13 00:46:20.440665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:46:20.443141 systemd[1]: Finished ensure-sysext.service. 
Mar 13 00:46:20.444086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:46:20.444323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:46:20.450785 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 00:46:20.451192 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 00:46:20.458602 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 13 00:46:20.459147 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:46:20.460128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:46:20.461086 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 00:46:20.469981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:46:20.470363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:46:20.475370 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 13 00:46:20.476668 augenrules[1495]: /sbin/augenrules: No change Mar 13 00:46:20.488439 augenrules[1526]: No rules Mar 13 00:46:20.589016 systemd-networkd[1469]: lo: Link UP Mar 13 00:46:20.589116 systemd-networkd[1469]: lo: Gained carrier Mar 13 00:46:20.590850 systemd-networkd[1469]: Enumeration completed Mar 13 00:46:20.591792 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:46:20.591829 systemd-networkd[1469]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:46:20.592664 systemd-networkd[1469]: eth0: Link UP Mar 13 00:46:20.592887 systemd-networkd[1469]: eth0: Gained carrier Mar 13 00:46:20.592937 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 13 00:46:20.610100 systemd-resolved[1473]: Positive Trust Anchors: Mar 13 00:46:20.610139 systemd-resolved[1473]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:46:20.610164 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 00:46:20.614127 systemd-resolved[1473]: Defaulting to hostname 'linux'. Mar 13 00:46:20.616107 systemd-networkd[1469]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 13 00:46:20.676775 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 13 00:46:20.682859 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:46:20.686950 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:46:20.692571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:46:20.697783 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:46:20.698176 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:46:20.704903 systemd[1]: Reached target network.target - Network. Mar 13 00:46:20.708156 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:46:20.723750 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 13 00:46:20.729385 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 13 00:46:20.733585 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 00:46:20.746911 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 13 00:46:20.793945 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 13 00:46:20.795175 systemd-timesyncd[1513]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 13 00:46:20.795249 systemd-timesyncd[1513]: Initial clock synchronization to Fri 2026-03-13 00:46:20.631171 UTC. Mar 13 00:46:20.799171 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:46:20.803888 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 13 00:46:20.808110 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 13 00:46:20.812302 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 13 00:46:20.816353 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 13 00:46:20.820599 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 13 00:46:20.820682 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:46:20.823750 systemd[1]: Reached target time-set.target - System Time Set. Mar 13 00:46:20.827426 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 13 00:46:20.831268 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 13 00:46:20.835533 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:46:20.840244 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Mar 13 00:46:20.846387 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 13 00:46:20.853173 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 13 00:46:20.857497 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 13 00:46:20.861791 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 13 00:46:20.877623 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 13 00:46:20.881573 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 13 00:46:20.886551 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 13 00:46:20.891443 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:46:20.894838 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:46:20.898184 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:46:20.898247 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:46:20.899480 systemd[1]: Starting containerd.service - containerd container runtime... Mar 13 00:46:20.904919 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 13 00:46:20.910408 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 13 00:46:20.922121 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 13 00:46:20.926613 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 13 00:46:20.930156 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 13 00:46:20.931346 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Mar 13 00:46:20.934765 jq[1552]: false Mar 13 00:46:20.936778 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 13 00:46:20.942666 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 13 00:46:20.947687 extend-filesystems[1553]: Found /dev/vda6 Mar 13 00:46:20.953982 extend-filesystems[1553]: Found /dev/vda9 Mar 13 00:46:20.951234 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 13 00:46:20.950952 oslogin_cache_refresh[1554]: Refreshing passwd entry cache Mar 13 00:46:20.960302 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing passwd entry cache Mar 13 00:46:20.960434 extend-filesystems[1553]: Checking size of /dev/vda9 Mar 13 00:46:20.955245 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 13 00:46:20.965242 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 13 00:46:20.970510 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 13 00:46:20.971089 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 13 00:46:20.971668 systemd[1]: Starting update-engine.service - Update Engine... Mar 13 00:46:20.973804 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting users, quitting Mar 13 00:46:20.973804 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:46:20.973774 oslogin_cache_refresh[1554]: Failure getting users, quitting Mar 13 00:46:20.974102 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing group entry cache Mar 13 00:46:20.973792 oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Mar 13 00:46:20.973834 oslogin_cache_refresh[1554]: Refreshing group entry cache Mar 13 00:46:20.976289 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 13 00:46:20.983325 extend-filesystems[1553]: Resized partition /dev/vda9 Mar 13 00:46:20.983458 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 13 00:46:20.992401 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 00:46:20.992770 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 00:46:20.993324 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 00:46:20.993694 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 13 00:46:20.996331 oslogin_cache_refresh[1554]: Failure getting groups, quitting Mar 13 00:46:20.997171 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting groups, quitting Mar 13 00:46:20.997171 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:46:20.996342 oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:46:21.000611 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 00:46:21.000994 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 13 00:46:21.006887 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 13 00:46:21.007191 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Mar 13 00:46:21.021706 extend-filesystems[1580]: resize2fs 1.47.3 (8-Jul-2025) Mar 13 00:46:21.026263 jq[1573]: true Mar 13 00:46:21.026613 (ntainerd)[1581]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 00:46:21.032720 update_engine[1572]: I20260313 00:46:21.032595 1572 main.cc:92] Flatcar Update Engine starting Mar 13 00:46:21.039121 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 13 00:46:21.051318 tar[1579]: linux-amd64/LICENSE Mar 13 00:46:21.054837 jq[1592]: true Mar 13 00:46:21.075334 update_engine[1572]: I20260313 00:46:21.072278 1572 update_check_scheduler.cc:74] Next update check in 11m38s Mar 13 00:46:21.067992 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 13 00:46:21.067813 dbus-daemon[1550]: [system] SELinux support is enabled Mar 13 00:46:21.075722 tar[1579]: linux-amd64/helm Mar 13 00:46:21.075692 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (Power Button) Mar 13 00:46:21.075713 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 13 00:46:21.077832 systemd-logind[1570]: New seat seat0. Mar 13 00:46:21.080423 systemd[1]: Started update-engine.service - Update Engine. Mar 13 00:46:21.085388 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 00:46:21.085454 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 13 00:46:21.090277 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 13 00:46:21.090340 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 13 00:46:21.094086 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 13 00:46:21.099405 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 13 00:46:21.104091 systemd[1]: Started systemd-logind.service - User Login Management. Mar 13 00:46:21.110778 extend-filesystems[1580]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 13 00:46:21.110778 extend-filesystems[1580]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 13 00:46:21.110778 extend-filesystems[1580]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 13 00:46:21.132798 extend-filesystems[1553]: Resized filesystem in /dev/vda9 Mar 13 00:46:21.134274 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 00:46:21.134537 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 00:46:21.162059 bash[1613]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:46:21.164586 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 13 00:46:21.169707 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 13 00:46:21.172619 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 00:46:21.240171 containerd[1581]: time="2026-03-13T00:46:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 00:46:21.243117 containerd[1581]: time="2026-03-13T00:46:21.242164098Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 00:46:21.255531 containerd[1581]: time="2026-03-13T00:46:21.255435898Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.215µs" Mar 13 00:46:21.255531 containerd[1581]: time="2026-03-13T00:46:21.255499029Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 00:46:21.255531 containerd[1581]: time="2026-03-13T00:46:21.255520510Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 00:46:21.257874 containerd[1581]: time="2026-03-13T00:46:21.256123338Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 00:46:21.257874 containerd[1581]: time="2026-03-13T00:46:21.256201532Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 00:46:21.257874 containerd[1581]: time="2026-03-13T00:46:21.256231248Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:46:21.257874 containerd[1581]: time="2026-03-13T00:46:21.256349383Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:46:21.257874 containerd[1581]: time="2026-03-13T00:46:21.256414487Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:46:21.257874 containerd[1581]: time="2026-03-13T00:46:21.257198668Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:46:21.257874 containerd[1581]: time="2026-03-13T00:46:21.257286990Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:46:21.257874 containerd[1581]: time="2026-03-13T00:46:21.257323918Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:46:21.257874 containerd[1581]: time="2026-03-13T00:46:21.257371808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 00:46:21.257874 containerd[1581]: time="2026-03-13T00:46:21.257589119Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 00:46:21.258231 containerd[1581]: time="2026-03-13T00:46:21.258128267Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:46:21.258231 containerd[1581]: time="2026-03-13T00:46:21.258161211Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:46:21.258231 containerd[1581]: time="2026-03-13T00:46:21.258170534Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 00:46:21.258231 containerd[1581]: time="2026-03-13T00:46:21.258194470Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 00:46:21.258569 containerd[1581]: time="2026-03-13T00:46:21.258515039Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 00:46:21.258602 containerd[1581]: time="2026-03-13T00:46:21.258582939Z" level=info msg="metadata content store policy set" policy=shared Mar 13 00:46:21.265083 containerd[1581]: time="2026-03-13T00:46:21.264967827Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 00:46:21.265211 containerd[1581]: time="2026-03-13T00:46:21.265124461Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 00:46:21.265451 containerd[1581]: time="2026-03-13T00:46:21.265393096Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 00:46:21.265451 containerd[1581]: time="2026-03-13T00:46:21.265434804Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 00:46:21.265451 containerd[1581]: time="2026-03-13T00:46:21.265446571Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 00:46:21.265523 containerd[1581]: time="2026-03-13T00:46:21.265455864Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 00:46:21.265523 containerd[1581]: time="2026-03-13T00:46:21.265472233Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 00:46:21.265523 containerd[1581]: time="2026-03-13T00:46:21.265487503Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 00:46:21.265523 containerd[1581]: time="2026-03-13T00:46:21.265496747Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 00:46:21.265523 containerd[1581]: time="2026-03-13T00:46:21.265506061Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 00:46:21.265523 containerd[1581]: time="2026-03-13T00:46:21.265514490Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 00:46:21.265523 containerd[1581]: time="2026-03-13T00:46:21.265525286Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 00:46:21.265670 containerd[1581]: time="2026-03-13T00:46:21.265625904Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 00:46:21.265693 containerd[1581]: time="2026-03-13T00:46:21.265679849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 00:46:21.265726 containerd[1581]: time="2026-03-13T00:46:21.265693019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 00:46:21.265726 containerd[1581]: time="2026-03-13T00:46:21.265709584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 00:46:21.265786 containerd[1581]: time="2026-03-13T00:46:21.265731880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 00:46:21.265786 containerd[1581]: time="2026-03-13T00:46:21.265742283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 00:46:21.265786 containerd[1581]: time="2026-03-13T00:46:21.265751763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 00:46:21.265786 containerd[1581]: time="2026-03-13T00:46:21.265760085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 
00:46:21.265786 containerd[1581]: time="2026-03-13T00:46:21.265769721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 13 00:46:21.265786 containerd[1581]: time="2026-03-13T00:46:21.265779093Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 13 00:46:21.265786 containerd[1581]: time="2026-03-13T00:46:21.265787180Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 13 00:46:21.265902 containerd[1581]: time="2026-03-13T00:46:21.265825375Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 13 00:46:21.265902 containerd[1581]: time="2026-03-13T00:46:21.265836542Z" level=info msg="Start snapshots syncer"
Mar 13 00:46:21.266129 containerd[1581]: time="2026-03-13T00:46:21.265906248Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 13 00:46:21.266483 containerd[1581]: time="2026-03-13T00:46:21.266358172Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 13 00:46:21.266483 containerd[1581]: time="2026-03-13T00:46:21.266444197Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 13 00:46:21.267759 containerd[1581]: time="2026-03-13T00:46:21.267658547Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 13 00:46:21.267843 containerd[1581]: time="2026-03-13T00:46:21.267788654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 13 00:46:21.267843 containerd[1581]: time="2026-03-13T00:46:21.267812629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 13 00:46:21.267843 containerd[1581]: time="2026-03-13T00:46:21.267823984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 13 00:46:21.267843 containerd[1581]: time="2026-03-13T00:46:21.267834171Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 13 00:46:21.267916 containerd[1581]: time="2026-03-13T00:46:21.267853669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 13 00:46:21.267916 containerd[1581]: time="2026-03-13T00:46:21.267863679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 13 00:46:21.267916 containerd[1581]: time="2026-03-13T00:46:21.267873375Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 13 00:46:21.267916 containerd[1581]: time="2026-03-13T00:46:21.267893277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 13 00:46:21.267916 containerd[1581]: time="2026-03-13T00:46:21.267903121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 13 00:46:21.267916 containerd[1581]: time="2026-03-13T00:46:21.267912669Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 13 00:46:21.268006 containerd[1581]: time="2026-03-13T00:46:21.267938901Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:46:21.268006 containerd[1581]: time="2026-03-13T00:46:21.267951570Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:46:21.268006 containerd[1581]: time="2026-03-13T00:46:21.267959568Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:46:21.268006 containerd[1581]: time="2026-03-13T00:46:21.267968086Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:46:21.268006 containerd[1581]: time="2026-03-13T00:46:21.267974661Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 13 00:46:21.268006 containerd[1581]: time="2026-03-13T00:46:21.267986938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 13 00:46:21.268203 containerd[1581]: time="2026-03-13T00:46:21.268001726Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 13 00:46:21.268203 containerd[1581]: time="2026-03-13T00:46:21.268091422Z" level=info msg="runtime interface created"
Mar 13 00:46:21.268203 containerd[1581]: time="2026-03-13T00:46:21.268097458Z" level=info msg="created NRI interface"
Mar 13 00:46:21.268203 containerd[1581]: time="2026-03-13T00:46:21.268104760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 13 00:46:21.268203 containerd[1581]: time="2026-03-13T00:46:21.268167900Z" level=info msg="Connect containerd service"
Mar 13 00:46:21.268203 containerd[1581]: time="2026-03-13T00:46:21.268185407Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 13 00:46:21.269506 containerd[1581]: time="2026-03-13T00:46:21.269469645Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 13 00:46:21.308587 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 13 00:46:21.335893 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 13 00:46:21.343488 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 13 00:46:21.346984 containerd[1581]: time="2026-03-13T00:46:21.346707378Z" level=info msg="Start subscribing containerd event"
Mar 13 00:46:21.346984 containerd[1581]: time="2026-03-13T00:46:21.346751667Z" level=info msg="Start recovering state"
Mar 13 00:46:21.346984 containerd[1581]: time="2026-03-13T00:46:21.346839252Z" level=info msg="Start event monitor"
Mar 13 00:46:21.346984 containerd[1581]: time="2026-03-13T00:46:21.346851274Z" level=info msg="Start cni network conf syncer for default"
Mar 13 00:46:21.346984 containerd[1581]: time="2026-03-13T00:46:21.346857957Z" level=info msg="Start streaming server"
Mar 13 00:46:21.346984 containerd[1581]: time="2026-03-13T00:46:21.346873904Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 13 00:46:21.346984 containerd[1581]: time="2026-03-13T00:46:21.346880018Z" level=info msg="runtime interface starting up..."
Mar 13 00:46:21.346984 containerd[1581]: time="2026-03-13T00:46:21.346885357Z" level=info msg="starting plugins..."
Mar 13 00:46:21.346984 containerd[1581]: time="2026-03-13T00:46:21.346897614Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 13 00:46:21.347776 containerd[1581]: time="2026-03-13T00:46:21.347759980Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 13 00:46:21.347995 containerd[1581]: time="2026-03-13T00:46:21.347978606Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 13 00:46:21.350216 systemd[1]: Started containerd.service - containerd container runtime.
Mar 13 00:46:21.350457 containerd[1581]: time="2026-03-13T00:46:21.350439297Z" level=info msg="containerd successfully booted in 0.110899s"
Mar 13 00:46:21.365348 systemd[1]: issuegen.service: Deactivated successfully.
Mar 13 00:46:21.365656 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 13 00:46:21.372824 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 13 00:46:21.402828 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 13 00:46:21.409619 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 13 00:46:21.416343 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 13 00:46:21.420256 systemd[1]: Reached target getty.target - Login Prompts.
Mar 13 00:46:21.468107 tar[1579]: linux-amd64/README.md
Mar 13 00:46:21.498682 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 13 00:46:21.776564 systemd-networkd[1469]: eth0: Gained IPv6LL
Mar 13 00:46:21.778924 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 13 00:46:21.784632 systemd[1]: Reached target network-online.target - Network is Online.
Mar 13 00:46:21.790180 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 13 00:46:21.795630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:46:21.810516 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 13 00:46:21.841877 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 13 00:46:21.842296 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 13 00:46:21.846713 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 13 00:46:21.855685 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 13 00:46:22.617911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:46:22.622807 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 13 00:46:22.627768 systemd[1]: Startup finished in 3.269s (kernel) + 7.795s (initrd) + 5.204s (userspace) = 16.269s.
Mar 13 00:46:22.642802 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:46:23.124422 kubelet[1683]: E0313 00:46:23.124291 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:46:23.128680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:46:23.128891 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:46:23.129444 systemd[1]: kubelet.service: Consumed 925ms CPU time, 258.1M memory peak.
Mar 13 00:46:23.668436 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 13 00:46:23.669764 systemd[1]: Started sshd@0-10.0.0.109:22-10.0.0.1:50556.service - OpenSSH per-connection server daemon (10.0.0.1:50556).
Mar 13 00:46:23.744107 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 50556 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:46:23.746527 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:46:23.754150 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 13 00:46:23.755326 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 13 00:46:23.785864 systemd-logind[1570]: New session 1 of user core.
Mar 13 00:46:23.798906 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 13 00:46:23.802355 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 13 00:46:23.825941 (systemd)[1702]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 13 00:46:23.829517 systemd-logind[1570]: New session c1 of user core.
Mar 13 00:46:23.967987 systemd[1702]: Queued start job for default target default.target.
Mar 13 00:46:23.979370 systemd[1702]: Created slice app.slice - User Application Slice.
Mar 13 00:46:23.979421 systemd[1702]: Reached target paths.target - Paths.
Mar 13 00:46:23.979486 systemd[1702]: Reached target timers.target - Timers.
Mar 13 00:46:23.981145 systemd[1702]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 13 00:46:23.993759 systemd[1702]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 13 00:46:23.993941 systemd[1702]: Reached target sockets.target - Sockets.
Mar 13 00:46:23.994076 systemd[1702]: Reached target basic.target - Basic System.
Mar 13 00:46:23.994127 systemd[1702]: Reached target default.target - Main User Target.
Mar 13 00:46:23.994165 systemd[1702]: Startup finished in 156ms.
Mar 13 00:46:23.994286 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 13 00:46:23.995885 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 13 00:46:24.014450 systemd[1]: Started sshd@1-10.0.0.109:22-10.0.0.1:50568.service - OpenSSH per-connection server daemon (10.0.0.1:50568).
Mar 13 00:46:24.080246 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 50568 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:46:24.081692 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:46:24.086934 systemd-logind[1570]: New session 2 of user core.
Mar 13 00:46:24.097224 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 13 00:46:24.111533 sshd[1716]: Connection closed by 10.0.0.1 port 50568
Mar 13 00:46:24.111890 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
Mar 13 00:46:24.120379 systemd[1]: sshd@1-10.0.0.109:22-10.0.0.1:50568.service: Deactivated successfully.
Mar 13 00:46:24.122368 systemd[1]: session-2.scope: Deactivated successfully.
Mar 13 00:46:24.123346 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit.
Mar 13 00:46:24.125874 systemd[1]: Started sshd@2-10.0.0.109:22-10.0.0.1:50584.service - OpenSSH per-connection server daemon (10.0.0.1:50584).
Mar 13 00:46:24.127148 systemd-logind[1570]: Removed session 2.
Mar 13 00:46:24.188416 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 50584 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:46:24.189804 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:46:24.196186 systemd-logind[1570]: New session 3 of user core.
Mar 13 00:46:24.211234 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 13 00:46:24.220896 sshd[1726]: Connection closed by 10.0.0.1 port 50584
Mar 13 00:46:24.221287 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Mar 13 00:46:24.231692 systemd[1]: sshd@2-10.0.0.109:22-10.0.0.1:50584.service: Deactivated successfully.
Mar 13 00:46:24.234271 systemd[1]: session-3.scope: Deactivated successfully.
Mar 13 00:46:24.235506 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit.
Mar 13 00:46:24.239460 systemd[1]: Started sshd@3-10.0.0.109:22-10.0.0.1:50590.service - OpenSSH per-connection server daemon (10.0.0.1:50590).
Mar 13 00:46:24.240132 systemd-logind[1570]: Removed session 3.
Mar 13 00:46:24.305650 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 50590 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:46:24.307231 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:46:24.313921 systemd-logind[1570]: New session 4 of user core.
Mar 13 00:46:24.323281 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 13 00:46:24.342730 sshd[1737]: Connection closed by 10.0.0.1 port 50590
Mar 13 00:46:24.343310 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Mar 13 00:46:24.362820 systemd[1]: sshd@3-10.0.0.109:22-10.0.0.1:50590.service: Deactivated successfully.
Mar 13 00:46:24.364790 systemd[1]: session-4.scope: Deactivated successfully.
Mar 13 00:46:24.365903 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit.
Mar 13 00:46:24.368627 systemd[1]: Started sshd@4-10.0.0.109:22-10.0.0.1:50594.service - OpenSSH per-connection server daemon (10.0.0.1:50594).
Mar 13 00:46:24.369836 systemd-logind[1570]: Removed session 4.
Mar 13 00:46:24.441240 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 50594 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:46:24.442713 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:46:24.448612 systemd-logind[1570]: New session 5 of user core.
Mar 13 00:46:24.466248 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 13 00:46:24.487414 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 13 00:46:24.487800 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:46:24.508692 sudo[1747]: pam_unix(sudo:session): session closed for user root
Mar 13 00:46:24.510354 sshd[1746]: Connection closed by 10.0.0.1 port 50594
Mar 13 00:46:24.511133 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Mar 13 00:46:24.520682 systemd[1]: sshd@4-10.0.0.109:22-10.0.0.1:50594.service: Deactivated successfully.
Mar 13 00:46:24.522755 systemd[1]: session-5.scope: Deactivated successfully.
Mar 13 00:46:24.523752 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit.
Mar 13 00:46:24.526667 systemd[1]: Started sshd@5-10.0.0.109:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606).
Mar 13 00:46:24.527853 systemd-logind[1570]: Removed session 5.
Mar 13 00:46:24.592979 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:46:24.594577 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:46:24.601604 systemd-logind[1570]: New session 6 of user core.
Mar 13 00:46:24.611344 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 13 00:46:24.627722 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 13 00:46:24.628264 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:46:24.637246 sudo[1758]: pam_unix(sudo:session): session closed for user root
Mar 13 00:46:24.644489 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 13 00:46:24.645116 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:46:24.660192 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:46:24.725425 augenrules[1780]: No rules
Mar 13 00:46:24.727485 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:46:24.728006 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:46:24.729428 sudo[1757]: pam_unix(sudo:session): session closed for user root
Mar 13 00:46:24.731719 sshd[1756]: Connection closed by 10.0.0.1 port 50606
Mar 13 00:46:24.733335 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Mar 13 00:46:24.742692 systemd[1]: sshd@5-10.0.0.109:22-10.0.0.1:50606.service: Deactivated successfully.
Mar 13 00:46:24.745147 systemd[1]: session-6.scope: Deactivated successfully.
Mar 13 00:46:24.746252 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit.
Mar 13 00:46:24.749596 systemd[1]: Started sshd@6-10.0.0.109:22-10.0.0.1:50620.service - OpenSSH per-connection server daemon (10.0.0.1:50620).
Mar 13 00:46:24.751186 systemd-logind[1570]: Removed session 6.
Mar 13 00:46:24.809318 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 50620 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:46:24.810930 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:46:24.817376 systemd-logind[1570]: New session 7 of user core.
Mar 13 00:46:24.829322 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 13 00:46:24.845147 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 13 00:46:24.845489 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:46:25.670146 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 13 00:46:25.703659 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 13 00:46:26.965530 kernel: hrtimer: interrupt took 2171599 ns
Mar 13 00:46:27.137486 dockerd[1814]: time="2026-03-13T00:46:27.137169501Z" level=info msg="Starting up"
Mar 13 00:46:27.142205 dockerd[1814]: time="2026-03-13T00:46:27.142113848Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 13 00:46:27.218681 dockerd[1814]: time="2026-03-13T00:46:27.218474069Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 13 00:46:27.390982 dockerd[1814]: time="2026-03-13T00:46:27.390828984Z" level=info msg="Loading containers: start."
Mar 13 00:46:27.409156 kernel: Initializing XFRM netlink socket
Mar 13 00:46:27.855264 systemd-networkd[1469]: docker0: Link UP
Mar 13 00:46:27.862321 dockerd[1814]: time="2026-03-13T00:46:27.862217501Z" level=info msg="Loading containers: done."
Mar 13 00:46:28.090424 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1662469301-merged.mount: Deactivated successfully.
Mar 13 00:46:28.096608 dockerd[1814]: time="2026-03-13T00:46:28.096517985Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 13 00:46:28.097180 dockerd[1814]: time="2026-03-13T00:46:28.097127133Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 13 00:46:28.097453 dockerd[1814]: time="2026-03-13T00:46:28.097397948Z" level=info msg="Initializing buildkit"
Mar 13 00:46:28.161530 dockerd[1814]: time="2026-03-13T00:46:28.161402065Z" level=info msg="Completed buildkit initialization"
Mar 13 00:46:28.172400 dockerd[1814]: time="2026-03-13T00:46:28.172298194Z" level=info msg="Daemon has completed initialization"
Mar 13 00:46:28.172756 dockerd[1814]: time="2026-03-13T00:46:28.172530169Z" level=info msg="API listen on /run/docker.sock"
Mar 13 00:46:28.172747 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 13 00:46:29.366369 containerd[1581]: time="2026-03-13T00:46:29.366211498Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 13 00:46:30.020707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856970632.mount: Deactivated successfully.
Mar 13 00:46:32.102564 containerd[1581]: time="2026-03-13T00:46:32.102326903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:32.103312 containerd[1581]: time="2026-03-13T00:46:32.103201917Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 13 00:46:32.105279 containerd[1581]: time="2026-03-13T00:46:32.105206294Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:32.110363 containerd[1581]: time="2026-03-13T00:46:32.110225929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:32.112124 containerd[1581]: time="2026-03-13T00:46:32.111900328Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.745613036s"
Mar 13 00:46:32.112124 containerd[1581]: time="2026-03-13T00:46:32.111986301Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 13 00:46:32.118858 containerd[1581]: time="2026-03-13T00:46:32.118530777Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 13 00:46:33.330336 containerd[1581]: time="2026-03-13T00:46:33.330213037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:33.331678 containerd[1581]: time="2026-03-13T00:46:33.331638325Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 13 00:46:33.333320 containerd[1581]: time="2026-03-13T00:46:33.333165611Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:33.337663 containerd[1581]: time="2026-03-13T00:46:33.337577125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:33.338743 containerd[1581]: time="2026-03-13T00:46:33.338668408Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.220072221s"
Mar 13 00:46:33.338743 containerd[1581]: time="2026-03-13T00:46:33.338716469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 13 00:46:33.339978 containerd[1581]: time="2026-03-13T00:46:33.339908897Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 13 00:46:33.379349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:46:33.381247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:46:33.584754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:46:33.589655 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:46:33.654243 kubelet[2104]: E0313 00:46:33.654192 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:46:33.659709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:46:33.659926 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:46:33.660439 systemd[1]: kubelet.service: Consumed 248ms CPU time, 109.5M memory peak.
Mar 13 00:46:34.412262 containerd[1581]: time="2026-03-13T00:46:34.412179457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:34.413090 containerd[1581]: time="2026-03-13T00:46:34.413012633Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 13 00:46:34.414810 containerd[1581]: time="2026-03-13T00:46:34.414737777Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:34.420100 containerd[1581]: time="2026-03-13T00:46:34.419960356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:34.421464 containerd[1581]: time="2026-03-13T00:46:34.421400489Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.081464271s"
Mar 13 00:46:34.421464 containerd[1581]: time="2026-03-13T00:46:34.421453655Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 13 00:46:34.422381 containerd[1581]: time="2026-03-13T00:46:34.422321529Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 13 00:46:36.927133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078856215.mount: Deactivated successfully.
Mar 13 00:46:37.306202 containerd[1581]: time="2026-03-13T00:46:37.305972040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:37.306927 containerd[1581]: time="2026-03-13T00:46:37.306810394Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 13 00:46:37.308242 containerd[1581]: time="2026-03-13T00:46:37.308140049Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:37.311220 containerd[1581]: time="2026-03-13T00:46:37.311133419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:37.311713 containerd[1581]: time="2026-03-13T00:46:37.311650091Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 2.889237954s"
Mar 13 00:46:37.311713 containerd[1581]: time="2026-03-13T00:46:37.311699044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 13 00:46:37.319546 containerd[1581]: time="2026-03-13T00:46:37.319340923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 13 00:46:37.776511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3951771307.mount: Deactivated successfully.
Mar 13 00:46:38.930980 containerd[1581]: time="2026-03-13T00:46:38.930882184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:38.931944 containerd[1581]: time="2026-03-13T00:46:38.931883259Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 13 00:46:38.933474 containerd[1581]: time="2026-03-13T00:46:38.933356392Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:38.936512 containerd[1581]: time="2026-03-13T00:46:38.936372135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:38.937412 containerd[1581]: time="2026-03-13T00:46:38.937290815Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.617922324s"
Mar 13 00:46:38.937412 containerd[1581]: time="2026-03-13T00:46:38.937346370Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 13 00:46:38.938547 containerd[1581]: time="2026-03-13T00:46:38.938507093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 13 00:46:39.335926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4216815093.mount: Deactivated successfully.
Mar 13 00:46:39.344156 containerd[1581]: time="2026-03-13T00:46:39.343364194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:39.344499 containerd[1581]: time="2026-03-13T00:46:39.344404776Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 13 00:46:39.346007 containerd[1581]: time="2026-03-13T00:46:39.345955681Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:39.348482 containerd[1581]: time="2026-03-13T00:46:39.348402344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:39.349169 containerd[1581]: time="2026-03-13T00:46:39.349109080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 410.552808ms"
Mar 13 00:46:39.349336 containerd[1581]: time="2026-03-13T00:46:39.349135571Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 13 00:46:39.350122 containerd[1581]: time="2026-03-13T00:46:39.349934003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 13 00:46:39.872568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568965457.mount: Deactivated successfully.
Mar 13 00:46:40.682890 containerd[1581]: time="2026-03-13T00:46:40.682798993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:40.683858 containerd[1581]: time="2026-03-13T00:46:40.683786478Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 13 00:46:40.685219 containerd[1581]: time="2026-03-13T00:46:40.685167379Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:40.690080 containerd[1581]: time="2026-03-13T00:46:40.689957146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:46:40.691681 containerd[1581]: time="2026-03-13T00:46:40.691536436Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.341572879s"
Mar 13 00:46:40.691681 containerd[1581]: time="2026-03-13T00:46:40.691603701Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 13 00:46:43.910742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 13 00:46:43.912764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:46:44.110499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:46:44.127453 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:46:44.179628 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:46:44.183209 systemd[1]: kubelet.service: Deactivated successfully.
Mar 13 00:46:44.183534 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:46:44.183899 systemd[1]: kubelet.service: Consumed 213ms CPU time, 108.3M memory peak.
Mar 13 00:46:44.186929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:46:44.214092 systemd[1]: Reload requested from client PID 2288 ('systemctl') (unit session-7.scope)...
Mar 13 00:46:44.214128 systemd[1]: Reloading...
Mar 13 00:46:44.302158 zram_generator::config[2330]: No configuration found.
Mar 13 00:46:44.518568 systemd[1]: Reloading finished in 304 ms.
Mar 13 00:46:44.595955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:46:44.599404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:46:44.601007 systemd[1]: kubelet.service: Deactivated successfully.
Mar 13 00:46:44.601500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:46:44.601568 systemd[1]: kubelet.service: Consumed 152ms CPU time, 98.3M memory peak.
Mar 13 00:46:44.603426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:46:44.793457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:46:44.807529 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:46:44.879580 kubelet[2380]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:46:44.879580 kubelet[2380]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:46:44.879941 kubelet[2380]: I0313 00:46:44.879594 2380 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:46:45.923527 kubelet[2380]: I0313 00:46:45.923459 2380 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:46:45.923527 kubelet[2380]: I0313 00:46:45.923513 2380 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:46:45.923906 kubelet[2380]: I0313 00:46:45.923546 2380 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:46:45.923906 kubelet[2380]: I0313 00:46:45.923555 2380 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 00:46:45.923906 kubelet[2380]: I0313 00:46:45.923750 2380 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:46:45.931811 kubelet[2380]: I0313 00:46:45.930583 2380 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:46:45.931933 kubelet[2380]: E0313 00:46:45.931846 2380 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:46:45.936508 kubelet[2380]: I0313 00:46:45.936484 2380 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:46:45.942424 kubelet[2380]: I0313 00:46:45.942299 2380 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 00:46:45.946291 kubelet[2380]: I0313 00:46:45.946139 2380 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:46:45.946576 kubelet[2380]: I0313 00:46:45.946248 2380 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:46:45.946576 kubelet[2380]: I0313 00:46:45.946553 2380 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:46:45.946576 
kubelet[2380]: I0313 00:46:45.946563 2380 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:46:45.946860 kubelet[2380]: I0313 00:46:45.946707 2380 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:46:45.949058 kubelet[2380]: I0313 00:46:45.949015 2380 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:46:45.949486 kubelet[2380]: I0313 00:46:45.949428 2380 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:46:45.949486 kubelet[2380]: I0313 00:46:45.949469 2380 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:46:45.949486 kubelet[2380]: I0313 00:46:45.949490 2380 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:46:45.949617 kubelet[2380]: I0313 00:46:45.949505 2380 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:46:45.950310 kubelet[2380]: E0313 00:46:45.950152 2380 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:46:45.950310 kubelet[2380]: E0313 00:46:45.950157 2380 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:46:45.952120 kubelet[2380]: I0313 00:46:45.952018 2380 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:46:45.952672 kubelet[2380]: I0313 00:46:45.952630 2380 kubelet.go:940] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:46:45.952802 kubelet[2380]: I0313 00:46:45.952710 2380 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:46:45.952802 kubelet[2380]: W0313 00:46:45.952752 2380 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 13 00:46:45.956775 kubelet[2380]: I0313 00:46:45.956721 2380 server.go:1262] "Started kubelet" Mar 13 00:46:45.957983 kubelet[2380]: I0313 00:46:45.957919 2380 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:46:45.959107 kubelet[2380]: I0313 00:46:45.958797 2380 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:46:45.959107 kubelet[2380]: I0313 00:46:45.958946 2380 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:46:45.959295 kubelet[2380]: I0313 00:46:45.959256 2380 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:46:45.959375 kubelet[2380]: I0313 00:46:45.959336 2380 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:46:45.962346 kubelet[2380]: I0313 00:46:45.962331 2380 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:46:45.963293 kubelet[2380]: I0313 00:46:45.963278 2380 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:46:45.963872 kubelet[2380]: E0313 00:46:45.962836 2380 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c40138aefeb95 default 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-13 00:46:45.956668309 +0000 UTC m=+1.143998786,LastTimestamp:2026-03-13 00:46:45.956668309 +0000 UTC m=+1.143998786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 13 00:46:45.963981 kubelet[2380]: E0313 00:46:45.963884 2380 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:46:45.963981 kubelet[2380]: I0313 00:46:45.963902 2380 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:46:45.964122 kubelet[2380]: I0313 00:46:45.964101 2380 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:46:45.964155 kubelet[2380]: I0313 00:46:45.964148 2380 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:46:45.964511 kubelet[2380]: E0313 00:46:45.964445 2380 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:46:45.964777 kubelet[2380]: E0313 00:46:45.964649 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="200ms" Mar 13 00:46:45.967133 kubelet[2380]: I0313 00:46:45.967118 2380 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:46:45.967499 
kubelet[2380]: I0313 00:46:45.967373 2380 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:46:45.967499 kubelet[2380]: E0313 00:46:45.967294 2380 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:46:45.967499 kubelet[2380]: I0313 00:46:45.967433 2380 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:46:45.987778 kubelet[2380]: I0313 00:46:45.987729 2380 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:46:45.987778 kubelet[2380]: I0313 00:46:45.987771 2380 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:46:45.987865 kubelet[2380]: I0313 00:46:45.987787 2380 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:46:45.990551 kubelet[2380]: I0313 00:46:45.990298 2380 policy_none.go:49] "None policy: Start" Mar 13 00:46:45.990551 kubelet[2380]: I0313 00:46:45.990317 2380 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:46:45.990551 kubelet[2380]: I0313 00:46:45.990328 2380 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:46:45.992110 kubelet[2380]: I0313 00:46:45.992096 2380 policy_none.go:47] "Start" Mar 13 00:46:45.998387 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:46:46.000503 kubelet[2380]: I0313 00:46:46.000453 2380 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 00:46:46.002648 kubelet[2380]: I0313 00:46:46.002601 2380 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 13 00:46:46.002648 kubelet[2380]: I0313 00:46:46.002641 2380 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:46:46.002719 kubelet[2380]: I0313 00:46:46.002661 2380 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:46:46.002719 kubelet[2380]: E0313 00:46:46.002700 2380 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:46:46.004770 kubelet[2380]: E0313 00:46:46.004736 2380 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:46:46.010856 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:46:46.015683 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 13 00:46:46.027367 kubelet[2380]: E0313 00:46:46.027189 2380 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:46:46.027694 kubelet[2380]: I0313 00:46:46.027462 2380 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:46:46.027694 kubelet[2380]: I0313 00:46:46.027508 2380 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:46:46.027990 kubelet[2380]: I0313 00:46:46.027898 2380 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:46:46.029805 kubelet[2380]: E0313 00:46:46.029395 2380 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 13 00:46:46.029805 kubelet[2380]: E0313 00:46:46.029476 2380 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 13 00:46:46.117802 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. Mar 13 00:46:46.128906 kubelet[2380]: I0313 00:46:46.128825 2380 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:46:46.129513 kubelet[2380]: E0313 00:46:46.129304 2380 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:46:46.129513 kubelet[2380]: E0313 00:46:46.129305 2380 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Mar 13 00:46:46.132445 systemd[1]: Created slice kubepods-burstable-pod2fb1dea6b6a3a3dfc55cb0a74feb5c4e.slice - libcontainer container kubepods-burstable-pod2fb1dea6b6a3a3dfc55cb0a74feb5c4e.slice. Mar 13 00:46:46.143850 kubelet[2380]: E0313 00:46:46.143772 2380 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:46:46.147590 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. 
Mar 13 00:46:46.150263 kubelet[2380]: E0313 00:46:46.150127 2380 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:46:46.165837 kubelet[2380]: I0313 00:46:46.165674 2380 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 13 00:46:46.165932 kubelet[2380]: I0313 00:46:46.165758 2380 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fb1dea6b6a3a3dfc55cb0a74feb5c4e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2fb1dea6b6a3a3dfc55cb0a74feb5c4e\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:46.165932 kubelet[2380]: I0313 00:46:46.165921 2380 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:46.166177 kubelet[2380]: I0313 00:46:46.166087 2380 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:46.166177 kubelet[2380]: I0313 00:46:46.166204 2380 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2fb1dea6b6a3a3dfc55cb0a74feb5c4e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2fb1dea6b6a3a3dfc55cb0a74feb5c4e\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:46.166177 kubelet[2380]: I0313 00:46:46.166221 2380 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fb1dea6b6a3a3dfc55cb0a74feb5c4e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2fb1dea6b6a3a3dfc55cb0a74feb5c4e\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:46.166177 kubelet[2380]: I0313 00:46:46.166233 2380 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:46.166177 kubelet[2380]: I0313 00:46:46.166247 2380 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:46.166377 kubelet[2380]: I0313 00:46:46.166261 2380 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:46.166512 kubelet[2380]: E0313 00:46:46.166396 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="400ms" Mar 13 00:46:46.333290 kubelet[2380]: I0313 00:46:46.332090 2380 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:46:46.333290 kubelet[2380]: E0313 00:46:46.332568 2380 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Mar 13 00:46:46.434234 kubelet[2380]: E0313 00:46:46.434191 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:46.436189 containerd[1581]: time="2026-03-13T00:46:46.436011052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:46.447490 kubelet[2380]: E0313 00:46:46.447346 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:46.447946 containerd[1581]: time="2026-03-13T00:46:46.447888643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2fb1dea6b6a3a3dfc55cb0a74feb5c4e,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:46.452458 kubelet[2380]: E0313 00:46:46.452394 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:46.452981 containerd[1581]: time="2026-03-13T00:46:46.452801186Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:46.567887 kubelet[2380]: E0313 00:46:46.567692 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="800ms" Mar 13 00:46:46.734900 kubelet[2380]: I0313 00:46:46.734756 2380 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:46:46.735311 kubelet[2380]: E0313 00:46:46.735141 2380 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Mar 13 00:46:46.835099 kubelet[2380]: E0313 00:46:46.834975 2380 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:46:46.848499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010351061.mount: Deactivated successfully. 
Mar 13 00:46:46.857638 containerd[1581]: time="2026-03-13T00:46:46.857458751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:46:46.861857 containerd[1581]: time="2026-03-13T00:46:46.861743439Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 13 00:46:46.864932 containerd[1581]: time="2026-03-13T00:46:46.864823361Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:46:46.867405 containerd[1581]: time="2026-03-13T00:46:46.867348832Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:46:46.868827 containerd[1581]: time="2026-03-13T00:46:46.868669148Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:46:46.870108 containerd[1581]: time="2026-03-13T00:46:46.869917432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:46:46.871473 containerd[1581]: time="2026-03-13T00:46:46.871298047Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:46:46.872839 containerd[1581]: time="2026-03-13T00:46:46.872753536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 
00:46:46.873457 containerd[1581]: time="2026-03-13T00:46:46.873280307Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 435.329629ms" Mar 13 00:46:46.877630 containerd[1581]: time="2026-03-13T00:46:46.877393128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 423.095213ms" Mar 13 00:46:46.878326 containerd[1581]: time="2026-03-13T00:46:46.878278707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 428.882554ms" Mar 13 00:46:46.903903 containerd[1581]: time="2026-03-13T00:46:46.903828340Z" level=info msg="connecting to shim 17d6c98cffa1e7ec4da1f44c56a83e3a67825409fb8f2c0d5400f6834f8e3bf0" address="unix:///run/containerd/s/d1d4820032afa1d68d1e5126d85bccecf8d892dc4859449165e484fd6201c291" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:46.921881 containerd[1581]: time="2026-03-13T00:46:46.921813189Z" level=info msg="connecting to shim 82025c3411db310d3a158da637b5d7d1fad866d65f91caf980969963fd306d8c" address="unix:///run/containerd/s/45e13d2be38f1ab62102fd6428228023c516e7c6142afab09a95372472c71e07" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:46.930083 containerd[1581]: time="2026-03-13T00:46:46.929546269Z" level=info msg="connecting to shim 
dc5aaeb969a613e9088906380f404379798cef7caa11be3f21d2d76221587bb0" address="unix:///run/containerd/s/2ee9ce9f0d91f8530a337efbd92051c43d49d5165caf7bb286b57ef141a0d55b" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:46.938453 systemd[1]: Started cri-containerd-17d6c98cffa1e7ec4da1f44c56a83e3a67825409fb8f2c0d5400f6834f8e3bf0.scope - libcontainer container 17d6c98cffa1e7ec4da1f44c56a83e3a67825409fb8f2c0d5400f6834f8e3bf0. Mar 13 00:46:46.973313 systemd[1]: Started cri-containerd-82025c3411db310d3a158da637b5d7d1fad866d65f91caf980969963fd306d8c.scope - libcontainer container 82025c3411db310d3a158da637b5d7d1fad866d65f91caf980969963fd306d8c. Mar 13 00:46:46.976917 systemd[1]: Started cri-containerd-dc5aaeb969a613e9088906380f404379798cef7caa11be3f21d2d76221587bb0.scope - libcontainer container dc5aaeb969a613e9088906380f404379798cef7caa11be3f21d2d76221587bb0. Mar 13 00:46:47.010673 containerd[1581]: time="2026-03-13T00:46:47.010403666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"17d6c98cffa1e7ec4da1f44c56a83e3a67825409fb8f2c0d5400f6834f8e3bf0\"" Mar 13 00:46:47.012977 kubelet[2380]: E0313 00:46:47.012628 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:47.019598 containerd[1581]: time="2026-03-13T00:46:47.019534278Z" level=info msg="CreateContainer within sandbox \"17d6c98cffa1e7ec4da1f44c56a83e3a67825409fb8f2c0d5400f6834f8e3bf0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:46:47.029896 kubelet[2380]: E0313 00:46:47.029818 2380 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:46:47.032889 containerd[1581]: time="2026-03-13T00:46:47.032847351Z" level=info msg="Container 9e4dd1e2101046311142790e729d57ee540be7b595072a602c1ec2bdc51e71ac: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:47.046536 containerd[1581]: time="2026-03-13T00:46:47.046458331Z" level=info msg="CreateContainer within sandbox \"17d6c98cffa1e7ec4da1f44c56a83e3a67825409fb8f2c0d5400f6834f8e3bf0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e4dd1e2101046311142790e729d57ee540be7b595072a602c1ec2bdc51e71ac\"" Mar 13 00:46:47.048477 containerd[1581]: time="2026-03-13T00:46:47.048344066Z" level=info msg="StartContainer for \"9e4dd1e2101046311142790e729d57ee540be7b595072a602c1ec2bdc51e71ac\"" Mar 13 00:46:47.048477 containerd[1581]: time="2026-03-13T00:46:47.048463841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc5aaeb969a613e9088906380f404379798cef7caa11be3f21d2d76221587bb0\"" Mar 13 00:46:47.050700 kubelet[2380]: E0313 00:46:47.050600 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:47.054836 containerd[1581]: time="2026-03-13T00:46:47.054669664Z" level=info msg="connecting to shim 9e4dd1e2101046311142790e729d57ee540be7b595072a602c1ec2bdc51e71ac" address="unix:///run/containerd/s/d1d4820032afa1d68d1e5126d85bccecf8d892dc4859449165e484fd6201c291" protocol=ttrpc version=3 Mar 13 00:46:47.057966 containerd[1581]: time="2026-03-13T00:46:47.057925533Z" level=info msg="CreateContainer within sandbox \"dc5aaeb969a613e9088906380f404379798cef7caa11be3f21d2d76221587bb0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 
00:46:47.061642 containerd[1581]: time="2026-03-13T00:46:47.061567224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2fb1dea6b6a3a3dfc55cb0a74feb5c4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"82025c3411db310d3a158da637b5d7d1fad866d65f91caf980969963fd306d8c\"" Mar 13 00:46:47.062961 kubelet[2380]: E0313 00:46:47.062842 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:47.069504 containerd[1581]: time="2026-03-13T00:46:47.069435837Z" level=info msg="CreateContainer within sandbox \"82025c3411db310d3a158da637b5d7d1fad866d65f91caf980969963fd306d8c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:46:47.072083 containerd[1581]: time="2026-03-13T00:46:47.071836729Z" level=info msg="Container b60c7400b8da068f060c3d935e0cf9ce6d20a3ee0aeb29bf155d0cc954a056b3: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:47.084504 containerd[1581]: time="2026-03-13T00:46:47.084147577Z" level=info msg="CreateContainer within sandbox \"dc5aaeb969a613e9088906380f404379798cef7caa11be3f21d2d76221587bb0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b60c7400b8da068f060c3d935e0cf9ce6d20a3ee0aeb29bf155d0cc954a056b3\"" Mar 13 00:46:47.084293 systemd[1]: Started cri-containerd-9e4dd1e2101046311142790e729d57ee540be7b595072a602c1ec2bdc51e71ac.scope - libcontainer container 9e4dd1e2101046311142790e729d57ee540be7b595072a602c1ec2bdc51e71ac. 
Mar 13 00:46:47.084930 containerd[1581]: time="2026-03-13T00:46:47.084267604Z" level=info msg="Container f37f3e9719cf420767271dc18f41dcf5cea3a9cdcdd6dac2621a8042a37ddb1b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:47.085549 containerd[1581]: time="2026-03-13T00:46:47.085530731Z" level=info msg="StartContainer for \"b60c7400b8da068f060c3d935e0cf9ce6d20a3ee0aeb29bf155d0cc954a056b3\"" Mar 13 00:46:47.087474 containerd[1581]: time="2026-03-13T00:46:47.087454570Z" level=info msg="connecting to shim b60c7400b8da068f060c3d935e0cf9ce6d20a3ee0aeb29bf155d0cc954a056b3" address="unix:///run/containerd/s/2ee9ce9f0d91f8530a337efbd92051c43d49d5165caf7bb286b57ef141a0d55b" protocol=ttrpc version=3 Mar 13 00:46:47.095165 containerd[1581]: time="2026-03-13T00:46:47.094659911Z" level=info msg="CreateContainer within sandbox \"82025c3411db310d3a158da637b5d7d1fad866d65f91caf980969963fd306d8c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f37f3e9719cf420767271dc18f41dcf5cea3a9cdcdd6dac2621a8042a37ddb1b\"" Mar 13 00:46:47.097125 containerd[1581]: time="2026-03-13T00:46:47.097009595Z" level=info msg="StartContainer for \"f37f3e9719cf420767271dc18f41dcf5cea3a9cdcdd6dac2621a8042a37ddb1b\"" Mar 13 00:46:47.104473 containerd[1581]: time="2026-03-13T00:46:47.104168360Z" level=info msg="connecting to shim f37f3e9719cf420767271dc18f41dcf5cea3a9cdcdd6dac2621a8042a37ddb1b" address="unix:///run/containerd/s/45e13d2be38f1ab62102fd6428228023c516e7c6142afab09a95372472c71e07" protocol=ttrpc version=3 Mar 13 00:46:47.117684 kubelet[2380]: E0313 00:46:47.117595 2380 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:46:47.118429 systemd[1]: Started 
cri-containerd-b60c7400b8da068f060c3d935e0cf9ce6d20a3ee0aeb29bf155d0cc954a056b3.scope - libcontainer container b60c7400b8da068f060c3d935e0cf9ce6d20a3ee0aeb29bf155d0cc954a056b3. Mar 13 00:46:47.143638 systemd[1]: Started cri-containerd-f37f3e9719cf420767271dc18f41dcf5cea3a9cdcdd6dac2621a8042a37ddb1b.scope - libcontainer container f37f3e9719cf420767271dc18f41dcf5cea3a9cdcdd6dac2621a8042a37ddb1b. Mar 13 00:46:47.189126 containerd[1581]: time="2026-03-13T00:46:47.188997561Z" level=info msg="StartContainer for \"9e4dd1e2101046311142790e729d57ee540be7b595072a602c1ec2bdc51e71ac\" returns successfully" Mar 13 00:46:47.227989 containerd[1581]: time="2026-03-13T00:46:47.227494354Z" level=info msg="StartContainer for \"b60c7400b8da068f060c3d935e0cf9ce6d20a3ee0aeb29bf155d0cc954a056b3\" returns successfully" Mar 13 00:46:47.239581 containerd[1581]: time="2026-03-13T00:46:47.239364883Z" level=info msg="StartContainer for \"f37f3e9719cf420767271dc18f41dcf5cea3a9cdcdd6dac2621a8042a37ddb1b\" returns successfully" Mar 13 00:46:47.370149 kubelet[2380]: E0313 00:46:47.369402 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="1.6s" Mar 13 00:46:47.538494 kubelet[2380]: I0313 00:46:47.538426 2380 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:46:48.019873 kubelet[2380]: E0313 00:46:48.019476 2380 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:46:48.019873 kubelet[2380]: E0313 00:46:48.019694 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:48.023203 kubelet[2380]: E0313 00:46:48.023170 2380 
kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:46:48.024154 kubelet[2380]: E0313 00:46:48.024012 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:48.026557 kubelet[2380]: E0313 00:46:48.026491 2380 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:46:48.026690 kubelet[2380]: E0313 00:46:48.026630 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:49.033876 kubelet[2380]: E0313 00:46:49.033814 2380 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:46:49.034364 kubelet[2380]: E0313 00:46:49.034189 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:49.035481 kubelet[2380]: E0313 00:46:49.035386 2380 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:46:49.035658 kubelet[2380]: E0313 00:46:49.035603 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:49.143233 kubelet[2380]: E0313 00:46:49.143154 2380 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 13 00:46:49.265371 kubelet[2380]: E0313 
00:46:49.265247 2380 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189c40138aefeb95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-13 00:46:45.956668309 +0000 UTC m=+1.143998786,LastTimestamp:2026-03-13 00:46:45.956668309 +0000 UTC m=+1.143998786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 13 00:46:49.359655 kubelet[2380]: I0313 00:46:49.359014 2380 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 13 00:46:49.359655 kubelet[2380]: E0313 00:46:49.359198 2380 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 13 00:46:49.389138 kubelet[2380]: E0313 00:46:49.388941 2380 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:46:49.489840 kubelet[2380]: E0313 00:46:49.489760 2380 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:46:49.590382 kubelet[2380]: E0313 00:46:49.590273 2380 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:46:49.665011 kubelet[2380]: I0313 00:46:49.664755 2380 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:49.671418 kubelet[2380]: E0313 00:46:49.671240 2380 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:49.671418 kubelet[2380]: I0313 00:46:49.671291 2380 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:49.673247 kubelet[2380]: E0313 00:46:49.673188 2380 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:49.673247 kubelet[2380]: I0313 00:46:49.673229 2380 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:46:49.675117 kubelet[2380]: E0313 00:46:49.674962 2380 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 13 00:46:49.951269 kubelet[2380]: I0313 00:46:49.951114 2380 apiserver.go:52] "Watching apiserver" Mar 13 00:46:49.964380 kubelet[2380]: I0313 00:46:49.964301 2380 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:46:50.030705 kubelet[2380]: I0313 00:46:50.030605 2380 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:50.032888 kubelet[2380]: E0313 00:46:50.032695 2380 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:50.033023 kubelet[2380]: E0313 00:46:50.032932 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:51.291008 systemd[1]: Reload requested from client PID 2668 ('systemctl') (unit session-7.scope)... 
Mar 13 00:46:51.291172 systemd[1]: Reloading... Mar 13 00:46:51.308551 kubelet[2380]: I0313 00:46:51.308415 2380 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:46:51.317136 kubelet[2380]: E0313 00:46:51.316649 2380 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:51.394270 zram_generator::config[2711]: No configuration found. Mar 13 00:46:51.653961 systemd[1]: Reloading finished in 362 ms. Mar 13 00:46:51.683428 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:46:51.707556 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:46:51.707904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:46:51.707979 systemd[1]: kubelet.service: Consumed 1.593s CPU time, 127M memory peak. Mar 13 00:46:51.710425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:46:51.919977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:46:51.934733 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:46:52.001428 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:46:52.003162 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:46:52.003162 kubelet[2756]: I0313 00:46:52.001856 2756 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:46:52.011795 kubelet[2756]: I0313 00:46:52.011768 2756 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:46:52.011883 kubelet[2756]: I0313 00:46:52.011871 2756 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:46:52.011968 kubelet[2756]: I0313 00:46:52.011957 2756 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:46:52.012255 kubelet[2756]: I0313 00:46:52.012238 2756 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:46:52.012591 kubelet[2756]: I0313 00:46:52.012571 2756 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:46:52.014330 kubelet[2756]: I0313 00:46:52.014314 2756 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:46:52.017105 kubelet[2756]: I0313 00:46:52.016890 2756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:46:52.025138 kubelet[2756]: I0313 00:46:52.024228 2756 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:46:52.031769 kubelet[2756]: I0313 00:46:52.031701 2756 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 00:46:52.032277 kubelet[2756]: I0313 00:46:52.032200 2756 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:46:52.032563 kubelet[2756]: I0313 00:46:52.032258 2756 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:46:52.032563 kubelet[2756]: I0313 00:46:52.032424 2756 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:46:52.032563 
kubelet[2756]: I0313 00:46:52.032432 2756 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:46:52.032563 kubelet[2756]: I0313 00:46:52.032454 2756 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:46:52.032822 kubelet[2756]: I0313 00:46:52.032659 2756 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:46:52.032940 kubelet[2756]: I0313 00:46:52.032896 2756 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:46:52.032940 kubelet[2756]: I0313 00:46:52.032909 2756 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:46:52.032940 kubelet[2756]: I0313 00:46:52.032928 2756 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:46:52.032940 kubelet[2756]: I0313 00:46:52.032939 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:46:52.035416 kubelet[2756]: I0313 00:46:52.034401 2756 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:46:52.035416 kubelet[2756]: I0313 00:46:52.035183 2756 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:46:52.035416 kubelet[2756]: I0313 00:46:52.035222 2756 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:46:52.040480 kubelet[2756]: I0313 00:46:52.040461 2756 server.go:1262] "Started kubelet" Mar 13 00:46:52.040935 kubelet[2756]: I0313 00:46:52.040844 2756 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:46:52.040935 kubelet[2756]: I0313 00:46:52.040929 2756 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:46:52.041369 kubelet[2756]: I0313 00:46:52.041277 2756 server.go:249] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:46:52.043203 kubelet[2756]: I0313 00:46:52.042115 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:46:52.045135 kubelet[2756]: I0313 00:46:52.044410 2756 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:46:52.050106 kubelet[2756]: I0313 00:46:52.049937 2756 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:46:52.050437 kubelet[2756]: I0313 00:46:52.050370 2756 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:46:52.050593 kubelet[2756]: E0313 00:46:52.050498 2756 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:46:52.052504 kubelet[2756]: I0313 00:46:52.050721 2756 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:46:52.052504 kubelet[2756]: I0313 00:46:52.050867 2756 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:46:52.052504 kubelet[2756]: I0313 00:46:52.052122 2756 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:46:52.056920 kubelet[2756]: I0313 00:46:52.056680 2756 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:46:52.057607 kubelet[2756]: I0313 00:46:52.057121 2756 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:46:52.059552 kubelet[2756]: I0313 00:46:52.059222 2756 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:46:52.095614 kubelet[2756]: I0313 00:46:52.095400 2756 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 13 00:46:52.099609 kubelet[2756]: I0313 00:46:52.099567 2756 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 13 00:46:52.099609 kubelet[2756]: I0313 00:46:52.099588 2756 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:46:52.099609 kubelet[2756]: I0313 00:46:52.099606 2756 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:46:52.100597 kubelet[2756]: E0313 00:46:52.099901 2756 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:46:52.109177 kubelet[2756]: I0313 00:46:52.108516 2756 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:46:52.109177 kubelet[2756]: I0313 00:46:52.108539 2756 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:46:52.109177 kubelet[2756]: I0313 00:46:52.108564 2756 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:46:52.109177 kubelet[2756]: I0313 00:46:52.108743 2756 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:46:52.109177 kubelet[2756]: I0313 00:46:52.108768 2756 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:46:52.109177 kubelet[2756]: I0313 00:46:52.108791 2756 policy_none.go:49] "None policy: Start" Mar 13 00:46:52.109177 kubelet[2756]: I0313 00:46:52.108803 2756 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:46:52.109177 kubelet[2756]: I0313 00:46:52.108817 2756 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:46:52.112400 kubelet[2756]: I0313 00:46:52.112355 2756 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 00:46:52.112400 kubelet[2756]: I0313 00:46:52.112375 2756 policy_none.go:47] "Start" Mar 13 00:46:52.118691 kubelet[2756]: E0313 00:46:52.118597 2756 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:46:52.118878 kubelet[2756]: I0313 00:46:52.118776 2756 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:46:52.118878 kubelet[2756]: I0313 00:46:52.118824 2756 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:46:52.119211 kubelet[2756]: I0313 00:46:52.119128 2756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:46:52.121795 kubelet[2756]: E0313 00:46:52.120380 2756 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:46:52.204130 kubelet[2756]: I0313 00:46:52.201991 2756 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:46:52.204130 kubelet[2756]: I0313 00:46:52.202745 2756 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:52.204130 kubelet[2756]: I0313 00:46:52.203383 2756 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:52.211497 kubelet[2756]: E0313 00:46:52.211426 2756 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 13 00:46:52.230788 kubelet[2756]: I0313 00:46:52.230700 2756 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:46:52.239795 kubelet[2756]: I0313 00:46:52.239748 2756 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 13 00:46:52.239891 kubelet[2756]: I0313 00:46:52.239859 2756 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 13 00:46:52.251790 kubelet[2756]: I0313 00:46:52.251576 2756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 13 00:46:52.251790 kubelet[2756]: I0313 00:46:52.251659 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fb1dea6b6a3a3dfc55cb0a74feb5c4e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2fb1dea6b6a3a3dfc55cb0a74feb5c4e\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:52.251790 kubelet[2756]: I0313 00:46:52.251758 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:52.251790 kubelet[2756]: I0313 00:46:52.251784 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:52.252306 kubelet[2756]: I0313 00:46:52.251811 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:52.252306 kubelet[2756]: I0313 00:46:52.251832 2756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:52.252306 kubelet[2756]: I0313 00:46:52.251853 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:46:52.252306 kubelet[2756]: I0313 00:46:52.251877 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fb1dea6b6a3a3dfc55cb0a74feb5c4e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2fb1dea6b6a3a3dfc55cb0a74feb5c4e\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:52.252306 kubelet[2756]: I0313 00:46:52.251899 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fb1dea6b6a3a3dfc55cb0a74feb5c4e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2fb1dea6b6a3a3dfc55cb0a74feb5c4e\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:52.316445 sudo[2797]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 13 00:46:52.316827 sudo[2797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 13 00:46:52.512760 kubelet[2756]: E0313 00:46:52.512588 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Mar 13 00:46:52.512929 kubelet[2756]: E0313 00:46:52.512808 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:52.513164 kubelet[2756]: E0313 00:46:52.513020 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:52.692738 sudo[2797]: pam_unix(sudo:session): session closed for user root Mar 13 00:46:53.034654 kubelet[2756]: I0313 00:46:53.034594 2756 apiserver.go:52] "Watching apiserver" Mar 13 00:46:53.051953 kubelet[2756]: I0313 00:46:53.051780 2756 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:46:53.118276 kubelet[2756]: I0313 00:46:53.116745 2756 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:46:53.119938 kubelet[2756]: I0313 00:46:53.119922 2756 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:53.120223 kubelet[2756]: E0313 00:46:53.119975 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:53.128975 kubelet[2756]: E0313 00:46:53.128930 2756 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 13 00:46:53.129271 kubelet[2756]: E0313 00:46:53.129205 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:53.129562 kubelet[2756]: E0313 00:46:53.129414 2756 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 13 00:46:53.129754 kubelet[2756]: E0313 00:46:53.129718 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:53.156766 kubelet[2756]: I0313 00:46:53.156605 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.156589457 podStartE2EDuration="1.156589457s" podCreationTimestamp="2026-03-13 00:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:46:53.155950361 +0000 UTC m=+1.214007891" watchObservedRunningTime="2026-03-13 00:46:53.156589457 +0000 UTC m=+1.214646967" Mar 13 00:46:53.156968 kubelet[2756]: I0313 00:46:53.156810 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.156802593 podStartE2EDuration="2.156802593s" podCreationTimestamp="2026-03-13 00:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:46:53.14645523 +0000 UTC m=+1.204512741" watchObservedRunningTime="2026-03-13 00:46:53.156802593 +0000 UTC m=+1.214860103" Mar 13 00:46:53.165814 kubelet[2756]: I0313 00:46:53.165731 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.165720729 podStartE2EDuration="1.165720729s" podCreationTimestamp="2026-03-13 00:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:46:53.164905101 +0000 UTC m=+1.222962600" watchObservedRunningTime="2026-03-13 00:46:53.165720729 +0000 UTC 
m=+1.223778249" Mar 13 00:46:53.978573 sudo[1793]: pam_unix(sudo:session): session closed for user root Mar 13 00:46:53.980197 sshd[1792]: Connection closed by 10.0.0.1 port 50620 Mar 13 00:46:53.980518 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Mar 13 00:46:53.985302 systemd[1]: sshd@6-10.0.0.109:22-10.0.0.1:50620.service: Deactivated successfully. Mar 13 00:46:53.987818 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:46:53.988248 systemd[1]: session-7.scope: Consumed 6.802s CPU time, 272.2M memory peak. Mar 13 00:46:53.990211 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:46:53.992089 systemd-logind[1570]: Removed session 7. Mar 13 00:46:54.118565 kubelet[2756]: E0313 00:46:54.118475 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:54.118996 kubelet[2756]: E0313 00:46:54.118672 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:55.120880 kubelet[2756]: E0313 00:46:55.120807 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:55.120880 kubelet[2756]: E0313 00:46:55.120833 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:56.586008 kubelet[2756]: I0313 00:46:56.585905 2756 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:46:56.586671 containerd[1581]: time="2026-03-13T00:46:56.586598570Z" level=info msg="No cni config template is specified, wait for other system components 
to drop the config." Mar 13 00:46:56.587124 kubelet[2756]: I0313 00:46:56.586803 2756 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:46:57.422333 systemd[1]: Created slice kubepods-besteffort-pod0e9337ab_5381_4936_ac01_9f9c9a25826e.slice - libcontainer container kubepods-besteffort-pod0e9337ab_5381_4936_ac01_9f9c9a25826e.slice. Mar 13 00:46:57.439280 systemd[1]: Created slice kubepods-burstable-pod85adadd9_5ab9_406e_b45d_e48d59355591.slice - libcontainer container kubepods-burstable-pod85adadd9_5ab9_406e_b45d_e48d59355591.slice. Mar 13 00:46:57.488911 kubelet[2756]: I0313 00:46:57.488780 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e9337ab-5381-4936-ac01-9f9c9a25826e-kube-proxy\") pod \"kube-proxy-t77vl\" (UID: \"0e9337ab-5381-4936-ac01-9f9c9a25826e\") " pod="kube-system/kube-proxy-t77vl" Mar 13 00:46:57.488911 kubelet[2756]: I0313 00:46:57.488886 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-run\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.488911 kubelet[2756]: I0313 00:46:57.488912 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-bpf-maps\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489193 kubelet[2756]: I0313 00:46:57.488935 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-etc-cni-netd\") pod \"cilium-s88rx\" (UID: 
\"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489193 kubelet[2756]: I0313 00:46:57.489114 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-xtables-lock\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489193 kubelet[2756]: I0313 00:46:57.489148 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e9337ab-5381-4936-ac01-9f9c9a25826e-xtables-lock\") pod \"kube-proxy-t77vl\" (UID: \"0e9337ab-5381-4936-ac01-9f9c9a25826e\") " pod="kube-system/kube-proxy-t77vl" Mar 13 00:46:57.489193 kubelet[2756]: I0313 00:46:57.489169 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e9337ab-5381-4936-ac01-9f9c9a25826e-lib-modules\") pod \"kube-proxy-t77vl\" (UID: \"0e9337ab-5381-4936-ac01-9f9c9a25826e\") " pod="kube-system/kube-proxy-t77vl" Mar 13 00:46:57.489193 kubelet[2756]: I0313 00:46:57.489188 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cni-path\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489377 kubelet[2756]: I0313 00:46:57.489214 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-lib-modules\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489377 kubelet[2756]: I0313 00:46:57.489342 2756 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/85adadd9-5ab9-406e-b45d-e48d59355591-clustermesh-secrets\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489377 kubelet[2756]: I0313 00:46:57.489367 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-config-path\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489456 kubelet[2756]: I0313 00:46:57.489424 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85adadd9-5ab9-406e-b45d-e48d59355591-hubble-tls\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489563 kubelet[2756]: I0313 00:46:57.489514 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-cgroup\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489563 kubelet[2756]: I0313 00:46:57.489544 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-host-proc-sys-net\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489620 kubelet[2756]: I0313 00:46:57.489562 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s9fsq\" (UniqueName: \"kubernetes.io/projected/0e9337ab-5381-4936-ac01-9f9c9a25826e-kube-api-access-s9fsq\") pod \"kube-proxy-t77vl\" (UID: \"0e9337ab-5381-4936-ac01-9f9c9a25826e\") " pod="kube-system/kube-proxy-t77vl" Mar 13 00:46:57.489620 kubelet[2756]: I0313 00:46:57.489580 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-hostproc\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489620 kubelet[2756]: I0313 00:46:57.489593 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-host-proc-sys-kernel\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.489620 kubelet[2756]: I0313 00:46:57.489606 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l72v\" (UniqueName: \"kubernetes.io/projected/85adadd9-5ab9-406e-b45d-e48d59355591-kube-api-access-4l72v\") pod \"cilium-s88rx\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") " pod="kube-system/cilium-s88rx" Mar 13 00:46:57.738323 kubelet[2756]: E0313 00:46:57.737417 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:57.742742 containerd[1581]: time="2026-03-13T00:46:57.742705666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t77vl,Uid:0e9337ab-5381-4936-ac01-9f9c9a25826e,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:57.748140 systemd[1]: Created slice kubepods-besteffort-pod4faad7d8_0159_43ef_8a0f_0338ab29acb0.slice - libcontainer container 
kubepods-besteffort-pod4faad7d8_0159_43ef_8a0f_0338ab29acb0.slice. Mar 13 00:46:57.753428 kubelet[2756]: E0313 00:46:57.753337 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:57.753896 containerd[1581]: time="2026-03-13T00:46:57.753855628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s88rx,Uid:85adadd9-5ab9-406e-b45d-e48d59355591,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:57.785419 containerd[1581]: time="2026-03-13T00:46:57.785385740Z" level=info msg="connecting to shim 96c4db7348971d4473a244fc73c6c01f39ea0e86791c1a0f03efa34963fc694c" address="unix:///run/containerd/s/5f5f3ec943df9015b0a4d356b1d7d4a6d048d779727168805056688d5fef5186" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:57.791826 kubelet[2756]: I0313 00:46:57.791800 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4746\" (UniqueName: \"kubernetes.io/projected/4faad7d8-0159-43ef-8a0f-0338ab29acb0-kube-api-access-l4746\") pod \"cilium-operator-6f9c7c5859-8nnbl\" (UID: \"4faad7d8-0159-43ef-8a0f-0338ab29acb0\") " pod="kube-system/cilium-operator-6f9c7c5859-8nnbl" Mar 13 00:46:57.792121 kubelet[2756]: I0313 00:46:57.792107 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4faad7d8-0159-43ef-8a0f-0338ab29acb0-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-8nnbl\" (UID: \"4faad7d8-0159-43ef-8a0f-0338ab29acb0\") " pod="kube-system/cilium-operator-6f9c7c5859-8nnbl" Mar 13 00:46:57.795822 containerd[1581]: time="2026-03-13T00:46:57.795798707Z" level=info msg="connecting to shim 43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433" address="unix:///run/containerd/s/44274dd12d8f19bc325d729bf978cf4243f60b74fdb411206bb3725979010ebb" 
namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:57.828318 systemd[1]: Started cri-containerd-96c4db7348971d4473a244fc73c6c01f39ea0e86791c1a0f03efa34963fc694c.scope - libcontainer container 96c4db7348971d4473a244fc73c6c01f39ea0e86791c1a0f03efa34963fc694c. Mar 13 00:46:57.832577 systemd[1]: Started cri-containerd-43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433.scope - libcontainer container 43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433. Mar 13 00:46:57.886001 containerd[1581]: time="2026-03-13T00:46:57.885700516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s88rx,Uid:85adadd9-5ab9-406e-b45d-e48d59355591,Namespace:kube-system,Attempt:0,} returns sandbox id \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\"" Mar 13 00:46:57.887563 containerd[1581]: time="2026-03-13T00:46:57.887424828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t77vl,Uid:0e9337ab-5381-4936-ac01-9f9c9a25826e,Namespace:kube-system,Attempt:0,} returns sandbox id \"96c4db7348971d4473a244fc73c6c01f39ea0e86791c1a0f03efa34963fc694c\"" Mar 13 00:46:57.888157 kubelet[2756]: E0313 00:46:57.887998 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:57.889493 kubelet[2756]: E0313 00:46:57.888447 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:57.891512 containerd[1581]: time="2026-03-13T00:46:57.891412526Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 13 00:46:57.899660 containerd[1581]: time="2026-03-13T00:46:57.899576227Z" level=info msg="CreateContainer within sandbox \"96c4db7348971d4473a244fc73c6c01f39ea0e86791c1a0f03efa34963fc694c\" 
for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:46:57.917807 containerd[1581]: time="2026-03-13T00:46:57.917496362Z" level=info msg="Container 49aa37eb2a9828d1466ad3b5c8fc82520cf071b32a57fe03dcb70510d2bc6393: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:46:57.932116 containerd[1581]: time="2026-03-13T00:46:57.931876015Z" level=info msg="CreateContainer within sandbox \"96c4db7348971d4473a244fc73c6c01f39ea0e86791c1a0f03efa34963fc694c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49aa37eb2a9828d1466ad3b5c8fc82520cf071b32a57fe03dcb70510d2bc6393\"" Mar 13 00:46:57.933388 containerd[1581]: time="2026-03-13T00:46:57.933154848Z" level=info msg="StartContainer for \"49aa37eb2a9828d1466ad3b5c8fc82520cf071b32a57fe03dcb70510d2bc6393\"" Mar 13 00:46:57.934815 containerd[1581]: time="2026-03-13T00:46:57.934666244Z" level=info msg="connecting to shim 49aa37eb2a9828d1466ad3b5c8fc82520cf071b32a57fe03dcb70510d2bc6393" address="unix:///run/containerd/s/5f5f3ec943df9015b0a4d356b1d7d4a6d048d779727168805056688d5fef5186" protocol=ttrpc version=3 Mar 13 00:46:57.974395 systemd[1]: Started cri-containerd-49aa37eb2a9828d1466ad3b5c8fc82520cf071b32a57fe03dcb70510d2bc6393.scope - libcontainer container 49aa37eb2a9828d1466ad3b5c8fc82520cf071b32a57fe03dcb70510d2bc6393. 
Mar 13 00:46:58.055611 kubelet[2756]: E0313 00:46:58.054999 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:58.056545 containerd[1581]: time="2026-03-13T00:46:58.056403733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-8nnbl,Uid:4faad7d8-0159-43ef-8a0f-0338ab29acb0,Namespace:kube-system,Attempt:0,}" Mar 13 00:46:58.100374 containerd[1581]: time="2026-03-13T00:46:58.100176357Z" level=info msg="StartContainer for \"49aa37eb2a9828d1466ad3b5c8fc82520cf071b32a57fe03dcb70510d2bc6393\" returns successfully" Mar 13 00:46:58.121694 containerd[1581]: time="2026-03-13T00:46:58.121653067Z" level=info msg="connecting to shim eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59" address="unix:///run/containerd/s/3d0824d3c20adc4850cef9b3cb7056328c210ac71742beef14a6ca0bff5a6933" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:46:58.136673 kubelet[2756]: E0313 00:46:58.136639 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:58.169609 systemd[1]: Started cri-containerd-eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59.scope - libcontainer container eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59. 
Mar 13 00:46:58.321637 containerd[1581]: time="2026-03-13T00:46:58.321459589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-8nnbl,Uid:4faad7d8-0159-43ef-8a0f-0338ab29acb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59\"" Mar 13 00:46:58.323607 kubelet[2756]: E0313 00:46:58.323544 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:58.381146 kubelet[2756]: E0313 00:46:58.380995 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:46:58.401679 kubelet[2756]: I0313 00:46:58.401527 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t77vl" podStartSLOduration=1.401508833 podStartE2EDuration="1.401508833s" podCreationTimestamp="2026-03-13 00:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:46:58.167700775 +0000 UTC m=+6.225758276" watchObservedRunningTime="2026-03-13 00:46:58.401508833 +0000 UTC m=+6.459566344" Mar 13 00:46:59.143875 kubelet[2756]: E0313 00:46:59.143709 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:04.665825 kubelet[2756]: E0313 00:47:04.665548 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:04.947290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901562389.mount: Deactivated successfully. 
Mar 13 00:47:04.965616 kubelet[2756]: E0313 00:47:04.964991 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:05.166304 kubelet[2756]: E0313 00:47:05.166267 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:06.365929 update_engine[1572]: I20260313 00:47:06.365726 1572 update_attempter.cc:509] Updating boot flags... Mar 13 00:47:07.215772 containerd[1581]: time="2026-03-13T00:47:07.215671075Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:47:07.216815 containerd[1581]: time="2026-03-13T00:47:07.216736423Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 13 00:47:07.217882 containerd[1581]: time="2026-03-13T00:47:07.217816848Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:47:07.219004 containerd[1581]: time="2026-03-13T00:47:07.218930440Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.327430734s" Mar 13 00:47:07.219120 containerd[1581]: time="2026-03-13T00:47:07.219013621Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 13 00:47:07.220610 containerd[1581]: time="2026-03-13T00:47:07.220392029Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 13 00:47:07.226611 containerd[1581]: time="2026-03-13T00:47:07.226515083Z" level=info msg="CreateContainer within sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 00:47:07.238373 containerd[1581]: time="2026-03-13T00:47:07.238312258Z" level=info msg="Container 2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:47:07.247618 containerd[1581]: time="2026-03-13T00:47:07.247562956Z" level=info msg="CreateContainer within sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\"" Mar 13 00:47:07.248311 containerd[1581]: time="2026-03-13T00:47:07.248169929Z" level=info msg="StartContainer for \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\"" Mar 13 00:47:07.249282 containerd[1581]: time="2026-03-13T00:47:07.249231620Z" level=info msg="connecting to shim 2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d" address="unix:///run/containerd/s/44274dd12d8f19bc325d729bf978cf4243f60b74fdb411206bb3725979010ebb" protocol=ttrpc version=3 Mar 13 00:47:07.298227 systemd[1]: Started cri-containerd-2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d.scope - libcontainer container 2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d. 
Mar 13 00:47:07.350889 containerd[1581]: time="2026-03-13T00:47:07.350831055Z" level=info msg="StartContainer for \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\" returns successfully" Mar 13 00:47:07.376313 systemd[1]: cri-containerd-2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d.scope: Deactivated successfully. Mar 13 00:47:07.378148 containerd[1581]: time="2026-03-13T00:47:07.378112944Z" level=info msg="received container exit event container_id:\"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\" id:\"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\" pid:3205 exited_at:{seconds:1773362827 nanos:377580522}" Mar 13 00:47:07.409637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d-rootfs.mount: Deactivated successfully. Mar 13 00:47:08.176696 kubelet[2756]: E0313 00:47:08.176399 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:08.189561 containerd[1581]: time="2026-03-13T00:47:08.189398302Z" level=info msg="CreateContainer within sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 00:47:08.202382 containerd[1581]: time="2026-03-13T00:47:08.202277543Z" level=info msg="Container 68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:47:08.219499 containerd[1581]: time="2026-03-13T00:47:08.219385536Z" level=info msg="CreateContainer within sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\"" Mar 13 00:47:08.230116 containerd[1581]: 
time="2026-03-13T00:47:08.227213500Z" level=info msg="StartContainer for \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\"" Mar 13 00:47:08.234398 containerd[1581]: time="2026-03-13T00:47:08.234367524Z" level=info msg="connecting to shim 68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa" address="unix:///run/containerd/s/44274dd12d8f19bc325d729bf978cf4243f60b74fdb411206bb3725979010ebb" protocol=ttrpc version=3 Mar 13 00:47:08.285299 systemd[1]: Started cri-containerd-68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa.scope - libcontainer container 68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa. Mar 13 00:47:08.341536 containerd[1581]: time="2026-03-13T00:47:08.341382895Z" level=info msg="StartContainer for \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\" returns successfully" Mar 13 00:47:08.364658 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:47:08.365166 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:47:08.365631 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:47:08.368630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:47:08.372806 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:47:08.373653 systemd[1]: cri-containerd-68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa.scope: Deactivated successfully. Mar 13 00:47:08.376248 containerd[1581]: time="2026-03-13T00:47:08.375828252Z" level=info msg="received container exit event container_id:\"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\" id:\"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\" pid:3262 exited_at:{seconds:1773362828 nanos:374634992}" Mar 13 00:47:08.399875 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 13 00:47:08.418283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa-rootfs.mount: Deactivated successfully. Mar 13 00:47:08.759784 containerd[1581]: time="2026-03-13T00:47:08.759692109Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:47:08.761002 containerd[1581]: time="2026-03-13T00:47:08.760850184Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 13 00:47:08.762280 containerd[1581]: time="2026-03-13T00:47:08.762206477Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:47:08.763560 containerd[1581]: time="2026-03-13T00:47:08.763464093Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.543043892s" Mar 13 00:47:08.763560 containerd[1581]: time="2026-03-13T00:47:08.763527349Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 13 00:47:08.771496 containerd[1581]: time="2026-03-13T00:47:08.771415354Z" level=info msg="CreateContainer within sandbox \"eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59\" for 
container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 13 00:47:08.780525 containerd[1581]: time="2026-03-13T00:47:08.780448949Z" level=info msg="Container ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:47:08.790309 containerd[1581]: time="2026-03-13T00:47:08.790148769Z" level=info msg="CreateContainer within sandbox \"eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\"" Mar 13 00:47:08.790889 containerd[1581]: time="2026-03-13T00:47:08.790805585Z" level=info msg="StartContainer for \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\"" Mar 13 00:47:08.794788 containerd[1581]: time="2026-03-13T00:47:08.794290555Z" level=info msg="connecting to shim ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4" address="unix:///run/containerd/s/3d0824d3c20adc4850cef9b3cb7056328c210ac71742beef14a6ca0bff5a6933" protocol=ttrpc version=3 Mar 13 00:47:08.824233 systemd[1]: Started cri-containerd-ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4.scope - libcontainer container ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4. 
Mar 13 00:47:08.876372 containerd[1581]: time="2026-03-13T00:47:08.876316843Z" level=info msg="StartContainer for \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\" returns successfully" Mar 13 00:47:09.187906 kubelet[2756]: E0313 00:47:09.187767 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:09.196689 kubelet[2756]: E0313 00:47:09.196628 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:09.197374 containerd[1581]: time="2026-03-13T00:47:09.197162404Z" level=info msg="CreateContainer within sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 13 00:47:09.225401 containerd[1581]: time="2026-03-13T00:47:09.224415350Z" level=info msg="Container 9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:47:09.246543 containerd[1581]: time="2026-03-13T00:47:09.246330193Z" level=info msg="CreateContainer within sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\"" Mar 13 00:47:09.248125 containerd[1581]: time="2026-03-13T00:47:09.247862122Z" level=info msg="StartContainer for \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\"" Mar 13 00:47:09.250790 containerd[1581]: time="2026-03-13T00:47:09.250710393Z" level=info msg="connecting to shim 9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4" address="unix:///run/containerd/s/44274dd12d8f19bc325d729bf978cf4243f60b74fdb411206bb3725979010ebb" protocol=ttrpc version=3 Mar 13 
00:47:09.340337 systemd[1]: Started cri-containerd-9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4.scope - libcontainer container 9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4. Mar 13 00:47:09.450452 containerd[1581]: time="2026-03-13T00:47:09.449748528Z" level=info msg="StartContainer for \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\" returns successfully" Mar 13 00:47:09.451657 systemd[1]: cri-containerd-9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4.scope: Deactivated successfully. Mar 13 00:47:09.454454 containerd[1581]: time="2026-03-13T00:47:09.454344591Z" level=info msg="received container exit event container_id:\"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\" id:\"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\" pid:3350 exited_at:{seconds:1773362829 nanos:453145380}" Mar 13 00:47:09.512199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4-rootfs.mount: Deactivated successfully. 
Mar 13 00:47:10.203795 kubelet[2756]: E0313 00:47:10.203580 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:10.203795 kubelet[2756]: E0313 00:47:10.203600 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:10.213574 containerd[1581]: time="2026-03-13T00:47:10.213471512Z" level=info msg="CreateContainer within sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 13 00:47:10.227858 kubelet[2756]: I0313 00:47:10.227699 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-8nnbl" podStartSLOduration=2.787503851 podStartE2EDuration="13.227681126s" podCreationTimestamp="2026-03-13 00:46:57 +0000 UTC" firstStartedPulling="2026-03-13 00:46:58.324570726 +0000 UTC m=+6.382628235" lastFinishedPulling="2026-03-13 00:47:08.764748 +0000 UTC m=+16.822805510" observedRunningTime="2026-03-13 00:47:09.286329977 +0000 UTC m=+17.344387497" watchObservedRunningTime="2026-03-13 00:47:10.227681126 +0000 UTC m=+18.285738637" Mar 13 00:47:10.236123 containerd[1581]: time="2026-03-13T00:47:10.234423538Z" level=info msg="Container 235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:47:10.245561 containerd[1581]: time="2026-03-13T00:47:10.245465622Z" level=info msg="CreateContainer within sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\"" Mar 13 00:47:10.246482 containerd[1581]: time="2026-03-13T00:47:10.246413662Z" 
level=info msg="StartContainer for \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\"" Mar 13 00:47:10.247612 containerd[1581]: time="2026-03-13T00:47:10.247428281Z" level=info msg="connecting to shim 235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6" address="unix:///run/containerd/s/44274dd12d8f19bc325d729bf978cf4243f60b74fdb411206bb3725979010ebb" protocol=ttrpc version=3 Mar 13 00:47:10.282307 systemd[1]: Started cri-containerd-235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6.scope - libcontainer container 235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6. Mar 13 00:47:10.324536 systemd[1]: cri-containerd-235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6.scope: Deactivated successfully. Mar 13 00:47:10.326352 containerd[1581]: time="2026-03-13T00:47:10.326311329Z" level=info msg="received container exit event container_id:\"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\" id:\"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\" pid:3390 exited_at:{seconds:1773362830 nanos:324786197}" Mar 13 00:47:10.328670 containerd[1581]: time="2026-03-13T00:47:10.328606173Z" level=info msg="StartContainer for \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\" returns successfully" Mar 13 00:47:10.363380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6-rootfs.mount: Deactivated successfully. 
Mar 13 00:47:11.211943 kubelet[2756]: E0313 00:47:11.211807 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:11.218193 containerd[1581]: time="2026-03-13T00:47:11.218019736Z" level=info msg="CreateContainer within sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 13 00:47:11.234227 containerd[1581]: time="2026-03-13T00:47:11.232822431Z" level=info msg="Container 19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:47:11.246872 containerd[1581]: time="2026-03-13T00:47:11.246689889Z" level=info msg="CreateContainer within sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\"" Mar 13 00:47:11.248493 containerd[1581]: time="2026-03-13T00:47:11.248356911Z" level=info msg="StartContainer for \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\"" Mar 13 00:47:11.250588 containerd[1581]: time="2026-03-13T00:47:11.250440017Z" level=info msg="connecting to shim 19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8" address="unix:///run/containerd/s/44274dd12d8f19bc325d729bf978cf4243f60b74fdb411206bb3725979010ebb" protocol=ttrpc version=3 Mar 13 00:47:11.274626 systemd[1]: Started cri-containerd-19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8.scope - libcontainer container 19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8. 
Mar 13 00:47:11.351479 containerd[1581]: time="2026-03-13T00:47:11.351301941Z" level=info msg="StartContainer for \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\" returns successfully" Mar 13 00:47:11.513681 kubelet[2756]: I0313 00:47:11.513547 2756 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 00:47:11.586817 systemd[1]: Created slice kubepods-burstable-pod27f32348_14bf_458f_878e_5623ef850bfd.slice - libcontainer container kubepods-burstable-pod27f32348_14bf_458f_878e_5623ef850bfd.slice. Mar 13 00:47:11.605532 systemd[1]: Created slice kubepods-burstable-pod5a03fef1_6dbb_4747_82e1_7bb7ae5306cb.slice - libcontainer container kubepods-burstable-pod5a03fef1_6dbb_4747_82e1_7bb7ae5306cb.slice. Mar 13 00:47:11.619287 kubelet[2756]: I0313 00:47:11.619171 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qck55\" (UniqueName: \"kubernetes.io/projected/27f32348-14bf-458f-878e-5623ef850bfd-kube-api-access-qck55\") pod \"coredns-66bc5c9577-gtp27\" (UID: \"27f32348-14bf-458f-878e-5623ef850bfd\") " pod="kube-system/coredns-66bc5c9577-gtp27" Mar 13 00:47:11.619287 kubelet[2756]: I0313 00:47:11.619210 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27f32348-14bf-458f-878e-5623ef850bfd-config-volume\") pod \"coredns-66bc5c9577-gtp27\" (UID: \"27f32348-14bf-458f-878e-5623ef850bfd\") " pod="kube-system/coredns-66bc5c9577-gtp27" Mar 13 00:47:11.619287 kubelet[2756]: I0313 00:47:11.619226 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwks\" (UniqueName: \"kubernetes.io/projected/5a03fef1-6dbb-4747-82e1-7bb7ae5306cb-kube-api-access-jdwks\") pod \"coredns-66bc5c9577-kx8nc\" (UID: \"5a03fef1-6dbb-4747-82e1-7bb7ae5306cb\") " pod="kube-system/coredns-66bc5c9577-kx8nc" Mar 13 
00:47:11.619287 kubelet[2756]: I0313 00:47:11.619242 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a03fef1-6dbb-4747-82e1-7bb7ae5306cb-config-volume\") pod \"coredns-66bc5c9577-kx8nc\" (UID: \"5a03fef1-6dbb-4747-82e1-7bb7ae5306cb\") " pod="kube-system/coredns-66bc5c9577-kx8nc" Mar 13 00:47:11.905291 kubelet[2756]: E0313 00:47:11.905158 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:11.906770 containerd[1581]: time="2026-03-13T00:47:11.906715467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gtp27,Uid:27f32348-14bf-458f-878e-5623ef850bfd,Namespace:kube-system,Attempt:0,}" Mar 13 00:47:11.913650 kubelet[2756]: E0313 00:47:11.913570 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:11.918568 containerd[1581]: time="2026-03-13T00:47:11.918401976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kx8nc,Uid:5a03fef1-6dbb-4747-82e1-7bb7ae5306cb,Namespace:kube-system,Attempt:0,}" Mar 13 00:47:12.224262 kubelet[2756]: E0313 00:47:12.223574 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:13.227216 kubelet[2756]: E0313 00:47:13.227016 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:13.650417 systemd-networkd[1469]: cilium_host: Link UP Mar 13 00:47:13.650686 systemd-networkd[1469]: cilium_net: Link UP Mar 13 00:47:13.650990 systemd-networkd[1469]: 
cilium_net: Gained carrier Mar 13 00:47:13.655695 systemd-networkd[1469]: cilium_host: Gained carrier Mar 13 00:47:13.801222 systemd-networkd[1469]: cilium_vxlan: Link UP Mar 13 00:47:13.801600 systemd-networkd[1469]: cilium_vxlan: Gained carrier Mar 13 00:47:14.078463 kernel: NET: Registered PF_ALG protocol family Mar 13 00:47:14.192421 systemd-networkd[1469]: cilium_net: Gained IPv6LL Mar 13 00:47:14.230209 kubelet[2756]: E0313 00:47:14.230132 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:14.513363 systemd-networkd[1469]: cilium_host: Gained IPv6LL Mar 13 00:47:15.002478 systemd-networkd[1469]: lxc_health: Link UP Mar 13 00:47:15.002948 systemd-networkd[1469]: lxc_health: Gained carrier Mar 13 00:47:15.408368 systemd-networkd[1469]: cilium_vxlan: Gained IPv6LL Mar 13 00:47:15.463202 systemd-networkd[1469]: lxc4f3a46078264: Link UP Mar 13 00:47:15.473120 kernel: eth0: renamed from tmp744b2 Mar 13 00:47:15.475236 systemd-networkd[1469]: lxc4f3a46078264: Gained carrier Mar 13 00:47:15.491260 systemd-networkd[1469]: lxc1bfe5f7a7f92: Link UP Mar 13 00:47:15.512633 kernel: eth0: renamed from tmp78c5d Mar 13 00:47:15.517001 systemd-networkd[1469]: lxc1bfe5f7a7f92: Gained carrier Mar 13 00:47:15.748806 kubelet[2756]: E0313 00:47:15.748711 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:15.782261 kubelet[2756]: I0313 00:47:15.781967 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s88rx" podStartSLOduration=9.452504644 podStartE2EDuration="18.781953232s" podCreationTimestamp="2026-03-13 00:46:57 +0000 UTC" firstStartedPulling="2026-03-13 00:46:57.890804631 +0000 UTC m=+5.948862141" lastFinishedPulling="2026-03-13 00:47:07.220253219 +0000 UTC 
m=+15.278310729" observedRunningTime="2026-03-13 00:47:12.250760681 +0000 UTC m=+20.308818211" watchObservedRunningTime="2026-03-13 00:47:15.781953232 +0000 UTC m=+23.840010741" Mar 13 00:47:16.234929 kubelet[2756]: E0313 00:47:16.234860 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:16.624293 systemd-networkd[1469]: lxc4f3a46078264: Gained IPv6LL Mar 13 00:47:16.752325 systemd-networkd[1469]: lxc_health: Gained IPv6LL Mar 13 00:47:16.816425 systemd-networkd[1469]: lxc1bfe5f7a7f92: Gained IPv6LL Mar 13 00:47:17.237574 kubelet[2756]: E0313 00:47:17.237519 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:18.952911 containerd[1581]: time="2026-03-13T00:47:18.952859257Z" level=info msg="connecting to shim 744b29c8271de0c7245cacbe3905189b203eedaec27e81e5655136aa78c4a61a" address="unix:///run/containerd/s/395f2488eac243fc32197f71b4e25600250a728fe9b133207d7e51d5c9f3d82e" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:47:18.954161 containerd[1581]: time="2026-03-13T00:47:18.953997895Z" level=info msg="connecting to shim 78c5dc250a89f4cd196b989f86911f7e9819a14e609a98ce365ea5ba13a3f898" address="unix:///run/containerd/s/98ac516b8d3a4a37a129cf903407f4004f7e603a295bc6fd42d57a5d4dbcffbb" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:47:18.988202 systemd[1]: Started cri-containerd-744b29c8271de0c7245cacbe3905189b203eedaec27e81e5655136aa78c4a61a.scope - libcontainer container 744b29c8271de0c7245cacbe3905189b203eedaec27e81e5655136aa78c4a61a. Mar 13 00:47:18.991967 systemd[1]: Started cri-containerd-78c5dc250a89f4cd196b989f86911f7e9819a14e609a98ce365ea5ba13a3f898.scope - libcontainer container 78c5dc250a89f4cd196b989f86911f7e9819a14e609a98ce365ea5ba13a3f898. 
Mar 13 00:47:19.008291 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:47:19.010243 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:47:19.056548 containerd[1581]: time="2026-03-13T00:47:19.056490374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kx8nc,Uid:5a03fef1-6dbb-4747-82e1-7bb7ae5306cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"78c5dc250a89f4cd196b989f86911f7e9819a14e609a98ce365ea5ba13a3f898\"" Mar 13 00:47:19.057849 kubelet[2756]: E0313 00:47:19.057659 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:19.059956 containerd[1581]: time="2026-03-13T00:47:19.059874198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gtp27,Uid:27f32348-14bf-458f-878e-5623ef850bfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"744b29c8271de0c7245cacbe3905189b203eedaec27e81e5655136aa78c4a61a\"" Mar 13 00:47:19.062106 kubelet[2756]: E0313 00:47:19.061539 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:19.064268 containerd[1581]: time="2026-03-13T00:47:19.063913514Z" level=info msg="CreateContainer within sandbox \"78c5dc250a89f4cd196b989f86911f7e9819a14e609a98ce365ea5ba13a3f898\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:47:19.067643 containerd[1581]: time="2026-03-13T00:47:19.067557153Z" level=info msg="CreateContainer within sandbox \"744b29c8271de0c7245cacbe3905189b203eedaec27e81e5655136aa78c4a61a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:47:19.091699 containerd[1581]: 
time="2026-03-13T00:47:19.091606302Z" level=info msg="Container 30143a811c52a56bcc820999790f72c70935697a816deb13721ec9665da3de7c: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:47:19.092664 containerd[1581]: time="2026-03-13T00:47:19.092584762Z" level=info msg="Container de6b5ed3815eb10011bc48c85580d6ba27922941c8b40ba75523293a3576793b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:47:19.107245 containerd[1581]: time="2026-03-13T00:47:19.107196285Z" level=info msg="CreateContainer within sandbox \"744b29c8271de0c7245cacbe3905189b203eedaec27e81e5655136aa78c4a61a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"de6b5ed3815eb10011bc48c85580d6ba27922941c8b40ba75523293a3576793b\"" Mar 13 00:47:19.107973 containerd[1581]: time="2026-03-13T00:47:19.107911869Z" level=info msg="StartContainer for \"de6b5ed3815eb10011bc48c85580d6ba27922941c8b40ba75523293a3576793b\"" Mar 13 00:47:19.109673 containerd[1581]: time="2026-03-13T00:47:19.109605621Z" level=info msg="connecting to shim de6b5ed3815eb10011bc48c85580d6ba27922941c8b40ba75523293a3576793b" address="unix:///run/containerd/s/395f2488eac243fc32197f71b4e25600250a728fe9b133207d7e51d5c9f3d82e" protocol=ttrpc version=3 Mar 13 00:47:19.130650 containerd[1581]: time="2026-03-13T00:47:19.130573949Z" level=info msg="CreateContainer within sandbox \"78c5dc250a89f4cd196b989f86911f7e9819a14e609a98ce365ea5ba13a3f898\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30143a811c52a56bcc820999790f72c70935697a816deb13721ec9665da3de7c\"" Mar 13 00:47:19.132631 containerd[1581]: time="2026-03-13T00:47:19.132478004Z" level=info msg="StartContainer for \"30143a811c52a56bcc820999790f72c70935697a816deb13721ec9665da3de7c\"" Mar 13 00:47:19.133852 containerd[1581]: time="2026-03-13T00:47:19.133732593Z" level=info msg="connecting to shim 30143a811c52a56bcc820999790f72c70935697a816deb13721ec9665da3de7c" 
address="unix:///run/containerd/s/98ac516b8d3a4a37a129cf903407f4004f7e603a295bc6fd42d57a5d4dbcffbb" protocol=ttrpc version=3 Mar 13 00:47:19.136258 systemd[1]: Started cri-containerd-de6b5ed3815eb10011bc48c85580d6ba27922941c8b40ba75523293a3576793b.scope - libcontainer container de6b5ed3815eb10011bc48c85580d6ba27922941c8b40ba75523293a3576793b. Mar 13 00:47:19.163521 systemd[1]: Started cri-containerd-30143a811c52a56bcc820999790f72c70935697a816deb13721ec9665da3de7c.scope - libcontainer container 30143a811c52a56bcc820999790f72c70935697a816deb13721ec9665da3de7c. Mar 13 00:47:19.190249 containerd[1581]: time="2026-03-13T00:47:19.190009197Z" level=info msg="StartContainer for \"de6b5ed3815eb10011bc48c85580d6ba27922941c8b40ba75523293a3576793b\" returns successfully" Mar 13 00:47:19.214624 containerd[1581]: time="2026-03-13T00:47:19.214420827Z" level=info msg="StartContainer for \"30143a811c52a56bcc820999790f72c70935697a816deb13721ec9665da3de7c\" returns successfully" Mar 13 00:47:19.248086 kubelet[2756]: E0313 00:47:19.247937 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:19.253456 kubelet[2756]: E0313 00:47:19.253335 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:19.273580 kubelet[2756]: I0313 00:47:19.273363 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kx8nc" podStartSLOduration=22.273347737 podStartE2EDuration="22.273347737s" podCreationTimestamp="2026-03-13 00:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:47:19.265506304 +0000 UTC m=+27.323563815" watchObservedRunningTime="2026-03-13 00:47:19.273347737 +0000 UTC 
m=+27.331405247" Mar 13 00:47:19.288480 kubelet[2756]: I0313 00:47:19.288421 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gtp27" podStartSLOduration=22.288406418 podStartE2EDuration="22.288406418s" podCreationTimestamp="2026-03-13 00:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:47:19.287499105 +0000 UTC m=+27.345556646" watchObservedRunningTime="2026-03-13 00:47:19.288406418 +0000 UTC m=+27.346463928" Mar 13 00:47:20.262005 kubelet[2756]: E0313 00:47:20.261904 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:20.265295 kubelet[2756]: E0313 00:47:20.265128 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:21.264937 kubelet[2756]: E0313 00:47:21.264852 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:21.265471 kubelet[2756]: E0313 00:47:21.265214 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:47:22.669641 systemd[1]: Started sshd@7-10.0.0.109:22-10.0.0.1:42732.service - OpenSSH per-connection server daemon (10.0.0.1:42732). 
Mar 13 00:47:22.738981 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 42732 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:47:22.740883 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:22.747107 systemd-logind[1570]: New session 8 of user core. Mar 13 00:47:22.763255 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:47:22.854174 sshd[4110]: Connection closed by 10.0.0.1 port 42732 Mar 13 00:47:22.854609 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:22.859484 systemd[1]: sshd@7-10.0.0.109:22-10.0.0.1:42732.service: Deactivated successfully. Mar 13 00:47:22.861645 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:47:22.862745 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:47:22.864336 systemd-logind[1570]: Removed session 8. Mar 13 00:47:27.871177 systemd[1]: Started sshd@8-10.0.0.109:22-10.0.0.1:42740.service - OpenSSH per-connection server daemon (10.0.0.1:42740). Mar 13 00:47:27.936603 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 42740 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:47:27.938552 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:27.948418 systemd-logind[1570]: New session 9 of user core. Mar 13 00:47:27.961320 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:47:28.057594 sshd[4129]: Connection closed by 10.0.0.1 port 42740 Mar 13 00:47:28.058139 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:28.062884 systemd[1]: sshd@8-10.0.0.109:22-10.0.0.1:42740.service: Deactivated successfully. Mar 13 00:47:28.065323 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:47:28.067665 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit. 
Mar 13 00:47:28.069816 systemd-logind[1570]: Removed session 9. Mar 13 00:47:33.078548 systemd[1]: Started sshd@9-10.0.0.109:22-10.0.0.1:37302.service - OpenSSH per-connection server daemon (10.0.0.1:37302). Mar 13 00:47:33.149528 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 37302 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:47:33.151519 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:33.157841 systemd-logind[1570]: New session 10 of user core. Mar 13 00:47:33.168403 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:47:33.279352 sshd[4148]: Connection closed by 10.0.0.1 port 37302 Mar 13 00:47:33.281892 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:33.287458 systemd[1]: sshd@9-10.0.0.109:22-10.0.0.1:37302.service: Deactivated successfully. Mar 13 00:47:33.289734 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:47:33.291401 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:47:33.293483 systemd-logind[1570]: Removed session 10. Mar 13 00:47:38.296119 systemd[1]: Started sshd@10-10.0.0.109:22-10.0.0.1:37304.service - OpenSSH per-connection server daemon (10.0.0.1:37304). Mar 13 00:47:38.362666 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 37304 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:47:38.364456 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:38.371239 systemd-logind[1570]: New session 11 of user core. Mar 13 00:47:38.384336 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 13 00:47:38.485119 sshd[4165]: Connection closed by 10.0.0.1 port 37304 Mar 13 00:47:38.485530 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:38.489959 systemd[1]: sshd@10-10.0.0.109:22-10.0.0.1:37304.service: Deactivated successfully. Mar 13 00:47:38.493179 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:47:38.496871 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:47:38.498424 systemd-logind[1570]: Removed session 11. Mar 13 00:47:43.499954 systemd[1]: Started sshd@11-10.0.0.109:22-10.0.0.1:33540.service - OpenSSH per-connection server daemon (10.0.0.1:33540). Mar 13 00:47:43.601985 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 33540 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:47:43.604425 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:43.611597 systemd-logind[1570]: New session 12 of user core. Mar 13 00:47:43.621262 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:47:43.757796 sshd[4182]: Connection closed by 10.0.0.1 port 33540 Mar 13 00:47:43.758920 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:43.767858 systemd[1]: sshd@11-10.0.0.109:22-10.0.0.1:33540.service: Deactivated successfully. Mar 13 00:47:43.769976 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:47:43.771297 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:47:43.773875 systemd[1]: Started sshd@12-10.0.0.109:22-10.0.0.1:33544.service - OpenSSH per-connection server daemon (10.0.0.1:33544). Mar 13 00:47:43.775686 systemd-logind[1570]: Removed session 12. 
Mar 13 00:47:43.850253 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 33544 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:47:43.852501 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:43.859657 systemd-logind[1570]: New session 13 of user core. Mar 13 00:47:43.874344 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:47:44.027676 sshd[4199]: Connection closed by 10.0.0.1 port 33544 Mar 13 00:47:44.028938 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:44.046387 systemd[1]: sshd@12-10.0.0.109:22-10.0.0.1:33544.service: Deactivated successfully. Mar 13 00:47:44.055012 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:47:44.059213 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:47:44.067448 systemd[1]: Started sshd@13-10.0.0.109:22-10.0.0.1:33546.service - OpenSSH per-connection server daemon (10.0.0.1:33546). Mar 13 00:47:44.072282 systemd-logind[1570]: Removed session 13. Mar 13 00:47:44.144638 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 33546 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:47:44.149639 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:44.155377 systemd-logind[1570]: New session 14 of user core. Mar 13 00:47:44.166214 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:47:44.259710 sshd[4213]: Connection closed by 10.0.0.1 port 33546 Mar 13 00:47:44.260153 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:44.264840 systemd[1]: sshd@13-10.0.0.109:22-10.0.0.1:33546.service: Deactivated successfully. Mar 13 00:47:44.266986 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:47:44.268292 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit. 
Mar 13 00:47:44.270171 systemd-logind[1570]: Removed session 14. Mar 13 00:47:49.277811 systemd[1]: Started sshd@14-10.0.0.109:22-10.0.0.1:53808.service - OpenSSH per-connection server daemon (10.0.0.1:53808). Mar 13 00:47:49.338670 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 53808 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:47:49.340095 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:49.346354 systemd-logind[1570]: New session 15 of user core. Mar 13 00:47:49.357267 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:47:49.438636 sshd[4229]: Connection closed by 10.0.0.1 port 53808 Mar 13 00:47:49.438970 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:49.445301 systemd[1]: sshd@14-10.0.0.109:22-10.0.0.1:53808.service: Deactivated successfully. Mar 13 00:47:49.447342 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:47:49.448494 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:47:49.450018 systemd-logind[1570]: Removed session 15. Mar 13 00:47:54.466653 systemd[1]: Started sshd@15-10.0.0.109:22-10.0.0.1:53814.service - OpenSSH per-connection server daemon (10.0.0.1:53814). Mar 13 00:47:54.533920 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 53814 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:47:54.535568 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:54.542679 systemd-logind[1570]: New session 16 of user core. Mar 13 00:47:54.556216 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 13 00:47:54.647909 sshd[4248]: Connection closed by 10.0.0.1 port 53814 Mar 13 00:47:54.648373 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:54.652988 systemd[1]: sshd@15-10.0.0.109:22-10.0.0.1:53814.service: Deactivated successfully. Mar 13 00:47:54.655483 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:47:54.656824 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:47:54.658573 systemd-logind[1570]: Removed session 16. Mar 13 00:47:59.662196 systemd[1]: Started sshd@16-10.0.0.109:22-10.0.0.1:33504.service - OpenSSH per-connection server daemon (10.0.0.1:33504). Mar 13 00:47:59.725559 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 33504 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:47:59.726864 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:47:59.733085 systemd-logind[1570]: New session 17 of user core. Mar 13 00:47:59.746344 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:47:59.830314 sshd[4266]: Connection closed by 10.0.0.1 port 33504 Mar 13 00:47:59.830863 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Mar 13 00:47:59.838953 systemd[1]: sshd@16-10.0.0.109:22-10.0.0.1:33504.service: Deactivated successfully. Mar 13 00:47:59.841540 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:47:59.843975 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:47:59.847139 systemd[1]: Started sshd@17-10.0.0.109:22-10.0.0.1:33506.service - OpenSSH per-connection server daemon (10.0.0.1:33506). Mar 13 00:47:59.848572 systemd-logind[1570]: Removed session 17. 
Mar 13 00:47:59.916160 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 33506 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:47:59.917690 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:47:59.924330 systemd-logind[1570]: New session 18 of user core.
Mar 13 00:47:59.940328 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 13 00:48:00.164756 sshd[4282]: Connection closed by 10.0.0.1 port 33506
Mar 13 00:48:00.165632 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
Mar 13 00:48:00.178518 systemd[1]: sshd@17-10.0.0.109:22-10.0.0.1:33506.service: Deactivated successfully.
Mar 13 00:48:00.180897 systemd[1]: session-18.scope: Deactivated successfully.
Mar 13 00:48:00.182280 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit.
Mar 13 00:48:00.185288 systemd[1]: Started sshd@18-10.0.0.109:22-10.0.0.1:33514.service - OpenSSH per-connection server daemon (10.0.0.1:33514).
Mar 13 00:48:00.187384 systemd-logind[1570]: Removed session 18.
Mar 13 00:48:00.268011 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 33514 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:48:00.270206 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:48:00.277096 systemd-logind[1570]: New session 19 of user core.
Mar 13 00:48:00.284311 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 13 00:48:01.030106 sshd[4297]: Connection closed by 10.0.0.1 port 33514
Mar 13 00:48:01.031525 sshd-session[4294]: pam_unix(sshd:session): session closed for user core
Mar 13 00:48:01.039648 systemd[1]: sshd@18-10.0.0.109:22-10.0.0.1:33514.service: Deactivated successfully.
Mar 13 00:48:01.042676 systemd[1]: session-19.scope: Deactivated successfully.
Mar 13 00:48:01.044946 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit.
Mar 13 00:48:01.051317 systemd[1]: Started sshd@19-10.0.0.109:22-10.0.0.1:33516.service - OpenSSH per-connection server daemon (10.0.0.1:33516).
Mar 13 00:48:01.054414 systemd-logind[1570]: Removed session 19.
Mar 13 00:48:01.115075 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 33516 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:48:01.117420 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:48:01.123830 systemd-logind[1570]: New session 20 of user core.
Mar 13 00:48:01.129290 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 13 00:48:01.370680 sshd[4318]: Connection closed by 10.0.0.1 port 33516
Mar 13 00:48:01.370153 sshd-session[4315]: pam_unix(sshd:session): session closed for user core
Mar 13 00:48:01.381576 systemd[1]: sshd@19-10.0.0.109:22-10.0.0.1:33516.service: Deactivated successfully.
Mar 13 00:48:01.384741 systemd[1]: session-20.scope: Deactivated successfully.
Mar 13 00:48:01.386018 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit.
Mar 13 00:48:01.389319 systemd[1]: Started sshd@20-10.0.0.109:22-10.0.0.1:33526.service - OpenSSH per-connection server daemon (10.0.0.1:33526).
Mar 13 00:48:01.390892 systemd-logind[1570]: Removed session 20.
Mar 13 00:48:01.449123 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 33526 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:48:01.450607 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:48:01.459023 systemd-logind[1570]: New session 21 of user core.
Mar 13 00:48:01.468277 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 13 00:48:01.562240 sshd[4332]: Connection closed by 10.0.0.1 port 33526
Mar 13 00:48:01.562644 sshd-session[4329]: pam_unix(sshd:session): session closed for user core
Mar 13 00:48:01.568861 systemd[1]: sshd@20-10.0.0.109:22-10.0.0.1:33526.service: Deactivated successfully.
Mar 13 00:48:01.571281 systemd[1]: session-21.scope: Deactivated successfully.
Mar 13 00:48:01.572707 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit.
Mar 13 00:48:01.575304 systemd-logind[1570]: Removed session 21.
Mar 13 00:48:06.581943 systemd[1]: Started sshd@21-10.0.0.109:22-10.0.0.1:33536.service - OpenSSH per-connection server daemon (10.0.0.1:33536).
Mar 13 00:48:06.660871 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 33536 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:48:06.662724 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:48:06.669598 systemd-logind[1570]: New session 22 of user core.
Mar 13 00:48:06.681328 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 13 00:48:06.763999 sshd[4351]: Connection closed by 10.0.0.1 port 33536
Mar 13 00:48:06.764524 sshd-session[4348]: pam_unix(sshd:session): session closed for user core
Mar 13 00:48:06.768533 systemd[1]: sshd@21-10.0.0.109:22-10.0.0.1:33536.service: Deactivated successfully.
Mar 13 00:48:06.770986 systemd[1]: session-22.scope: Deactivated successfully.
Mar 13 00:48:06.773211 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit.
Mar 13 00:48:06.774986 systemd-logind[1570]: Removed session 22.
Mar 13 00:48:11.780217 systemd[1]: Started sshd@22-10.0.0.109:22-10.0.0.1:34714.service - OpenSSH per-connection server daemon (10.0.0.1:34714).
Mar 13 00:48:11.846439 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 34714 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:48:11.849013 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:48:11.857161 systemd-logind[1570]: New session 23 of user core.
Mar 13 00:48:11.869257 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 00:48:11.961892 sshd[4370]: Connection closed by 10.0.0.1 port 34714
Mar 13 00:48:11.962462 sshd-session[4367]: pam_unix(sshd:session): session closed for user core
Mar 13 00:48:11.967408 systemd[1]: sshd@22-10.0.0.109:22-10.0.0.1:34714.service: Deactivated successfully.
Mar 13 00:48:11.970007 systemd[1]: session-23.scope: Deactivated successfully.
Mar 13 00:48:11.971400 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit.
Mar 13 00:48:11.973745 systemd-logind[1570]: Removed session 23.
Mar 13 00:48:16.101436 kubelet[2756]: E0313 00:48:16.101286 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:16.981343 systemd[1]: Started sshd@23-10.0.0.109:22-10.0.0.1:34728.service - OpenSSH per-connection server daemon (10.0.0.1:34728).
Mar 13 00:48:17.055803 sshd[4384]: Accepted publickey for core from 10.0.0.1 port 34728 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:48:17.063725 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:48:17.073834 systemd-logind[1570]: New session 24 of user core.
Mar 13 00:48:17.084452 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 13 00:48:17.101531 kubelet[2756]: E0313 00:48:17.101415 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:17.201201 sshd[4387]: Connection closed by 10.0.0.1 port 34728
Mar 13 00:48:17.201681 sshd-session[4384]: pam_unix(sshd:session): session closed for user core
Mar 13 00:48:17.217824 systemd[1]: sshd@23-10.0.0.109:22-10.0.0.1:34728.service: Deactivated successfully.
Mar 13 00:48:17.220798 systemd[1]: session-24.scope: Deactivated successfully.
Mar 13 00:48:17.222693 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit.
Mar 13 00:48:17.226803 systemd[1]: Started sshd@24-10.0.0.109:22-10.0.0.1:34732.service - OpenSSH per-connection server daemon (10.0.0.1:34732).
Mar 13 00:48:17.228489 systemd-logind[1570]: Removed session 24.
Mar 13 00:48:17.309014 sshd[4400]: Accepted publickey for core from 10.0.0.1 port 34732 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:48:17.311010 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:48:17.317971 systemd-logind[1570]: New session 25 of user core.
Mar 13 00:48:17.331424 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 13 00:48:18.723575 containerd[1581]: time="2026-03-13T00:48:18.723510160Z" level=info msg="StopContainer for \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\" with timeout 30 (s)"
Mar 13 00:48:18.754635 containerd[1581]: time="2026-03-13T00:48:18.754284346Z" level=info msg="Stop container \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\" with signal terminated"
Mar 13 00:48:18.797294 systemd[1]: cri-containerd-ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4.scope: Deactivated successfully.
Mar 13 00:48:18.800840 containerd[1581]: time="2026-03-13T00:48:18.800644505Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 13 00:48:18.806424 containerd[1581]: time="2026-03-13T00:48:18.806363494Z" level=info msg="received container exit event container_id:\"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\" id:\"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\" pid:3315 exited_at:{seconds:1773362898 nanos:804409140}"
Mar 13 00:48:18.807593 containerd[1581]: time="2026-03-13T00:48:18.807427472Z" level=info msg="StopContainer for \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\" with timeout 2 (s)"
Mar 13 00:48:18.808634 containerd[1581]: time="2026-03-13T00:48:18.808570643Z" level=info msg="Stop container \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\" with signal terminated"
Mar 13 00:48:18.824376 systemd-networkd[1469]: lxc_health: Link DOWN
Mar 13 00:48:18.824423 systemd-networkd[1469]: lxc_health: Lost carrier
Mar 13 00:48:18.869179 systemd[1]: cri-containerd-19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8.scope: Deactivated successfully.
Mar 13 00:48:18.869664 systemd[1]: cri-containerd-19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8.scope: Consumed 7.701s CPU time, 126.1M memory peak, 196K read from disk, 13.3M written to disk.
Mar 13 00:48:18.872455 containerd[1581]: time="2026-03-13T00:48:18.872277611Z" level=info msg="received container exit event container_id:\"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\" id:\"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\" pid:3428 exited_at:{seconds:1773362898 nanos:871716116}"
Mar 13 00:48:18.897539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4-rootfs.mount: Deactivated successfully.
Mar 13 00:48:18.922565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8-rootfs.mount: Deactivated successfully.
Mar 13 00:48:18.927429 containerd[1581]: time="2026-03-13T00:48:18.927374282Z" level=info msg="StopContainer for \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\" returns successfully"
Mar 13 00:48:18.933413 containerd[1581]: time="2026-03-13T00:48:18.933207590Z" level=info msg="StopPodSandbox for \"eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59\""
Mar 13 00:48:18.935255 containerd[1581]: time="2026-03-13T00:48:18.935011648Z" level=info msg="Container to stop \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 00:48:18.936099 containerd[1581]: time="2026-03-13T00:48:18.935972446Z" level=info msg="StopContainer for \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\" returns successfully"
Mar 13 00:48:18.938246 containerd[1581]: time="2026-03-13T00:48:18.937685851Z" level=info msg="StopPodSandbox for \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\""
Mar 13 00:48:18.938246 containerd[1581]: time="2026-03-13T00:48:18.937758857Z" level=info msg="Container to stop \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 00:48:18.938246 containerd[1581]: time="2026-03-13T00:48:18.937771571Z" level=info msg="Container to stop \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 00:48:18.938246 containerd[1581]: time="2026-03-13T00:48:18.937785948Z" level=info msg="Container to stop \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 00:48:18.938246 containerd[1581]: time="2026-03-13T00:48:18.937794303Z" level=info msg="Container to stop \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 00:48:18.938246 containerd[1581]: time="2026-03-13T00:48:18.937802398Z" level=info msg="Container to stop \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 00:48:18.958629 systemd[1]: cri-containerd-eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59.scope: Deactivated successfully.
Mar 13 00:48:18.961724 systemd[1]: cri-containerd-43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433.scope: Deactivated successfully.
Mar 13 00:48:18.967512 containerd[1581]: time="2026-03-13T00:48:18.967281951Z" level=info msg="received sandbox exit event container_id:\"eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59\" id:\"eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59\" exit_status:137 exited_at:{seconds:1773362898 nanos:961858410}" monitor_name=podsandbox
Mar 13 00:48:18.972434 containerd[1581]: time="2026-03-13T00:48:18.972387110Z" level=info msg="received sandbox exit event container_id:\"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" id:\"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" exit_status:137 exited_at:{seconds:1773362898 nanos:971523355}" monitor_name=podsandbox
Mar 13 00:48:19.013334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59-rootfs.mount: Deactivated successfully.
Mar 13 00:48:19.019711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433-rootfs.mount: Deactivated successfully.
Mar 13 00:48:19.026799 containerd[1581]: time="2026-03-13T00:48:19.026627081Z" level=info msg="shim disconnected" id=eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59 namespace=k8s.io
Mar 13 00:48:19.026799 containerd[1581]: time="2026-03-13T00:48:19.026663749Z" level=warning msg="cleaning up after shim disconnected" id=eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59 namespace=k8s.io
Mar 13 00:48:19.039277 containerd[1581]: time="2026-03-13T00:48:19.026674569Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 13 00:48:19.039763 containerd[1581]: time="2026-03-13T00:48:19.027294573Z" level=info msg="shim disconnected" id=43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433 namespace=k8s.io
Mar 13 00:48:19.039763 containerd[1581]: time="2026-03-13T00:48:19.039476202Z" level=warning msg="cleaning up after shim disconnected" id=43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433 namespace=k8s.io
Mar 13 00:48:19.039763 containerd[1581]: time="2026-03-13T00:48:19.039489667Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 13 00:48:19.084005 containerd[1581]: time="2026-03-13T00:48:19.083460022Z" level=info msg="TearDown network for sandbox \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" successfully"
Mar 13 00:48:19.084005 containerd[1581]: time="2026-03-13T00:48:19.083969120Z" level=info msg="StopPodSandbox for \"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" returns successfully"
Mar 13 00:48:19.085330 containerd[1581]: time="2026-03-13T00:48:19.084292170Z" level=info msg="TearDown network for sandbox \"eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59\" successfully"
Mar 13 00:48:19.085330 containerd[1581]: time="2026-03-13T00:48:19.084311145Z" level=info msg="StopPodSandbox for \"eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59\" returns successfully"
Mar 13 00:48:19.084758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59-shm.mount: Deactivated successfully.
Mar 13 00:48:19.084984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433-shm.mount: Deactivated successfully.
Mar 13 00:48:19.115109 containerd[1581]: time="2026-03-13T00:48:19.114417660Z" level=info msg="received sandbox container exit event sandbox_id:\"43702a12c119085073290ec8d6a3bafbf9417694374740603c1e32fd94291433\" exit_status:137 exited_at:{seconds:1773362898 nanos:971523355}" monitor_name=criService
Mar 13 00:48:19.115109 containerd[1581]: time="2026-03-13T00:48:19.114665263Z" level=info msg="received sandbox container exit event sandbox_id:\"eb0634ccab80b122a25b462a062871cc237732df16313caa88b5276054133e59\" exit_status:137 exited_at:{seconds:1773362898 nanos:961858410}" monitor_name=criService
Mar 13 00:48:19.133376 kubelet[2756]: I0313 00:48:19.133339 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cni-path\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.134250 kubelet[2756]: I0313 00:48:19.133982 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l72v\" (UniqueName: \"kubernetes.io/projected/85adadd9-5ab9-406e-b45d-e48d59355591-kube-api-access-4l72v\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.134250 kubelet[2756]: I0313 00:48:19.134164 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-etc-cni-netd\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.134250 kubelet[2756]: I0313 00:48:19.134181 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-host-proc-sys-kernel\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.134250 kubelet[2756]: I0313 00:48:19.134200 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85adadd9-5ab9-406e-b45d-e48d59355591-hubble-tls\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.134250 kubelet[2756]: I0313 00:48:19.133546 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cni-path" (OuterVolumeSpecName: "cni-path") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 00:48:19.135365 kubelet[2756]: I0313 00:48:19.134262 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 00:48:19.135365 kubelet[2756]: I0313 00:48:19.134279 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 00:48:19.135365 kubelet[2756]: I0313 00:48:19.134746 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-config-path\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.135365 kubelet[2756]: I0313 00:48:19.134769 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-run\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.135365 kubelet[2756]: I0313 00:48:19.134791 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4faad7d8-0159-43ef-8a0f-0338ab29acb0-cilium-config-path\") pod \"4faad7d8-0159-43ef-8a0f-0338ab29acb0\" (UID: \"4faad7d8-0159-43ef-8a0f-0338ab29acb0\") "
Mar 13 00:48:19.135542 kubelet[2756]: I0313 00:48:19.134806 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-bpf-maps\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.135542 kubelet[2756]: I0313 00:48:19.134819 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-xtables-lock\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.135542 kubelet[2756]: I0313 00:48:19.134833 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-cgroup\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.135542 kubelet[2756]: I0313 00:48:19.134849 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/85adadd9-5ab9-406e-b45d-e48d59355591-clustermesh-secrets\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.135542 kubelet[2756]: I0313 00:48:19.134911 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-host-proc-sys-net\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.135542 kubelet[2756]: I0313 00:48:19.134928 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-hostproc\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.135764 kubelet[2756]: I0313 00:48:19.134941 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-lib-modules\") pod \"85adadd9-5ab9-406e-b45d-e48d59355591\" (UID: \"85adadd9-5ab9-406e-b45d-e48d59355591\") "
Mar 13 00:48:19.135764 kubelet[2756]: I0313 00:48:19.134956 2756 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4746\" (UniqueName: \"kubernetes.io/projected/4faad7d8-0159-43ef-8a0f-0338ab29acb0-kube-api-access-l4746\") pod \"4faad7d8-0159-43ef-8a0f-0338ab29acb0\" (UID: \"4faad7d8-0159-43ef-8a0f-0338ab29acb0\") "
Mar 13 00:48:19.135764 kubelet[2756]: I0313 00:48:19.134993 2756 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.135764 kubelet[2756]: I0313 00:48:19.135002 2756 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.135764 kubelet[2756]: I0313 00:48:19.135012 2756 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.137514 kubelet[2756]: I0313 00:48:19.137483 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 00:48:19.139336 kubelet[2756]: I0313 00:48:19.137635 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 00:48:19.139336 kubelet[2756]: I0313 00:48:19.137661 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 00:48:19.139336 kubelet[2756]: I0313 00:48:19.137694 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 00:48:19.139336 kubelet[2756]: I0313 00:48:19.137728 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-hostproc" (OuterVolumeSpecName: "hostproc") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 00:48:19.139336 kubelet[2756]: I0313 00:48:19.137743 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 00:48:19.139535 kubelet[2756]: I0313 00:48:19.137766 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 00:48:19.148787 kubelet[2756]: I0313 00:48:19.148243 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4faad7d8-0159-43ef-8a0f-0338ab29acb0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4faad7d8-0159-43ef-8a0f-0338ab29acb0" (UID: "4faad7d8-0159-43ef-8a0f-0338ab29acb0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 13 00:48:19.149293 kubelet[2756]: I0313 00:48:19.149119 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 13 00:48:19.151166 kubelet[2756]: I0313 00:48:19.150937 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4faad7d8-0159-43ef-8a0f-0338ab29acb0-kube-api-access-l4746" (OuterVolumeSpecName: "kube-api-access-l4746") pod "4faad7d8-0159-43ef-8a0f-0338ab29acb0" (UID: "4faad7d8-0159-43ef-8a0f-0338ab29acb0"). InnerVolumeSpecName "kube-api-access-l4746". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 13 00:48:19.154790 kubelet[2756]: I0313 00:48:19.154669 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85adadd9-5ab9-406e-b45d-e48d59355591-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 13 00:48:19.155207 kubelet[2756]: I0313 00:48:19.155021 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85adadd9-5ab9-406e-b45d-e48d59355591-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 13 00:48:19.156226 kubelet[2756]: I0313 00:48:19.156108 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85adadd9-5ab9-406e-b45d-e48d59355591-kube-api-access-4l72v" (OuterVolumeSpecName: "kube-api-access-4l72v") pod "85adadd9-5ab9-406e-b45d-e48d59355591" (UID: "85adadd9-5ab9-406e-b45d-e48d59355591"). InnerVolumeSpecName "kube-api-access-4l72v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 13 00:48:19.235988 kubelet[2756]: I0313 00:48:19.235571 2756 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.235988 kubelet[2756]: I0313 00:48:19.235671 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.235988 kubelet[2756]: I0313 00:48:19.235689 2756 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/85adadd9-5ab9-406e-b45d-e48d59355591-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.235988 kubelet[2756]: I0313 00:48:19.235708 2756 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.235988 kubelet[2756]: I0313 00:48:19.235724 2756 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.235988 kubelet[2756]: I0313 00:48:19.235738 2756 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.235988 kubelet[2756]: I0313 00:48:19.235750 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l4746\" (UniqueName: \"kubernetes.io/projected/4faad7d8-0159-43ef-8a0f-0338ab29acb0-kube-api-access-l4746\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.235988 kubelet[2756]: I0313 00:48:19.235764 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4l72v\" (UniqueName: \"kubernetes.io/projected/85adadd9-5ab9-406e-b45d-e48d59355591-kube-api-access-4l72v\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.236505 kubelet[2756]: I0313 00:48:19.235778 2756 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85adadd9-5ab9-406e-b45d-e48d59355591-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.236505 kubelet[2756]: I0313 00:48:19.235790 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.236505 kubelet[2756]: I0313 00:48:19.235802 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.236505 kubelet[2756]: I0313 00:48:19.235816 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4faad7d8-0159-43ef-8a0f-0338ab29acb0-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.236505 kubelet[2756]: I0313 00:48:19.235831 2756 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85adadd9-5ab9-406e-b45d-e48d59355591-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 13 00:48:19.471241 kubelet[2756]: I0313 00:48:19.471015 2756 scope.go:117] "RemoveContainer" containerID="ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4"
Mar 13 00:48:19.474140 containerd[1581]: time="2026-03-13T00:48:19.474014354Z" level=info msg="RemoveContainer for \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\""
Mar 13 00:48:19.484671 systemd[1]: Removed slice kubepods-besteffort-pod4faad7d8_0159_43ef_8a0f_0338ab29acb0.slice - libcontainer container kubepods-besteffort-pod4faad7d8_0159_43ef_8a0f_0338ab29acb0.slice.
Mar 13 00:48:19.487796 containerd[1581]: time="2026-03-13T00:48:19.487661784Z" level=info msg="RemoveContainer for \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\" returns successfully" Mar 13 00:48:19.488214 kubelet[2756]: I0313 00:48:19.488185 2756 scope.go:117] "RemoveContainer" containerID="ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4" Mar 13 00:48:19.498322 containerd[1581]: time="2026-03-13T00:48:19.488999301Z" level=error msg="ContainerStatus for \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\": not found" Mar 13 00:48:19.500800 kubelet[2756]: E0313 00:48:19.499662 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\": not found" containerID="ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4" Mar 13 00:48:19.500800 kubelet[2756]: I0313 00:48:19.499720 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4"} err="failed to get container status \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed6cf26f147d99551d0d8abbdb1f76b78361d9cbb2e8e157ca404757bea948e4\": not found" Mar 13 00:48:19.500800 kubelet[2756]: I0313 00:48:19.499773 2756 scope.go:117] "RemoveContainer" containerID="19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8" Mar 13 00:48:19.501578 systemd[1]: Removed slice kubepods-burstable-pod85adadd9_5ab9_406e_b45d_e48d59355591.slice - libcontainer container kubepods-burstable-pod85adadd9_5ab9_406e_b45d_e48d59355591.slice. 
Mar 13 00:48:19.501793 systemd[1]: kubepods-burstable-pod85adadd9_5ab9_406e_b45d_e48d59355591.slice: Consumed 7.880s CPU time, 126.4M memory peak, 208K read from disk, 13.3M written to disk. Mar 13 00:48:19.507648 containerd[1581]: time="2026-03-13T00:48:19.507604042Z" level=info msg="RemoveContainer for \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\"" Mar 13 00:48:19.518757 containerd[1581]: time="2026-03-13T00:48:19.518370100Z" level=info msg="RemoveContainer for \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\" returns successfully" Mar 13 00:48:19.519393 kubelet[2756]: I0313 00:48:19.519221 2756 scope.go:117] "RemoveContainer" containerID="235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6" Mar 13 00:48:19.524133 containerd[1581]: time="2026-03-13T00:48:19.523443342Z" level=info msg="RemoveContainer for \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\"" Mar 13 00:48:19.530837 containerd[1581]: time="2026-03-13T00:48:19.530798534Z" level=info msg="RemoveContainer for \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\" returns successfully" Mar 13 00:48:19.531640 kubelet[2756]: I0313 00:48:19.531528 2756 scope.go:117] "RemoveContainer" containerID="9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4" Mar 13 00:48:19.551723 containerd[1581]: time="2026-03-13T00:48:19.551482495Z" level=info msg="RemoveContainer for \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\"" Mar 13 00:48:19.559857 containerd[1581]: time="2026-03-13T00:48:19.559717479Z" level=info msg="RemoveContainer for \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\" returns successfully" Mar 13 00:48:19.560422 kubelet[2756]: I0313 00:48:19.560250 2756 scope.go:117] "RemoveContainer" containerID="68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa" Mar 13 00:48:19.563665 containerd[1581]: time="2026-03-13T00:48:19.563573517Z" level=info 
msg="RemoveContainer for \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\"" Mar 13 00:48:19.588261 containerd[1581]: time="2026-03-13T00:48:19.587948241Z" level=info msg="RemoveContainer for \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\" returns successfully" Mar 13 00:48:19.588597 kubelet[2756]: I0313 00:48:19.588536 2756 scope.go:117] "RemoveContainer" containerID="2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d" Mar 13 00:48:19.592342 containerd[1581]: time="2026-03-13T00:48:19.592284611Z" level=info msg="RemoveContainer for \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\"" Mar 13 00:48:19.598321 containerd[1581]: time="2026-03-13T00:48:19.598214541Z" level=info msg="RemoveContainer for \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\" returns successfully" Mar 13 00:48:19.599385 kubelet[2756]: I0313 00:48:19.599349 2756 scope.go:117] "RemoveContainer" containerID="19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8" Mar 13 00:48:19.600122 containerd[1581]: time="2026-03-13T00:48:19.599935554Z" level=error msg="ContainerStatus for \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\": not found" Mar 13 00:48:19.600321 kubelet[2756]: E0313 00:48:19.600249 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\": not found" containerID="19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8" Mar 13 00:48:19.600371 kubelet[2756]: I0313 00:48:19.600332 2756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8"} err="failed to get container status \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"19e1f373bbb9fba565ddb5785a418a7a2f6bedbca13ec40a747c08b9f95942e8\": not found" Mar 13 00:48:19.600371 kubelet[2756]: I0313 00:48:19.600362 2756 scope.go:117] "RemoveContainer" containerID="235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6" Mar 13 00:48:19.600722 containerd[1581]: time="2026-03-13T00:48:19.600631490Z" level=error msg="ContainerStatus for \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\": not found" Mar 13 00:48:19.601390 kubelet[2756]: E0313 00:48:19.601209 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\": not found" containerID="235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6" Mar 13 00:48:19.601390 kubelet[2756]: I0313 00:48:19.601367 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6"} err="failed to get container status \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"235534ad4c2cb10bbe6effd769337d848c2c898e8459f119b540c724aac2a8e6\": not found" Mar 13 00:48:19.601390 kubelet[2756]: I0313 00:48:19.601391 2756 scope.go:117] "RemoveContainer" containerID="9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4" Mar 13 00:48:19.602300 containerd[1581]: 
time="2026-03-13T00:48:19.602197875Z" level=error msg="ContainerStatus for \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\": not found" Mar 13 00:48:19.602763 kubelet[2756]: E0313 00:48:19.602631 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\": not found" containerID="9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4" Mar 13 00:48:19.602763 kubelet[2756]: I0313 00:48:19.602692 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4"} err="failed to get container status \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c0fe03ed1ec3a5787bf63ec6daa271e17f5146513b56652ea1029874feb80d4\": not found" Mar 13 00:48:19.602763 kubelet[2756]: I0313 00:48:19.602719 2756 scope.go:117] "RemoveContainer" containerID="68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa" Mar 13 00:48:19.603384 containerd[1581]: time="2026-03-13T00:48:19.603276473Z" level=error msg="ContainerStatus for \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\": not found" Mar 13 00:48:19.603581 kubelet[2756]: E0313 00:48:19.603551 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\": not 
found" containerID="68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa" Mar 13 00:48:19.603755 kubelet[2756]: I0313 00:48:19.603672 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa"} err="failed to get container status \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"68779fac088523ed611776dacf49d118cadad0ea669fa30f57f414f523cbb0fa\": not found" Mar 13 00:48:19.603755 kubelet[2756]: I0313 00:48:19.603740 2756 scope.go:117] "RemoveContainer" containerID="2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d" Mar 13 00:48:19.604121 containerd[1581]: time="2026-03-13T00:48:19.604012328Z" level=error msg="ContainerStatus for \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\": not found" Mar 13 00:48:19.604803 kubelet[2756]: E0313 00:48:19.604610 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\": not found" containerID="2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d" Mar 13 00:48:19.604803 kubelet[2756]: I0313 00:48:19.604663 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d"} err="failed to get container status \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2903bc1bbde356e5432198d8d133da93b2a654e76a712441b34e801f68975a7d\": not found" Mar 13 
00:48:19.897568 systemd[1]: var-lib-kubelet-pods-4faad7d8\x2d0159\x2d43ef\x2d8a0f\x2d0338ab29acb0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4746.mount: Deactivated successfully. Mar 13 00:48:19.897782 systemd[1]: var-lib-kubelet-pods-85adadd9\x2d5ab9\x2d406e\x2db45d\x2de48d59355591-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4l72v.mount: Deactivated successfully. Mar 13 00:48:19.897953 systemd[1]: var-lib-kubelet-pods-85adadd9\x2d5ab9\x2d406e\x2db45d\x2de48d59355591-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 13 00:48:19.898165 systemd[1]: var-lib-kubelet-pods-85adadd9\x2d5ab9\x2d406e\x2db45d\x2de48d59355591-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 13 00:48:20.109141 kubelet[2756]: I0313 00:48:20.108848 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4faad7d8-0159-43ef-8a0f-0338ab29acb0" path="/var/lib/kubelet/pods/4faad7d8-0159-43ef-8a0f-0338ab29acb0/volumes" Mar 13 00:48:20.109816 kubelet[2756]: I0313 00:48:20.109714 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85adadd9-5ab9-406e-b45d-e48d59355591" path="/var/lib/kubelet/pods/85adadd9-5ab9-406e-b45d-e48d59355591/volumes" Mar 13 00:48:20.632763 sshd[4403]: Connection closed by 10.0.0.1 port 34732 Mar 13 00:48:20.633834 sshd-session[4400]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:20.650523 systemd[1]: sshd@24-10.0.0.109:22-10.0.0.1:34732.service: Deactivated successfully. Mar 13 00:48:20.653616 systemd[1]: session-25.scope: Deactivated successfully. Mar 13 00:48:20.656144 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit. Mar 13 00:48:20.660285 systemd[1]: Started sshd@25-10.0.0.109:22-10.0.0.1:53078.service - OpenSSH per-connection server daemon (10.0.0.1:53078). Mar 13 00:48:20.666864 systemd-logind[1570]: Removed session 25. 
Mar 13 00:48:20.737396 sshd[4549]: Accepted publickey for core from 10.0.0.1 port 53078 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:48:20.739848 sshd-session[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:20.753443 systemd-logind[1570]: New session 26 of user core. Mar 13 00:48:20.778597 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 13 00:48:21.326142 sshd[4552]: Connection closed by 10.0.0.1 port 53078 Mar 13 00:48:21.325241 sshd-session[4549]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:21.337322 systemd[1]: sshd@25-10.0.0.109:22-10.0.0.1:53078.service: Deactivated successfully. Mar 13 00:48:21.362541 systemd[1]: session-26.scope: Deactivated successfully. Mar 13 00:48:21.366122 systemd-logind[1570]: Session 26 logged out. Waiting for processes to exit. Mar 13 00:48:21.372457 systemd[1]: Started sshd@26-10.0.0.109:22-10.0.0.1:53084.service - OpenSSH per-connection server daemon (10.0.0.1:53084). Mar 13 00:48:21.375679 systemd-logind[1570]: Removed session 26. Mar 13 00:48:21.398855 systemd[1]: Created slice kubepods-burstable-pod70d5c387_5b97_41f5_8707_2f88bb0486c6.slice - libcontainer container kubepods-burstable-pod70d5c387_5b97_41f5_8707_2f88bb0486c6.slice. 
Mar 13 00:48:21.456151 sshd[4564]: Accepted publickey for core from 10.0.0.1 port 53084 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:48:21.460397 sshd-session[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:21.468173 kubelet[2756]: I0313 00:48:21.468011 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70d5c387-5b97-41f5-8707-2f88bb0486c6-host-proc-sys-kernel\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.468639 kubelet[2756]: I0313 00:48:21.468394 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70d5c387-5b97-41f5-8707-2f88bb0486c6-cni-path\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.470262 kubelet[2756]: I0313 00:48:21.469703 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70d5c387-5b97-41f5-8707-2f88bb0486c6-lib-modules\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.470262 kubelet[2756]: I0313 00:48:21.470003 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70d5c387-5b97-41f5-8707-2f88bb0486c6-xtables-lock\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.470262 kubelet[2756]: I0313 00:48:21.470115 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/70d5c387-5b97-41f5-8707-2f88bb0486c6-cilium-cgroup\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.470708 kubelet[2756]: I0313 00:48:21.470572 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70d5c387-5b97-41f5-8707-2f88bb0486c6-cilium-ipsec-secrets\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.470708 kubelet[2756]: I0313 00:48:21.470643 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70d5c387-5b97-41f5-8707-2f88bb0486c6-hostproc\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.471757 kubelet[2756]: I0313 00:48:21.471668 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70d5c387-5b97-41f5-8707-2f88bb0486c6-etc-cni-netd\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.472844 kubelet[2756]: I0313 00:48:21.472786 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70d5c387-5b97-41f5-8707-2f88bb0486c6-clustermesh-secrets\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.473672 kubelet[2756]: I0313 00:48:21.472987 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj6vx\" (UniqueName: \"kubernetes.io/projected/70d5c387-5b97-41f5-8707-2f88bb0486c6-kube-api-access-kj6vx\") pod \"cilium-ftp56\" (UID: 
\"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.473672 kubelet[2756]: I0313 00:48:21.473282 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70d5c387-5b97-41f5-8707-2f88bb0486c6-host-proc-sys-net\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.473672 kubelet[2756]: I0313 00:48:21.473374 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70d5c387-5b97-41f5-8707-2f88bb0486c6-hubble-tls\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.473672 kubelet[2756]: I0313 00:48:21.473389 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70d5c387-5b97-41f5-8707-2f88bb0486c6-cilium-run\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.473672 kubelet[2756]: I0313 00:48:21.473403 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70d5c387-5b97-41f5-8707-2f88bb0486c6-bpf-maps\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.473672 kubelet[2756]: I0313 00:48:21.473515 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70d5c387-5b97-41f5-8707-2f88bb0486c6-cilium-config-path\") pod \"cilium-ftp56\" (UID: \"70d5c387-5b97-41f5-8707-2f88bb0486c6\") " pod="kube-system/cilium-ftp56" Mar 13 00:48:21.473242 systemd-logind[1570]: New session 27 of 
user core. Mar 13 00:48:21.485525 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 13 00:48:21.503315 sshd[4568]: Connection closed by 10.0.0.1 port 53084 Mar 13 00:48:21.504277 sshd-session[4564]: pam_unix(sshd:session): session closed for user core Mar 13 00:48:21.514730 systemd[1]: sshd@26-10.0.0.109:22-10.0.0.1:53084.service: Deactivated successfully. Mar 13 00:48:21.517705 systemd[1]: session-27.scope: Deactivated successfully. Mar 13 00:48:21.519501 systemd-logind[1570]: Session 27 logged out. Waiting for processes to exit. Mar 13 00:48:21.524195 systemd[1]: Started sshd@27-10.0.0.109:22-10.0.0.1:53086.service - OpenSSH per-connection server daemon (10.0.0.1:53086). Mar 13 00:48:21.525426 systemd-logind[1570]: Removed session 27. Mar 13 00:48:21.623383 sshd[4575]: Accepted publickey for core from 10.0.0.1 port 53086 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:48:21.626185 sshd-session[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:48:21.632696 systemd-logind[1570]: New session 28 of user core. Mar 13 00:48:21.658366 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 13 00:48:21.709749 kubelet[2756]: E0313 00:48:21.709676 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:48:21.711599 containerd[1581]: time="2026-03-13T00:48:21.711172456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ftp56,Uid:70d5c387-5b97-41f5-8707-2f88bb0486c6,Namespace:kube-system,Attempt:0,}" Mar 13 00:48:21.747585 containerd[1581]: time="2026-03-13T00:48:21.746462363Z" level=info msg="connecting to shim 37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0" address="unix:///run/containerd/s/17f08016ee828707975a9ceb9072cd79358a46f302b6ac5ce7fa1672ae6ead7c" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:48:21.809420 systemd[1]: Started cri-containerd-37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0.scope - libcontainer container 37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0. 
Mar 13 00:48:21.881294 containerd[1581]: time="2026-03-13T00:48:21.880983109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ftp56,Uid:70d5c387-5b97-41f5-8707-2f88bb0486c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\"" Mar 13 00:48:21.883009 kubelet[2756]: E0313 00:48:21.882950 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:48:21.893208 containerd[1581]: time="2026-03-13T00:48:21.893105139Z" level=info msg="CreateContainer within sandbox \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 00:48:21.911114 containerd[1581]: time="2026-03-13T00:48:21.910537634Z" level=info msg="Container c32c74ea65404e7889d2a1a4446fbf5e8e817f04e2aa48a28207758e517c5d1e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:48:21.920446 containerd[1581]: time="2026-03-13T00:48:21.920300584Z" level=info msg="CreateContainer within sandbox \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c32c74ea65404e7889d2a1a4446fbf5e8e817f04e2aa48a28207758e517c5d1e\"" Mar 13 00:48:21.922293 containerd[1581]: time="2026-03-13T00:48:21.922020285Z" level=info msg="StartContainer for \"c32c74ea65404e7889d2a1a4446fbf5e8e817f04e2aa48a28207758e517c5d1e\"" Mar 13 00:48:21.925143 containerd[1581]: time="2026-03-13T00:48:21.925107201Z" level=info msg="connecting to shim c32c74ea65404e7889d2a1a4446fbf5e8e817f04e2aa48a28207758e517c5d1e" address="unix:///run/containerd/s/17f08016ee828707975a9ceb9072cd79358a46f302b6ac5ce7fa1672ae6ead7c" protocol=ttrpc version=3 Mar 13 00:48:21.966359 systemd[1]: Started cri-containerd-c32c74ea65404e7889d2a1a4446fbf5e8e817f04e2aa48a28207758e517c5d1e.scope - libcontainer 
container c32c74ea65404e7889d2a1a4446fbf5e8e817f04e2aa48a28207758e517c5d1e. Mar 13 00:48:22.034681 containerd[1581]: time="2026-03-13T00:48:22.034566610Z" level=info msg="StartContainer for \"c32c74ea65404e7889d2a1a4446fbf5e8e817f04e2aa48a28207758e517c5d1e\" returns successfully" Mar 13 00:48:22.069517 systemd[1]: cri-containerd-c32c74ea65404e7889d2a1a4446fbf5e8e817f04e2aa48a28207758e517c5d1e.scope: Deactivated successfully. Mar 13 00:48:22.074308 containerd[1581]: time="2026-03-13T00:48:22.074116211Z" level=info msg="received container exit event container_id:\"c32c74ea65404e7889d2a1a4446fbf5e8e817f04e2aa48a28207758e517c5d1e\" id:\"c32c74ea65404e7889d2a1a4446fbf5e8e817f04e2aa48a28207758e517c5d1e\" pid:4646 exited_at:{seconds:1773362902 nanos:72806656}" Mar 13 00:48:22.179929 kubelet[2756]: E0313 00:48:22.179723 2756 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 13 00:48:22.500684 kubelet[2756]: E0313 00:48:22.498513 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:48:22.510125 containerd[1581]: time="2026-03-13T00:48:22.509823206Z" level=info msg="CreateContainer within sandbox \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 00:48:22.530688 containerd[1581]: time="2026-03-13T00:48:22.530556977Z" level=info msg="Container 2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:48:22.553814 containerd[1581]: time="2026-03-13T00:48:22.553707251Z" level=info msg="CreateContainer within sandbox \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} 
returns container id \"2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11\""
Mar 13 00:48:22.555366 containerd[1581]: time="2026-03-13T00:48:22.555178011Z" level=info msg="StartContainer for \"2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11\""
Mar 13 00:48:22.556711 containerd[1581]: time="2026-03-13T00:48:22.556525388Z" level=info msg="connecting to shim 2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11" address="unix:///run/containerd/s/17f08016ee828707975a9ceb9072cd79358a46f302b6ac5ce7fa1672ae6ead7c" protocol=ttrpc version=3
Mar 13 00:48:22.589016 systemd[1]: Started cri-containerd-2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11.scope - libcontainer container 2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11.
Mar 13 00:48:22.654877 containerd[1581]: time="2026-03-13T00:48:22.654798249Z" level=info msg="StartContainer for \"2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11\" returns successfully"
Mar 13 00:48:22.663143 systemd[1]: cri-containerd-2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11.scope: Deactivated successfully.
Mar 13 00:48:22.664022 containerd[1581]: time="2026-03-13T00:48:22.663832105Z" level=info msg="received container exit event container_id:\"2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11\" id:\"2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11\" pid:4692 exited_at:{seconds:1773362902 nanos:663576739}"
Mar 13 00:48:22.700828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2de2122d55a59c4264080be049c96122ae895c33e9f2d83477a0f46d35284c11-rootfs.mount: Deactivated successfully.
Mar 13 00:48:23.520119 kubelet[2756]: E0313 00:48:23.519422 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:23.533879 containerd[1581]: time="2026-03-13T00:48:23.533787134Z" level=info msg="CreateContainer within sandbox \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 13 00:48:23.562468 containerd[1581]: time="2026-03-13T00:48:23.562370354Z" level=info msg="Container a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:48:23.568344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3996118543.mount: Deactivated successfully.
Mar 13 00:48:23.578922 containerd[1581]: time="2026-03-13T00:48:23.578804126Z" level=info msg="CreateContainer within sandbox \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb\""
Mar 13 00:48:23.580613 containerd[1581]: time="2026-03-13T00:48:23.580463578Z" level=info msg="StartContainer for \"a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb\""
Mar 13 00:48:23.583805 containerd[1581]: time="2026-03-13T00:48:23.583774601Z" level=info msg="connecting to shim a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb" address="unix:///run/containerd/s/17f08016ee828707975a9ceb9072cd79358a46f302b6ac5ce7fa1672ae6ead7c" protocol=ttrpc version=3
Mar 13 00:48:23.617362 systemd[1]: Started cri-containerd-a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb.scope - libcontainer container a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb.
Mar 13 00:48:23.745282 containerd[1581]: time="2026-03-13T00:48:23.745241913Z" level=info msg="StartContainer for \"a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb\" returns successfully"
Mar 13 00:48:23.751995 systemd[1]: cri-containerd-a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb.scope: Deactivated successfully.
Mar 13 00:48:23.755240 containerd[1581]: time="2026-03-13T00:48:23.755201671Z" level=info msg="received container exit event container_id:\"a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb\" id:\"a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb\" pid:4737 exited_at:{seconds:1773362903 nanos:754846571}"
Mar 13 00:48:23.790462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a392174206ae3a5798077aec1c41538255f46660dffc6a6ba928f2dd47504cbb-rootfs.mount: Deactivated successfully.
Mar 13 00:48:23.887422 kubelet[2756]: I0313 00:48:23.887308 2756 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T00:48:23Z","lastTransitionTime":"2026-03-13T00:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 13 00:48:24.526610 kubelet[2756]: E0313 00:48:24.526550 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:24.532777 containerd[1581]: time="2026-03-13T00:48:24.532527279Z" level=info msg="CreateContainer within sandbox \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 13 00:48:24.548737 containerd[1581]: time="2026-03-13T00:48:24.548554547Z" level=info msg="Container 1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:48:24.571770 containerd[1581]: time="2026-03-13T00:48:24.571731143Z" level=info msg="CreateContainer within sandbox \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135\""
Mar 13 00:48:24.572654 containerd[1581]: time="2026-03-13T00:48:24.572476460Z" level=info msg="StartContainer for \"1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135\""
Mar 13 00:48:24.573385 containerd[1581]: time="2026-03-13T00:48:24.573298520Z" level=info msg="connecting to shim 1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135" address="unix:///run/containerd/s/17f08016ee828707975a9ceb9072cd79358a46f302b6ac5ce7fa1672ae6ead7c" protocol=ttrpc version=3
Mar 13 00:48:24.625248 systemd[1]: Started cri-containerd-1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135.scope - libcontainer container 1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135.
Mar 13 00:48:24.715615 systemd[1]: cri-containerd-1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135.scope: Deactivated successfully.
Mar 13 00:48:24.717616 containerd[1581]: time="2026-03-13T00:48:24.717509285Z" level=info msg="received container exit event container_id:\"1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135\" id:\"1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135\" pid:4777 exited_at:{seconds:1773362904 nanos:715857923}"
Mar 13 00:48:24.728150 containerd[1581]: time="2026-03-13T00:48:24.728104493Z" level=info msg="StartContainer for \"1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135\" returns successfully"
Mar 13 00:48:24.747391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b05d868b0e15ddfda5cfe478dc3521f7771691da2fbb9e6aadeae06b1e5c135-rootfs.mount: Deactivated successfully.
Mar 13 00:48:25.536019 kubelet[2756]: E0313 00:48:25.535809 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:25.544546 containerd[1581]: time="2026-03-13T00:48:25.544503311Z" level=info msg="CreateContainer within sandbox \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 13 00:48:25.567975 containerd[1581]: time="2026-03-13T00:48:25.567859176Z" level=info msg="Container 936fb624eca05e46a3586c1894b7a7fbbe350c527b81be66368d578cb2a3efbe: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:48:25.582979 containerd[1581]: time="2026-03-13T00:48:25.582737892Z" level=info msg="CreateContainer within sandbox \"37f273bd71e889e945dceea5b75778b5ee20d218997c7deebc13535e8c24bcc0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"936fb624eca05e46a3586c1894b7a7fbbe350c527b81be66368d578cb2a3efbe\""
Mar 13 00:48:25.584238 containerd[1581]: time="2026-03-13T00:48:25.583945937Z" level=info msg="StartContainer for \"936fb624eca05e46a3586c1894b7a7fbbe350c527b81be66368d578cb2a3efbe\""
Mar 13 00:48:25.586492 containerd[1581]: time="2026-03-13T00:48:25.586419529Z" level=info msg="connecting to shim 936fb624eca05e46a3586c1894b7a7fbbe350c527b81be66368d578cb2a3efbe" address="unix:///run/containerd/s/17f08016ee828707975a9ceb9072cd79358a46f302b6ac5ce7fa1672ae6ead7c" protocol=ttrpc version=3
Mar 13 00:48:25.622311 systemd[1]: Started cri-containerd-936fb624eca05e46a3586c1894b7a7fbbe350c527b81be66368d578cb2a3efbe.scope - libcontainer container 936fb624eca05e46a3586c1894b7a7fbbe350c527b81be66368d578cb2a3efbe.
Mar 13 00:48:25.705673 containerd[1581]: time="2026-03-13T00:48:25.705447605Z" level=info msg="StartContainer for \"936fb624eca05e46a3586c1894b7a7fbbe350c527b81be66368d578cb2a3efbe\" returns successfully"
Mar 13 00:48:26.336380 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 13 00:48:26.552129 kubelet[2756]: E0313 00:48:26.551835 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:26.576127 kubelet[2756]: I0313 00:48:26.575812 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ftp56" podStartSLOduration=5.575798929 podStartE2EDuration="5.575798929s" podCreationTimestamp="2026-03-13 00:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:48:26.57459473 +0000 UTC m=+94.632652241" watchObservedRunningTime="2026-03-13 00:48:26.575798929 +0000 UTC m=+94.633856439"
Mar 13 00:48:27.709123 kubelet[2756]: E0313 00:48:27.707084 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:30.450859 systemd-networkd[1469]: lxc_health: Link UP
Mar 13 00:48:30.451383 systemd-networkd[1469]: lxc_health: Gained carrier
Mar 13 00:48:31.707964 kubelet[2756]: E0313 00:48:31.707801 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:31.824347 systemd-networkd[1469]: lxc_health: Gained IPv6LL
Mar 13 00:48:32.102204 kubelet[2756]: E0313 00:48:32.101321 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:32.577886 kubelet[2756]: E0313 00:48:32.577850 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:33.100832 kubelet[2756]: E0313 00:48:33.100691 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:33.577643 kubelet[2756]: E0313 00:48:33.577178 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:34.100672 kubelet[2756]: E0313 00:48:34.100638 2756 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:48:36.981740 sshd[4582]: Connection closed by 10.0.0.1 port 53086
Mar 13 00:48:36.982480 sshd-session[4575]: pam_unix(sshd:session): session closed for user core
Mar 13 00:48:36.988471 systemd[1]: sshd@27-10.0.0.109:22-10.0.0.1:53086.service: Deactivated successfully.
Mar 13 00:48:36.990702 systemd[1]: session-28.scope: Deactivated successfully.
Mar 13 00:48:36.992370 systemd-logind[1570]: Session 28 logged out. Waiting for processes to exit.
Mar 13 00:48:36.994599 systemd-logind[1570]: Removed session 28.