Mar 4 00:58:53.890610 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 22:42:33 -00 2026 Mar 4 00:58:53.890643 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 00:58:53.890663 kernel: BIOS-provided physical RAM map: Mar 4 00:58:53.890673 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 4 00:58:53.890683 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 4 00:58:53.890693 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 4 00:58:53.890705 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 4 00:58:53.890714 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 4 00:58:53.890722 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 4 00:58:53.890730 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 4 00:58:53.890743 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 4 00:58:53.890753 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 4 00:58:53.890764 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 4 00:58:53.890772 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 4 00:58:53.890782 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 4 00:58:53.890791 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 4 00:58:53.890806 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 4 00:58:53.890818 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 4 00:58:53.890827 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 4 00:58:53.890836 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 4 00:58:53.890844 kernel: NX (Execute Disable) protection: active Mar 4 00:58:53.890853 kernel: APIC: Static calls initialized Mar 4 00:58:53.890864 kernel: efi: EFI v2.7 by EDK II Mar 4 00:58:53.890874 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 4 00:58:53.890883 kernel: SMBIOS 2.8 present. Mar 4 00:58:53.890891 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 4 00:58:53.890900 kernel: Hypervisor detected: KVM Mar 4 00:58:53.890917 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 4 00:58:53.890925 kernel: kvm-clock: using sched offset of 51882791019 cycles Mar 4 00:58:53.890935 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 4 00:58:53.890945 kernel: tsc: Detected 2445.424 MHz processor Mar 4 00:58:53.890957 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 4 00:58:53.890966 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 4 00:58:53.890975 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 4 00:58:53.890985 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 4 00:58:53.890996 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 4 00:58:53.891009 kernel: Using GB pages for direct mapping Mar 4 00:58:53.891018 kernel: Secure boot disabled Mar 4 00:58:53.891029 kernel: ACPI: Early table checksum verification disabled Mar 4 00:58:53.891040 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 4 00:58:53.891280 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 4 00:58:53.891296 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Mar 4 00:58:53.891308 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 00:58:53.891326 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 4 00:58:53.891338 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 00:58:53.891351 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 00:58:53.891360 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 00:58:53.891370 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 00:58:53.891379 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 4 00:58:53.891390 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 4 00:58:53.891409 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 4 00:58:53.891418 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 4 00:58:53.891427 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 4 00:58:53.891437 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 4 00:58:53.891449 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 4 00:58:53.891459 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 4 00:58:53.891469 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 4 00:58:53.891477 kernel: No NUMA configuration found Mar 4 00:58:53.891489 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 4 00:58:53.891504 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 4 00:58:53.891513 kernel: Zone ranges: Mar 4 00:58:53.891524 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 4 00:58:53.891535 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 4 00:58:53.891547 kernel: Normal empty Mar 4 00:58:53.891558 
kernel: Movable zone start for each node Mar 4 00:58:53.891570 kernel: Early memory node ranges Mar 4 00:58:53.891581 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 4 00:58:53.891592 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 4 00:58:53.891610 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 4 00:58:53.891619 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 4 00:58:53.891628 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 4 00:58:53.891639 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 4 00:58:53.891651 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 4 00:58:53.891661 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 4 00:58:53.891670 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 4 00:58:53.891679 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 4 00:58:53.891690 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 4 00:58:53.891707 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 4 00:58:53.891718 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 4 00:58:53.891727 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 4 00:58:53.891736 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 4 00:58:53.891747 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 4 00:58:53.891759 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 4 00:58:53.891769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 4 00:58:53.891778 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 4 00:58:53.891787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 4 00:58:53.891805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 4 00:58:53.891815 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 4 00:58:53.891824 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Mar 4 00:58:53.891834 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 4 00:58:53.891846 kernel: TSC deadline timer available Mar 4 00:58:53.891855 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 4 00:58:53.891864 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 4 00:58:53.891874 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 4 00:58:53.891884 kernel: kvm-guest: setup PV sched yield Mar 4 00:58:53.891901 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 4 00:58:53.891913 kernel: Booting paravirtualized kernel on KVM Mar 4 00:58:53.891924 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 4 00:58:53.891935 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 4 00:58:53.891947 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 4 00:58:53.891959 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 4 00:58:53.891969 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 4 00:58:53.891978 kernel: kvm-guest: PV spinlocks enabled Mar 4 00:58:53.891987 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 4 00:58:53.892007 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 00:58:53.892018 kernel: random: crng init done Mar 4 00:58:53.892268 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 4 00:58:53.892280 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 4 00:58:53.892289 kernel: Fallback order for Node 0: 0 Mar 4 00:58:53.892301 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 629759 Mar 4 00:58:53.892311 kernel: Policy zone: DMA32 Mar 4 00:58:53.892319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 4 00:58:53.892329 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 4 00:58:53.892347 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 4 00:58:53.892359 kernel: ftrace: allocating 37996 entries in 149 pages Mar 4 00:58:53.892370 kernel: ftrace: allocated 149 pages with 4 groups Mar 4 00:58:53.892382 kernel: Dynamic Preempt: voluntary Mar 4 00:58:53.892393 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 4 00:58:53.892427 kernel: rcu: RCU event tracing is enabled. Mar 4 00:58:53.892441 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 4 00:58:53.892451 kernel: Trampoline variant of Tasks RCU enabled. Mar 4 00:58:53.892464 kernel: Rude variant of Tasks RCU enabled. Mar 4 00:58:53.892475 kernel: Tracing variant of Tasks RCU enabled. Mar 4 00:58:53.892485 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 4 00:58:53.892500 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 4 00:58:53.892513 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 4 00:58:53.892526 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 4 00:58:53.892536 kernel: Console: colour dummy device 80x25 Mar 4 00:58:53.892546 kernel: printk: console [ttyS0] enabled Mar 4 00:58:53.892561 kernel: ACPI: Core revision 20230628 Mar 4 00:58:53.892574 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 4 00:58:53.892584 kernel: APIC: Switch to symmetric I/O mode setup Mar 4 00:58:53.892594 kernel: x2apic enabled Mar 4 00:58:53.892604 kernel: APIC: Switched APIC routing to: physical x2apic Mar 4 00:58:53.892617 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 4 00:58:53.892628 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 4 00:58:53.892638 kernel: kvm-guest: setup PV IPIs Mar 4 00:58:53.892648 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 4 00:58:53.892665 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 4 00:58:53.892675 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Mar 4 00:58:53.892685 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 4 00:58:53.892696 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 4 00:58:53.892708 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 4 00:58:53.892720 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 4 00:58:53.892733 kernel: Spectre V2 : Mitigation: Retpolines Mar 4 00:58:53.892745 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 4 00:58:53.892757 kernel: Speculative Store Bypass: Vulnerable Mar 4 00:58:53.892776 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 4 00:58:53.892787 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 4 00:58:53.892797 kernel: active return thunk: srso_alias_return_thunk Mar 4 00:58:53.892807 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 4 00:58:53.892820 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 4 00:58:53.892831 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 4 00:58:53.892840 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 4 00:58:53.892850 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 4 00:58:53.892862 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 4 00:58:53.892880 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 4 00:58:53.892890 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 4 00:58:53.892900 kernel: Freeing SMP alternatives memory: 32K Mar 4 00:58:53.892911 kernel: pid_max: default: 32768 minimum: 301 Mar 4 00:58:53.892923 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 4 00:58:53.892934 kernel: landlock: Up and running. Mar 4 00:58:53.892944 kernel: SELinux: Initializing. Mar 4 00:58:53.892954 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 4 00:58:53.892967 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 4 00:58:53.892983 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 4 00:58:53.892993 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 00:58:53.893004 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 00:58:53.893016 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 00:58:53.893025 kernel: Performance Events: PMU not available due to virtualization, using software events only. 
Mar 4 00:58:53.893035 kernel: signal: max sigframe size: 1776 Mar 4 00:58:53.893047 kernel: rcu: Hierarchical SRCU implementation. Mar 4 00:58:53.893374 kernel: rcu: Max phase no-delay instances is 400. Mar 4 00:58:53.893399 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 4 00:58:53.893413 kernel: smp: Bringing up secondary CPUs ... Mar 4 00:58:53.893424 kernel: smpboot: x86: Booting SMP configuration: Mar 4 00:58:53.893434 kernel: .... node #0, CPUs: #1 #2 #3 Mar 4 00:58:53.893443 kernel: smp: Brought up 1 node, 4 CPUs Mar 4 00:58:53.893453 kernel: smpboot: Max logical packages: 1 Mar 4 00:58:53.893465 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Mar 4 00:58:53.893478 kernel: devtmpfs: initialized Mar 4 00:58:53.893489 kernel: x86/mm: Memory block size: 128MB Mar 4 00:58:53.893499 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 4 00:58:53.893514 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 4 00:58:53.893526 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 4 00:58:53.893536 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 4 00:58:53.893546 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 4 00:58:53.893558 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 4 00:58:53.893569 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 4 00:58:53.893582 kernel: pinctrl core: initialized pinctrl subsystem Mar 4 00:58:53.893595 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 4 00:58:53.893612 kernel: audit: initializing netlink subsys (disabled) Mar 4 00:58:53.893624 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 4 00:58:53.893636 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 4 
00:58:53.893646 kernel: audit: type=2000 audit(1772585914.995:1): state=initialized audit_enabled=0 res=1 Mar 4 00:58:53.893656 kernel: cpuidle: using governor menu Mar 4 00:58:53.893666 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 4 00:58:53.893679 kernel: dca service started, version 1.12.1 Mar 4 00:58:53.893690 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 4 00:58:53.893699 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 4 00:58:53.893714 kernel: PCI: Using configuration type 1 for base access Mar 4 00:58:53.893727 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 4 00:58:53.893739 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 4 00:58:53.893750 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 4 00:58:53.893759 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 4 00:58:53.893769 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 4 00:58:53.893782 kernel: ACPI: Added _OSI(Module Device) Mar 4 00:58:53.893793 kernel: ACPI: Added _OSI(Processor Device) Mar 4 00:58:53.893803 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 4 00:58:53.893819 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 4 00:58:53.893832 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 4 00:58:53.893843 kernel: ACPI: Interpreter enabled Mar 4 00:58:53.893852 kernel: ACPI: PM: (supports S0 S3 S5) Mar 4 00:58:53.893863 kernel: ACPI: Using IOAPIC for interrupt routing Mar 4 00:58:53.893875 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 4 00:58:53.893885 kernel: PCI: Using E820 reservations for host bridge windows Mar 4 00:58:53.893895 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 4 00:58:53.893907 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 
00-ff]) Mar 4 00:58:53.895293 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 4 00:58:53.895504 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 4 00:58:53.895690 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 4 00:58:53.895707 kernel: PCI host bridge to bus 0000:00 Mar 4 00:58:53.896730 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 4 00:58:53.896915 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 4 00:58:53.897327 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 4 00:58:53.897501 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 4 00:58:53.897677 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 4 00:58:53.897850 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 4 00:58:53.898020 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 4 00:58:53.898692 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 4 00:58:53.899401 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 4 00:58:53.899605 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 4 00:58:53.899794 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 4 00:58:53.899974 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 4 00:58:53.900397 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 4 00:58:53.900592 kernel: pci 0000:00:01.0: efifb_fixup_resources+0x0/0x140 took 22460 usecs Mar 4 00:58:53.900779 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 4 00:58:53.900965 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 34179 usecs Mar 4 00:58:53.902410 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 4 00:58:53.902603 kernel: pci 0000:00:02.0: reg 0x10: [io 
0x6100-0x611f] Mar 4 00:58:53.902798 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 4 00:58:53.902986 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 4 00:58:53.903511 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 4 00:58:53.903699 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 4 00:58:53.903903 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 4 00:58:53.904288 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 4 00:58:53.904715 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 4 00:58:53.904905 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 4 00:58:53.905316 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 4 00:58:53.905509 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 4 00:58:53.905700 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 4 00:58:53.906249 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 4 00:58:53.906451 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 4 00:58:53.906643 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 15625 usecs Mar 4 00:58:53.907479 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 4 00:58:53.907672 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 4 00:58:53.907861 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 4 00:58:53.908420 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 4 00:58:53.908618 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 4 00:58:53.908640 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 4 00:58:53.908653 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 4 00:58:53.908666 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 4 00:58:53.908677 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 
4 00:58:53.908687 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 4 00:58:53.908696 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 4 00:58:53.908708 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 4 00:58:53.908720 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 4 00:58:53.908736 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 4 00:58:53.908746 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 4 00:58:53.908758 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 4 00:58:53.908771 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 4 00:58:53.908781 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 4 00:58:53.908790 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 4 00:58:53.908802 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 4 00:58:53.908814 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 4 00:58:53.908824 kernel: iommu: Default domain type: Translated Mar 4 00:58:53.908839 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 4 00:58:53.908851 kernel: efivars: Registered efivars operations Mar 4 00:58:53.908864 kernel: PCI: Using ACPI for IRQ routing Mar 4 00:58:53.908874 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 4 00:58:53.908883 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 4 00:58:53.908894 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 4 00:58:53.908906 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 4 00:58:53.908918 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 4 00:58:53.909622 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 4 00:58:53.909819 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 4 00:58:53.910011 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 4 00:58:53.910029 kernel: vgaarb: loaded 
Mar 4 00:58:53.910039 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 4 00:58:53.910051 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 4 00:58:53.910264 kernel: clocksource: Switched to clocksource kvm-clock Mar 4 00:58:53.910277 kernel: VFS: Disk quotas dquot_6.6.0 Mar 4 00:58:53.910287 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 4 00:58:53.910297 kernel: pnp: PnP ACPI init Mar 4 00:58:53.910923 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 4 00:58:53.910942 kernel: pnp: PnP ACPI: found 6 devices Mar 4 00:58:53.910956 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 4 00:58:53.910966 kernel: NET: Registered PF_INET protocol family Mar 4 00:58:53.910976 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 4 00:58:53.910986 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 4 00:58:53.910999 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 4 00:58:53.911011 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 4 00:58:53.911031 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 4 00:58:53.911041 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 4 00:58:53.911051 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 4 00:58:53.911249 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 4 00:58:53.911262 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 4 00:58:53.911272 kernel: NET: Registered PF_XDP protocol family Mar 4 00:58:53.911466 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 4 00:58:53.911659 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 4 00:58:53.911842 kernel: pci_bus 0000:00: resource 4 
[io 0x0000-0x0cf7 window] Mar 4 00:58:53.912012 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 4 00:58:53.912757 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 4 00:58:53.913003 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 4 00:58:53.913437 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 4 00:58:53.913612 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 4 00:58:53.913633 kernel: PCI: CLS 0 bytes, default 64 Mar 4 00:58:53.913644 kernel: Initialise system trusted keyrings Mar 4 00:58:53.913661 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 4 00:58:53.913672 kernel: Key type asymmetric registered Mar 4 00:58:53.913684 kernel: Asymmetric key parser 'x509' registered Mar 4 00:58:53.913695 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 4 00:58:53.913705 kernel: io scheduler mq-deadline registered Mar 4 00:58:53.913715 kernel: io scheduler kyber registered Mar 4 00:58:53.913726 kernel: io scheduler bfq registered Mar 4 00:58:53.913739 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 4 00:58:53.913752 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 4 00:58:53.913767 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 4 00:58:53.913777 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 4 00:58:53.913790 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 4 00:58:53.913802 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 4 00:58:53.913812 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 4 00:58:53.913822 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 4 00:58:53.913833 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 4 00:58:53.914301 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 4 00:58:53.914480 kernel: rtc_cmos 00:04: registered as rtc0 Mar 4 00:58:53.914507 kernel: input: AT 
Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 4 00:58:53.914678 kernel: rtc_cmos 00:04: setting system clock to 2026-03-04T00:58:51 UTC (1772585931) Mar 4 00:58:53.914851 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 4 00:58:53.914867 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 4 00:58:53.914880 kernel: efifb: probing for efifb Mar 4 00:58:53.914892 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 4 00:58:53.914902 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 4 00:58:53.914912 kernel: efifb: scrolling: redraw Mar 4 00:58:53.914930 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 4 00:58:53.914940 kernel: Console: switching to colour frame buffer device 100x37 Mar 4 00:58:53.914950 kernel: fb0: EFI VGA frame buffer device Mar 4 00:58:53.914961 kernel: pstore: Using crash dump compression: deflate Mar 4 00:58:53.914974 kernel: pstore: Registered efi_pstore as persistent store backend Mar 4 00:58:53.914984 kernel: NET: Registered PF_INET6 protocol family Mar 4 00:58:53.914993 kernel: Segment Routing with IPv6 Mar 4 00:58:53.915006 kernel: In-situ OAM (IOAM) with IPv6 Mar 4 00:58:53.915048 kernel: NET: Registered PF_PACKET protocol family Mar 4 00:58:53.915265 kernel: Key type dns_resolver registered Mar 4 00:58:53.915277 kernel: IPI shorthand broadcast: enabled Mar 4 00:58:53.915289 kernel: sched_clock: Marking stable (12397094262, 3992487904)->(20387239431, -3997657265) Mar 4 00:58:53.915303 kernel: registered taskstats version 1 Mar 4 00:58:53.915314 kernel: Loading compiled-in X.509 certificates Mar 4 00:58:53.915324 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: be1dcbe3e3dee66976c19d61f4b179b405e1c498' Mar 4 00:58:53.915335 kernel: Key type .fscrypt registered Mar 4 00:58:53.915348 kernel: Key type fscrypt-provisioning registered Mar 4 00:58:53.915361 kernel: ima: No TPM chip 
found, activating TPM-bypass! Mar 4 00:58:53.915376 kernel: ima: Allocated hash algorithm: sha1 Mar 4 00:58:53.915387 kernel: ima: No architecture policies found Mar 4 00:58:53.915400 kernel: clk: Disabling unused clocks Mar 4 00:58:53.915413 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 4 00:58:53.915426 kernel: Write protecting the kernel read-only data: 36864k Mar 4 00:58:53.915439 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 4 00:58:53.915451 kernel: Run /init as init process Mar 4 00:58:53.915462 kernel: with arguments: Mar 4 00:58:53.915472 kernel: /init Mar 4 00:58:53.915490 kernel: with environment: Mar 4 00:58:53.915502 kernel: HOME=/ Mar 4 00:58:53.915512 kernel: TERM=linux Mar 4 00:58:53.915525 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 00:58:53.915541 systemd[1]: Detected virtualization kvm. Mar 4 00:58:53.915553 systemd[1]: Detected architecture x86-64. Mar 4 00:58:53.915563 systemd[1]: Running in initrd. Mar 4 00:58:53.915580 systemd[1]: No hostname configured, using default hostname. Mar 4 00:58:53.915591 systemd[1]: Hostname set to . Mar 4 00:58:53.915602 systemd[1]: Initializing machine ID from VM UUID. Mar 4 00:58:53.915614 systemd[1]: Queued start job for default target initrd.target. Mar 4 00:58:53.915627 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 00:58:53.915641 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 00:58:53.915661 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 4 00:58:53.915675 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:58:53.915689 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 00:58:53.915702 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 00:58:53.915714 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 00:58:53.915726 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 00:58:53.915746 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:58:53.915758 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:58:53.915768 systemd[1]: Reached target paths.target - Path Units.
Mar 4 00:58:53.915780 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:58:53.915793 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:58:53.915807 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 00:58:53.915818 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:58:53.915828 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:58:53.915847 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 00:58:53.915860 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 00:58:53.915871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:58:53.915882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:58:53.915897 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:58:53.915908 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 00:58:53.915919 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 00:58:53.915931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:58:53.915943 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 00:58:53.915959 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 00:58:53.915971 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:58:53.915985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:58:53.916032 systemd-journald[194]: Collecting audit messages is disabled.
Mar 4 00:58:53.916323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:58:53.916337 systemd-journald[194]: Journal started
Mar 4 00:58:53.916364 systemd-journald[194]: Runtime Journal (/run/log/journal/9f3297ad38654ea6bd51a88cd1374a65) is 6.0M, max 48.3M, 42.2M free.
Mar 4 00:58:53.947529 systemd-modules-load[195]: Inserted module 'overlay'
Mar 4 00:58:53.985867 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:58:53.999328 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 00:58:54.019275 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:58:54.031818 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 00:58:54.076966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:58:54.180330 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 00:58:54.183511 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:58:54.216943 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:58:54.248967 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:58:54.257013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:58:54.288417 kernel: Bridge firewalling registered
Mar 4 00:58:54.277619 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:58:54.332708 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 4 00:58:54.335925 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:58:54.361528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:58:54.373745 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:58:54.386901 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:58:54.454675 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:58:54.482715 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 00:58:54.499872 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:58:54.539716 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 00:58:54.565985 dracut-cmdline[231]: dracut-dracut-053
Mar 4 00:58:54.577615 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 00:58:54.677718 systemd-resolved[234]: Positive Trust Anchors:
Mar 4 00:58:54.677817 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 00:58:54.677863 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 00:58:54.769578 systemd-resolved[234]: Defaulting to hostname 'linux'.
Mar 4 00:58:54.782712 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 00:58:54.794698 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:58:54.866457 kernel: SCSI subsystem initialized
Mar 4 00:58:54.890430 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 00:58:54.943806 kernel: iscsi: registered transport (tcp)
Mar 4 00:58:55.004601 kernel: iscsi: registered transport (qla4xxx)
Mar 4 00:58:55.004678 kernel: QLogic iSCSI HBA Driver
Mar 4 00:58:55.181368 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:58:55.211538 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 00:58:55.381815 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 00:58:55.381916 kernel: device-mapper: uevent: version 1.0.3
Mar 4 00:58:55.394651 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 00:58:55.551336 kernel: raid6: avx2x4 gen() 13769 MB/s
Mar 4 00:58:55.571744 kernel: raid6: avx2x2 gen() 23775 MB/s
Mar 4 00:58:55.601449 kernel: raid6: avx2x1 gen() 13839 MB/s
Mar 4 00:58:55.601530 kernel: raid6: using algorithm avx2x2 gen() 23775 MB/s
Mar 4 00:58:55.632894 kernel: raid6: .... xor() 14778 MB/s, rmw enabled
Mar 4 00:58:55.632965 kernel: raid6: using avx2x2 recovery algorithm
Mar 4 00:58:55.685694 kernel: xor: automatically using best checksumming function avx
Mar 4 00:58:56.641779 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 00:58:56.707322 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:58:56.745953 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:58:56.832306 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Mar 4 00:58:56.858414 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:58:56.921895 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 00:58:56.968767 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Mar 4 00:58:57.071816 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:58:57.115897 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 00:58:57.750863 kernel: hrtimer: interrupt took 8902214 ns
Mar 4 00:58:57.851690 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:58:58.042856 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 00:58:58.169697 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:58:58.246053 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:58:58.308415 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:58:58.361973 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 00:58:58.532495 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 00:58:58.673285 kernel: cryptd: max_cpu_qlen set to 1000
Mar 4 00:58:58.735911 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:58:58.776866 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:58:58.777516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:58:58.840059 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:58:58.867666 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:58:58.868059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:58:58.885635 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:58:59.031546 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 4 00:58:59.042853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:58:59.190678 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 4 00:58:59.191529 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 4 00:58:59.191549 kernel: GPT:9289727 != 19775487
Mar 4 00:58:59.191564 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 4 00:58:59.191578 kernel: GPT:9289727 != 19775487
Mar 4 00:58:59.191591 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 4 00:58:59.191611 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 00:58:59.183680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:58:59.183877 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:58:59.366832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 00:58:59.486492 kernel: libata version 3.00 loaded.
Mar 4 00:58:59.541389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:58:59.576439 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 4 00:58:59.590927 kernel: AES CTR mode by8 optimization enabled
Mar 4 00:58:59.606933 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 00:58:59.652664 kernel: ahci 0000:00:1f.2: version 3.0
Mar 4 00:58:59.673847 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468)
Mar 4 00:58:59.673914 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 4 00:58:59.669746 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 4 00:58:59.729060 kernel: BTRFS: device fsid 251c1416-ef37-47f1-be3f-832af5870605 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (465)
Mar 4 00:58:59.792736 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 4 00:58:59.798712 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 4 00:58:59.799059 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 4 00:58:59.858359 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 4 00:58:59.864986 kernel: scsi host0: ahci
Mar 4 00:58:59.861284 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 4 00:58:59.881360 kernel: scsi host1: ahci
Mar 4 00:58:59.886048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 00:59:00.047406 kernel: scsi host2: ahci
Mar 4 00:59:00.047731 kernel: scsi host3: ahci
Mar 4 00:59:00.047971 kernel: scsi host4: ahci
Mar 4 00:59:00.050583 kernel: scsi host5: ahci
Mar 4 00:59:00.050905 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 4 00:59:00.050926 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 4 00:59:00.050951 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 4 00:59:00.050969 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 4 00:59:00.050986 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 4 00:59:00.051003 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 4 00:59:00.083920 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 4 00:59:00.131517 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:59:00.164768 disk-uuid[575]: Primary Header is updated.
Mar 4 00:59:00.164768 disk-uuid[575]: Secondary Entries is updated.
Mar 4 00:59:00.164768 disk-uuid[575]: Secondary Header is updated.
Mar 4 00:59:00.234048 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 00:59:00.357964 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 4 00:59:00.380744 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 4 00:59:00.411399 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 4 00:59:00.425606 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 4 00:59:00.447494 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 4 00:59:00.447630 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 4 00:59:00.454275 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 4 00:59:00.484547 kernel: ata3.00: applying bridge limits
Mar 4 00:59:00.511426 kernel: ata3.00: configured for UDMA/100
Mar 4 00:59:00.532406 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 4 00:59:00.683680 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 4 00:59:00.684482 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 4 00:59:00.710279 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 4 00:59:01.246271 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 00:59:01.261854 disk-uuid[577]: The operation has completed successfully.
Mar 4 00:59:01.302435 kernel: block device autoloading is deprecated and will be removed.
Mar 4 00:59:01.500968 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 4 00:59:01.502663 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 4 00:59:01.568614 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 4 00:59:01.663462 sh[603]: Success
Mar 4 00:59:01.826435 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 4 00:59:02.047414 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 4 00:59:02.093608 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 4 00:59:02.118669 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 4 00:59:02.198409 kernel: BTRFS info (device dm-0): first mount of filesystem 251c1416-ef37-47f1-be3f-832af5870605
Mar 4 00:59:02.198472 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 4 00:59:02.198488 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 4 00:59:02.220357 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 4 00:59:02.220413 kernel: BTRFS info (device dm-0): using free space tree
Mar 4 00:59:02.311808 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 4 00:59:02.323806 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 4 00:59:02.362652 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 4 00:59:02.402404 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 4 00:59:02.464500 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 00:59:02.464580 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 00:59:02.464597 kernel: BTRFS info (device vda6): using free space tree
Mar 4 00:59:02.512392 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 00:59:02.562578 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 4 00:59:02.597817 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 00:59:02.653495 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 4 00:59:02.700683 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 4 00:59:03.266652 ignition[699]: Ignition 2.19.0
Mar 4 00:59:03.267013 ignition[699]: Stage: fetch-offline
Mar 4 00:59:03.272281 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:59:03.272299 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 00:59:03.272929 ignition[699]: parsed url from cmdline: ""
Mar 4 00:59:03.272937 ignition[699]: no config URL provided
Mar 4 00:59:03.272948 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 00:59:03.272963 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Mar 4 00:59:03.273000 ignition[699]: op(1): [started] loading QEMU firmware config module
Mar 4 00:59:03.348846 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:59:03.273010 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 4 00:59:03.404327 ignition[699]: op(1): [finished] loading QEMU firmware config module
Mar 4 00:59:03.427436 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 00:59:03.551435 systemd-networkd[792]: lo: Link UP
Mar 4 00:59:03.551520 systemd-networkd[792]: lo: Gained carrier
Mar 4 00:59:03.555916 systemd-networkd[792]: Enumeration completed
Mar 4 00:59:03.556036 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 00:59:03.559705 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:59:03.559711 systemd-networkd[792]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 00:59:03.560857 systemd[1]: Reached target network.target - Network.
Mar 4 00:59:03.581294 systemd-networkd[792]: eth0: Link UP
Mar 4 00:59:03.581303 systemd-networkd[792]: eth0: Gained carrier
Mar 4 00:59:03.581317 systemd-networkd[792]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 00:59:03.742918 systemd-networkd[792]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 4 00:59:05.040641 ignition[699]: parsing config with SHA512: 8ad2b8538ddf2997af8cbf8eeb3af23a901cf7a19ba728451beaac2d9cb1094c94f655ce7d8752042b413de11c1f9d29641abff51e28547952d3b5ea0a399230
Mar 4 00:59:05.041571 systemd-networkd[792]: eth0: Gained IPv6LL
Mar 4 00:59:05.157544 unknown[699]: fetched base config from "system"
Mar 4 00:59:05.158581 ignition[699]: fetch-offline: fetch-offline passed
Mar 4 00:59:05.157628 unknown[699]: fetched user config from "qemu"
Mar 4 00:59:05.158969 ignition[699]: Ignition finished successfully
Mar 4 00:59:05.244394 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:59:05.272970 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 4 00:59:05.341843 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 00:59:05.721025 ignition[796]: Ignition 2.19.0
Mar 4 00:59:05.724430 ignition[796]: Stage: kargs
Mar 4 00:59:05.726449 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:59:05.726541 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 00:59:05.824913 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 00:59:05.739641 ignition[796]: kargs: kargs passed
Mar 4 00:59:05.739741 ignition[796]: Ignition finished successfully
Mar 4 00:59:06.080060 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 00:59:06.632811 ignition[804]: Ignition 2.19.0
Mar 4 00:59:06.634564 ignition[804]: Stage: disks
Mar 4 00:59:06.637583 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Mar 4 00:59:06.649056 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 00:59:06.637605 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 00:59:06.660437 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 00:59:06.640553 ignition[804]: disks: disks passed
Mar 4 00:59:06.662423 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 00:59:06.640630 ignition[804]: Ignition finished successfully
Mar 4 00:59:06.662482 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 00:59:06.662532 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 00:59:06.662574 systemd[1]: Reached target basic.target - Basic System.
Mar 4 00:59:06.768565 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 00:59:06.939458 systemd-fsck[815]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 4 00:59:06.969528 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 00:59:07.047758 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 00:59:07.976495 kernel: EXT4-fs (vda9): mounted filesystem 77c4d29a-0423-4e33-8b82-61754d97532c r/w with ordered data mode. Quota mode: none.
Mar 4 00:59:07.980459 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 00:59:07.993526 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 00:59:08.067756 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 00:59:08.147333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 00:59:08.162711 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 4 00:59:08.162771 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 00:59:08.162803 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:59:08.220085 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 00:59:08.333086 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (824)
Mar 4 00:59:08.272751 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 00:59:08.364680 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 00:59:08.364789 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 00:59:08.364810 kernel: BTRFS info (device vda6): using free space tree
Mar 4 00:59:08.426712 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 00:59:08.440350 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 00:59:08.582870 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 00:59:08.624975 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory
Mar 4 00:59:08.642456 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 00:59:08.673395 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 00:59:09.486380 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 00:59:09.549975 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 00:59:09.595295 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 00:59:09.643812 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 00:59:09.691479 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 00:59:09.869629 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 00:59:09.908002 ignition[937]: INFO : Ignition 2.19.0
Mar 4 00:59:09.908002 ignition[937]: INFO : Stage: mount
Mar 4 00:59:09.932494 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:59:09.932494 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 00:59:09.932494 ignition[937]: INFO : mount: mount passed
Mar 4 00:59:09.932494 ignition[937]: INFO : Ignition finished successfully
Mar 4 00:59:09.926009 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 00:59:10.026606 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 00:59:10.094641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 00:59:10.187726 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (951)
Mar 4 00:59:10.225498 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 00:59:10.225574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 00:59:10.225589 kernel: BTRFS info (device vda6): using free space tree
Mar 4 00:59:10.291892 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 00:59:10.317535 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 00:59:10.449388 ignition[968]: INFO : Ignition 2.19.0
Mar 4 00:59:10.449388 ignition[968]: INFO : Stage: files
Mar 4 00:59:10.449388 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:59:10.449388 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 00:59:10.566610 ignition[968]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 00:59:10.566610 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 00:59:10.566610 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 00:59:10.566610 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 00:59:10.566610 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 00:59:10.732446 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 00:59:10.732446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 00:59:10.732446 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 4 00:59:10.574084 unknown[968]: wrote ssh authorized keys file for user: core
Mar 4 00:59:10.840914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 4 00:59:11.295500 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 00:59:11.295500 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 4 00:59:11.295500 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 4 00:59:11.545580 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 4 00:59:11.814472 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 4 00:59:11.853081 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 00:59:11.887692 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 00:59:11.887692 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 00:59:11.887692 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 00:59:11.887692 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 00:59:11.887692 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 00:59:11.887692 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 00:59:11.887692 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 00:59:12.097735 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 00:59:12.097735 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 00:59:12.097735 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 4 00:59:12.097735 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 4 00:59:12.097735 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 4 00:59:12.097735 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 4 00:59:12.339550 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 4 00:59:15.243968 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 4 00:59:15.282843 ignition[968]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 4 00:59:15.308728 ignition[968]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:59:15.308728 ignition[968]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 00:59:15.308728 ignition[968]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 4 00:59:15.308728 ignition[968]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 4 00:59:15.308728 ignition[968]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 4 00:59:15.308728 ignition[968]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 4 00:59:15.308728 ignition[968]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 4 00:59:15.308728 ignition[968]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 4 00:59:15.544007 ignition[968]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 4 00:59:15.563391 ignition[968]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 4 00:59:15.583959 ignition[968]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 4 00:59:15.583959 ignition[968]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 00:59:15.583959 ignition[968]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 00:59:15.583959 ignition[968]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:59:15.583959 ignition[968]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 00:59:15.583959 ignition[968]: INFO : files: files passed
Mar 4 00:59:15.583959 ignition[968]: INFO : Ignition finished successfully
Mar 4 00:59:15.646623 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 00:59:15.747870 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 00:59:15.797493 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 00:59:15.814852 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 00:59:15.815280 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 00:59:15.876455 initrd-setup-root-after-ignition[996]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 4 00:59:15.914830 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:59:15.914830 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:59:15.963781 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 00:59:15.944848 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:59:16.038879 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 00:59:16.081618 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 00:59:16.186557 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 00:59:16.186865 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 00:59:16.216769 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 00:59:16.274590 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 00:59:16.285646 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 00:59:16.322785 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 00:59:16.377373 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:59:16.434953 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 00:59:16.486428 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 00:59:16.514759 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:59:16.543375 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 00:59:16.563483 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 00:59:16.563778 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 00:59:16.595068 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 00:59:16.613747 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 00:59:16.630949 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 00:59:16.654376 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 00:59:16.678977 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 00:59:16.704982 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 00:59:16.732542 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 00:59:16.775983 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 00:59:16.856590 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 00:59:16.867593 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 00:59:16.875832 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 00:59:16.877425 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 00:59:16.894439 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 00:59:16.915906 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:59:16.926669 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 00:59:16.928888 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:59:16.965616 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 00:59:16.965888 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 00:59:17.005696 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 00:59:17.008770 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 00:59:17.038369 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 00:59:17.068821 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 00:59:17.070649 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:59:17.114954 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 00:59:17.165016 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 00:59:17.219555 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 00:59:17.219843 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 00:59:17.257507 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 00:59:17.257652 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 00:59:17.304704 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 00:59:17.304945 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 00:59:17.379282 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 00:59:17.379467 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 00:59:17.452844 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 00:59:17.469565 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 00:59:17.502755 ignition[1023]: INFO : Ignition 2.19.0
Mar 4 00:59:17.502755 ignition[1023]: INFO : Stage: umount
Mar 4 00:59:17.502755 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 00:59:17.502755 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 00:59:17.586926 ignition[1023]: INFO : umount: umount passed
Mar 4 00:59:17.586926 ignition[1023]: INFO : Ignition finished successfully
Mar 4 00:59:17.513857 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 00:59:17.521536 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:59:17.543417 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 00:59:17.543676 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 00:59:17.649501 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 00:59:17.662430 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 00:59:17.674504 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 00:59:17.700460 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 00:59:17.713078 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 00:59:17.737959 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 00:59:17.746778 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 00:59:17.774794 systemd[1]: Stopped target network.target - Network.
Mar 4 00:59:17.784524 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 00:59:17.784628 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 00:59:17.813664 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 00:59:17.813807 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 00:59:17.842640 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 00:59:17.842755 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 00:59:17.856676 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 00:59:17.856803 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 00:59:17.967934 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 00:59:17.968301 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 00:59:17.968842 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 00:59:18.033455 systemd-networkd[792]: eth0: DHCPv6 lease lost
Mar 4 00:59:18.079834 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 00:59:18.102370 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 00:59:18.102718 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 00:59:18.165658 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 00:59:18.171770 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:59:18.239639 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 00:59:18.265014 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 00:59:18.266969 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 00:59:18.323382 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:59:18.391503 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 00:59:18.391953 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 00:59:18.439540 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 00:59:18.439963 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 00:59:18.469643 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 00:59:18.469999 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 00:59:18.498801 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 00:59:18.498918 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:59:18.514562 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 00:59:18.514623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:59:18.527845 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 00:59:18.527929 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 00:59:18.549724 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 00:59:18.549820 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 00:59:18.551825 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 00:59:18.551901 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 00:59:18.607751 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 00:59:18.643951 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 00:59:18.644390 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:59:18.654399 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 00:59:18.654626 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:59:18.787666 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 00:59:18.787861 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:59:18.803892 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 4 00:59:18.805296 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:59:18.818717 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 00:59:18.818781 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:59:18.866434 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 00:59:18.866549 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 00:59:18.901686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 00:59:18.901795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 00:59:18.930430 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 00:59:18.930677 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 00:59:18.971872 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 00:59:19.067558 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 00:59:19.122811 systemd[1]: Switching root.
Mar 4 00:59:19.213834 systemd-journald[194]: Journal stopped
Mar 4 00:59:29.373945 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 4 00:59:29.377913 kernel: SELinux: policy capability network_peer_controls=1
Mar 4 00:59:29.378562 kernel: SELinux: policy capability open_perms=1
Mar 4 00:59:29.378581 kernel: SELinux: policy capability extended_socket_class=1
Mar 4 00:59:29.378597 kernel: SELinux: policy capability always_check_network=0
Mar 4 00:59:29.378693 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 4 00:59:29.378711 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 4 00:59:29.378732 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 4 00:59:29.378746 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 4 00:59:29.378770 kernel: audit: type=1403 audit(1772585959.775:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 4 00:59:29.378945 systemd[1]: Successfully loaded SELinux policy in 369.066ms.
Mar 4 00:59:29.379064 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 77.387ms.
Mar 4 00:59:29.379085 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 00:59:29.382424 systemd[1]: Detected virtualization kvm.
Mar 4 00:59:29.382552 systemd[1]: Detected architecture x86-64.
Mar 4 00:59:29.382569 systemd[1]: Detected first boot.
Mar 4 00:59:29.382585 systemd[1]: Initializing machine ID from VM UUID.
Mar 4 00:59:29.382601 zram_generator::config[1068]: No configuration found.
Mar 4 00:59:29.382619 systemd[1]: Populated /etc with preset unit settings.
Mar 4 00:59:29.382635 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 4 00:59:29.382651 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 4 00:59:29.382732 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 4 00:59:29.382842 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 4 00:59:29.382860 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 4 00:59:29.382956 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 4 00:59:29.382974 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 4 00:59:29.382990 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 4 00:59:29.383008 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 4 00:59:29.383025 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 4 00:59:29.383426 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 4 00:59:29.383452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 00:59:29.383547 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 00:59:29.383565 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 4 00:59:29.383583 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 4 00:59:29.383600 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 4 00:59:29.383617 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 00:59:29.383634 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 4 00:59:29.383649 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 00:59:29.383670 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 4 00:59:29.383681 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 4 00:59:29.383692 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 4 00:59:29.383708 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 4 00:59:29.383719 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 00:59:29.383730 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 00:59:29.383740 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 00:59:29.383751 systemd[1]: Reached target swap.target - Swaps.
Mar 4 00:59:29.383844 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 4 00:59:29.383856 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 4 00:59:29.383867 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 00:59:29.383877 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 00:59:29.383888 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 00:59:29.383900 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 4 00:59:29.383910 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 4 00:59:29.383921 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 4 00:59:29.383932 systemd[1]: Mounting media.mount - External Media Directory...
Mar 4 00:59:29.383946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 00:59:29.383956 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 4 00:59:29.383967 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 4 00:59:29.383977 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 4 00:59:29.384064 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 4 00:59:29.384076 systemd[1]: Reached target machines.target - Containers.
Mar 4 00:59:29.384446 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 4 00:59:29.384464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 00:59:29.384477 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 00:59:29.384488 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 4 00:59:29.384499 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 00:59:29.384509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 00:59:29.384520 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 00:59:29.384530 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 4 00:59:29.384541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 00:59:29.384552 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 4 00:59:29.384562 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 4 00:59:29.384577 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 4 00:59:29.384588 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 4 00:59:29.384598 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 4 00:59:29.384609 kernel: fuse: init (API version 7.39)
Mar 4 00:59:29.384619 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 00:59:29.384630 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 00:59:29.384641 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 4 00:59:29.384703 systemd-journald[1152]: Collecting audit messages is disabled.
Mar 4 00:59:29.384729 systemd-journald[1152]: Journal started
Mar 4 00:59:29.384749 systemd-journald[1152]: Runtime Journal (/run/log/journal/9f3297ad38654ea6bd51a88cd1374a65) is 6.0M, max 48.3M, 42.2M free.
Mar 4 00:59:25.843982 systemd[1]: Queued start job for default target multi-user.target.
Mar 4 00:59:25.956080 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 4 00:59:25.959744 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 4 00:59:25.966357 systemd[1]: systemd-journald.service: Consumed 3.500s CPU time.
Mar 4 00:59:29.920725 kernel: loop: module loaded
Mar 4 00:59:29.921425 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 4 00:59:29.970704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 00:59:30.030904 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 4 00:59:30.030999 systemd[1]: Stopped verity-setup.service.
Mar 4 00:59:30.063462 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 00:59:30.147705 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 00:59:30.148000 kernel: ACPI: bus type drm_connector registered
Mar 4 00:59:30.186487 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 4 00:59:30.212764 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 4 00:59:30.235537 systemd[1]: Mounted media.mount - External Media Directory.
Mar 4 00:59:30.249599 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 4 00:59:30.269784 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 4 00:59:30.292532 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 4 00:59:30.328907 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 4 00:59:30.360665 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 00:59:30.378888 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 4 00:59:30.379656 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 4 00:59:30.395027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 00:59:30.398655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 00:59:30.427759 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 00:59:30.428092 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 00:59:30.452731 systemd[1]: modprobe@drm.service: Consumed 1.358s CPU time.
Mar 4 00:59:30.454090 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 00:59:30.455868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 00:59:30.475836 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 4 00:59:30.476528 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 4 00:59:30.498055 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 00:59:30.500815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 00:59:30.523983 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 00:59:30.547326 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 4 00:59:30.576867 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 4 00:59:30.597615 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 00:59:30.693049 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 4 00:59:30.731993 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 4 00:59:30.770998 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 4 00:59:30.792509 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 4 00:59:30.792660 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 00:59:30.812499 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 4 00:59:30.833526 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 4 00:59:30.857994 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 4 00:59:30.883817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 00:59:30.918541 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 4 00:59:30.937542 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 4 00:59:30.962479 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 00:59:30.979890 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 4 00:59:31.000653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 00:59:31.025911 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 00:59:31.897799 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 4 00:59:33.827019 systemd-journald[1152]: Time spent on flushing to /var/log/journal/9f3297ad38654ea6bd51a88cd1374a65 is 814.808ms for 995 entries.
Mar 4 00:59:33.827019 systemd-journald[1152]: System Journal (/var/log/journal/9f3297ad38654ea6bd51a88cd1374a65) is 8.0M, max 195.6M, 187.6M free.
Mar 4 00:59:34.890617 systemd-journald[1152]: Received client request to flush runtime journal.
Mar 4 00:59:34.891016 kernel: loop0: detected capacity change from 0 to 142488
Mar 4 00:59:34.891064 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 4 00:59:34.891083 kernel: loop1: detected capacity change from 0 to 219192
Mar 4 00:59:33.851654 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 00:59:33.947718 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 4 00:59:34.036654 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 4 00:59:34.097856 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 4 00:59:34.331522 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 4 00:59:34.741452 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 4 00:59:34.782501 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 4 00:59:34.842041 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 4 00:59:36.187463 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 4 00:59:36.225898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 00:59:36.270796 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 4 00:59:36.417661 kernel: loop2: detected capacity change from 0 to 140768
Mar 4 00:59:36.872520 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Mar 4 00:59:36.872544 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Mar 4 00:59:36.946981 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 4 00:59:36.980538 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 00:59:37.029805 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 4 00:59:37.118792 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 4 00:59:37.241570 kernel: loop3: detected capacity change from 0 to 142488
Mar 4 00:59:37.407773 kernel: loop4: detected capacity change from 0 to 219192
Mar 4 00:59:37.421456 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 4 00:59:37.462921 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 00:59:37.576447 kernel: loop5: detected capacity change from 0 to 140768
Mar 4 00:59:37.650816 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Mar 4 00:59:37.650840 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Mar 4 00:59:37.678966 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 00:59:37.734443 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 4 00:59:37.741650 (sd-merge)[1204]: Merged extensions into '/usr'.
Mar 4 00:59:37.766503 systemd[1]: Reloading requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 4 00:59:37.766529 systemd[1]: Reloading...
Mar 4 00:59:38.123624 zram_generator::config[1238]: No configuration found.
Mar 4 00:59:39.540510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 00:59:39.574497 ldconfig[1178]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 4 00:59:39.740077 systemd[1]: Reloading finished in 1972 ms.
Mar 4 00:59:40.005841 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 4 00:59:40.035666 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 4 00:59:40.067790 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 4 00:59:40.149917 systemd[1]: Starting ensure-sysext.service...
Mar 4 00:59:40.196025 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 00:59:40.264669 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 00:59:40.322532 systemd[1]: Reloading requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)...
Mar 4 00:59:40.322555 systemd[1]: Reloading...
Mar 4 00:59:40.454691 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 4 00:59:40.456052 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 4 00:59:40.461850 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 4 00:59:40.462944 systemd-udevd[1275]: Using default interface naming scheme 'v255'.
Mar 4 00:59:40.464668 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Mar 4 00:59:40.464772 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Mar 4 00:59:40.488456 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Mar 4 00:59:40.488472 systemd-tmpfiles[1274]: Skipping /boot Mar 4 00:59:40.541056 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Mar 4 00:59:40.541616 systemd-tmpfiles[1274]: Skipping /boot Mar 4 00:59:40.641517 zram_generator::config[1309]: No configuration found. Mar 4 00:59:40.959587 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1329) Mar 4 00:59:41.105583 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 4 00:59:41.154720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 00:59:41.183558 kernel: ACPI: button: Power Button [PWRF] Mar 4 00:59:41.375520 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 4 00:59:41.375718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 4 00:59:41.385882 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 4 00:59:41.387863 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 4 00:59:41.410562 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 4 00:59:41.411030 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 4 00:59:41.443766 systemd[1]: Reloading finished in 1120 ms. Mar 4 00:59:41.502024 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 4 00:59:41.514069 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 00:59:41.569843 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 4 00:59:41.642796 kernel: mousedev: PS/2 mouse device common for all mice Mar 4 00:59:41.791977 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 00:59:41.940981 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 00:59:41.967697 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 4 00:59:41.989011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 00:59:41.999038 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 00:59:42.820784 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 4 00:59:42.871725 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 00:59:42.907671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 4 00:59:42.938942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 00:59:42.945025 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 4 00:59:43.062900 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 4 00:59:43.163565 augenrules[1392]: No rules Mar 4 00:59:43.293974 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 00:59:43.483758 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 4 00:59:43.577514 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 4 00:59:43.818974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 4 00:59:43.836878 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 00:59:43.933883 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 00:59:44.066075 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 4 00:59:44.382892 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 4 00:59:44.385843 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 4 00:59:44.437082 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 4 00:59:44.443651 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 4 00:59:44.465049 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 00:59:44.466008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 00:59:44.490793 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 00:59:44.491493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 00:59:44.543959 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 4 00:59:44.576009 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 4 00:59:44.796899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 00:59:44.802026 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 00:59:44.939434 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 4 00:59:45.083509 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Mar 4 00:59:45.083915 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 4 00:59:45.088774 systemd[1]: Finished ensure-sysext.service. Mar 4 00:59:45.134887 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 4 00:59:45.243535 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 4 00:59:45.438928 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 4 00:59:45.517825 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 00:59:45.940081 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 4 00:59:46.435336 systemd-networkd[1398]: lo: Link UP Mar 4 00:59:46.435351 systemd-networkd[1398]: lo: Gained carrier Mar 4 00:59:46.446726 systemd-networkd[1398]: Enumeration completed Mar 4 00:59:46.449527 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 00:59:46.454427 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:59:46.454711 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 00:59:46.464828 systemd-networkd[1398]: eth0: Link UP Mar 4 00:59:46.464911 systemd-networkd[1398]: eth0: Gained carrier Mar 4 00:59:46.465547 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 00:59:46.592097 systemd-resolved[1399]: Positive Trust Anchors: Mar 4 00:59:46.592572 systemd-resolved[1399]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 00:59:46.592613 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 00:59:46.603788 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 4 00:59:46.625579 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 4 00:59:46.652877 systemd[1]: Reached target time-set.target - System Time Set. Mar 4 00:59:46.687086 systemd-resolved[1399]: Defaulting to hostname 'linux'. Mar 4 00:59:46.697846 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 4 00:59:46.702849 systemd-timesyncd[1419]: Network configuration changed, trying to establish connection. Mar 4 00:59:46.705501 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 00:59:47.895525 systemd-timesyncd[1419]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 4 00:59:47.895877 systemd-timesyncd[1419]: Initial clock synchronization to Wed 2026-03-04 00:59:47.895279 UTC. Mar 4 00:59:47.897833 systemd-resolved[1399]: Clock change detected. Flushing caches. Mar 4 00:59:47.914539 systemd[1]: Reached target network.target - Network. Mar 4 00:59:47.940336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 4 00:59:49.326299 systemd-networkd[1398]: eth0: Gained IPv6LL Mar 4 00:59:49.347986 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 4 00:59:49.392303 systemd[1]: Reached target network-online.target - Network is Online. Mar 4 00:59:49.538963 kernel: kvm_amd: TSC scaling supported Mar 4 00:59:49.539805 kernel: kvm_amd: Nested Virtualization enabled Mar 4 00:59:49.539833 kernel: kvm_amd: Nested Paging enabled Mar 4 00:59:49.545949 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 4 00:59:49.561009 kernel: kvm_amd: PMU virtualization is disabled Mar 4 00:59:51.028032 kernel: EDAC MC: Ver: 3.0.0 Mar 4 00:59:51.189998 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 4 00:59:51.235525 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 4 00:59:51.310169 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 4 00:59:51.402372 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 4 00:59:51.431122 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 00:59:51.454134 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 00:59:51.480312 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 4 00:59:51.511474 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 4 00:59:51.634258 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 4 00:59:51.715249 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 4 00:59:51.779405 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 4 00:59:51.876535 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 4 00:59:51.902050 systemd[1]: Reached target paths.target - Path Units. Mar 4 00:59:51.926550 systemd[1]: Reached target timers.target - Timer Units. Mar 4 00:59:52.172360 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 4 00:59:52.613994 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 4 00:59:52.658817 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 4 00:59:52.714450 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 4 00:59:52.743352 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 4 00:59:52.762128 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 00:59:52.777982 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 4 00:59:52.787435 systemd[1]: Reached target basic.target - Basic System. Mar 4 00:59:52.819258 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 4 00:59:52.819927 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 4 00:59:52.848015 systemd[1]: Starting containerd.service - containerd container runtime... Mar 4 00:59:52.889467 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 4 00:59:52.914542 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 4 00:59:53.022263 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 4 00:59:53.097989 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 4 00:59:53.120992 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Mar 4 00:59:53.129297 jq[1442]: false Mar 4 00:59:53.140522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 00:59:53.168110 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 4 00:59:53.177922 dbus-daemon[1441]: [system] SELinux support is enabled Mar 4 00:59:53.192392 extend-filesystems[1443]: Found loop3 Mar 4 00:59:53.192392 extend-filesystems[1443]: Found loop4 Mar 4 00:59:53.192392 extend-filesystems[1443]: Found loop5 Mar 4 00:59:53.192392 extend-filesystems[1443]: Found sr0 Mar 4 00:59:53.192392 extend-filesystems[1443]: Found vda Mar 4 00:59:53.192392 extend-filesystems[1443]: Found vda1 Mar 4 00:59:53.192392 extend-filesystems[1443]: Found vda2 Mar 4 00:59:53.192392 extend-filesystems[1443]: Found vda3 Mar 4 00:59:53.192392 extend-filesystems[1443]: Found usr Mar 4 00:59:53.192392 extend-filesystems[1443]: Found vda4 Mar 4 00:59:53.192392 extend-filesystems[1443]: Found vda6 Mar 4 00:59:53.192392 extend-filesystems[1443]: Found vda7 Mar 4 00:59:53.192392 extend-filesystems[1443]: Found vda9 Mar 4 00:59:53.192392 extend-filesystems[1443]: Checking size of /dev/vda9 Mar 4 00:59:54.157921 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 4 00:59:54.158282 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1337) Mar 4 00:59:53.215508 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 4 00:59:54.161814 extend-filesystems[1443]: Resized partition /dev/vda9 Mar 4 00:59:53.296214 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 4 00:59:54.235562 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Mar 4 00:59:53.393122 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 4 00:59:53.474339 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Mar 4 00:59:53.564320 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 4 00:59:53.620138 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 4 00:59:53.647488 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 4 00:59:53.740917 systemd[1]: Starting update-engine.service - Update Engine... Mar 4 00:59:54.083265 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 4 00:59:54.291206 jq[1467]: true Mar 4 00:59:54.419005 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 4 00:59:54.237548 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 4 00:59:54.289139 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 4 00:59:54.417247 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 4 00:59:54.417909 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 4 00:59:54.442009 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 4 00:59:54.442009 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 4 00:59:54.442009 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 4 00:59:54.425229 systemd[1]: motdgen.service: Deactivated successfully. Mar 4 00:59:54.588104 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Mar 4 00:59:54.646912 update_engine[1465]: I20260304 00:59:54.546483 1465 main.cc:92] Flatcar Update Engine starting Mar 4 00:59:54.646912 update_engine[1465]: I20260304 00:59:54.553937 1465 update_check_scheduler.cc:74] Next update check in 7m55s Mar 4 00:59:54.426035 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 4 00:59:54.440559 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 4 00:59:54.554877 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 4 00:59:54.555163 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 4 00:59:54.667923 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 4 00:59:54.669090 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 4 00:59:54.803859 jq[1478]: true Mar 4 00:59:54.817238 systemd-logind[1464]: Watching system buttons on /dev/input/event1 (Power Button) Mar 4 00:59:54.818068 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 4 00:59:54.820411 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 4 00:59:54.831014 systemd-logind[1464]: New seat seat0. Mar 4 00:59:54.853565 systemd[1]: Started systemd-logind.service - User Login Management. Mar 4 00:59:54.958434 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 4 00:59:54.962561 dbus-daemon[1441]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 4 00:59:54.959152 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 4 00:59:55.023483 tar[1477]: linux-amd64/LICENSE Mar 4 00:59:55.045114 tar[1477]: linux-amd64/helm Mar 4 00:59:55.067317 systemd[1]: Started update-engine.service - Update Engine. Mar 4 00:59:55.101358 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 4 00:59:55.103039 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 4 00:59:55.103222 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 4 00:59:55.135505 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 4 00:59:55.135912 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 4 00:59:55.226425 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 4 00:59:55.806388 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Mar 4 00:59:55.813323 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 4 00:59:55.854144 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 4 00:59:57.824188 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 4 00:59:58.178043 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 4 00:59:58.219354 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 4 00:59:58.374936 systemd[1]: issuegen.service: Deactivated successfully. Mar 4 00:59:58.376106 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 4 00:59:58.422348 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 4 00:59:58.480833 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:48708.service - OpenSSH per-connection server daemon (10.0.0.1:48708). Mar 4 00:59:58.514397 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 4 00:59:58.609076 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 4 00:59:58.648538 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 4 00:59:58.684513 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 4 00:59:58.737405 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Mar 4 00:59:58.755205 systemd[1]: Reached target getty.target - Login Prompts. Mar 4 00:59:58.913362 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 48708 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 00:59:58.932063 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 00:59:58.986427 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 4 00:59:59.025245 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 4 00:59:59.070926 systemd-logind[1464]: New session 1 of user core. Mar 4 00:59:59.133050 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 4 00:59:59.182428 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 4 00:59:59.235464 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 4 00:59:59.504021 tar[1477]: linux-amd64/README.md Mar 4 00:59:59.605479 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 4 00:59:59.620074 containerd[1479]: time="2026-03-04T00:59:59.619977266Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 4 00:59:59.662937 containerd[1479]: time="2026-03-04T00:59:59.662887655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:59.675221 containerd[1479]: time="2026-03-04T00:59:59.675174441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:59:59.675357 containerd[1479]: time="2026-03-04T00:59:59.675336825Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 4 00:59:59.675444 containerd[1479]: time="2026-03-04T00:59:59.675424198Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 4 00:59:59.680024 containerd[1479]: time="2026-03-04T00:59:59.676446447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 4 00:59:59.680024 containerd[1479]: time="2026-03-04T00:59:59.678185280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:59.680024 containerd[1479]: time="2026-03-04T00:59:59.678938787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:59:59.680024 containerd[1479]: time="2026-03-04T00:59:59.678963624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:59.680024 containerd[1479]: time="2026-03-04T00:59:59.679239008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:59:59.680024 containerd[1479]: time="2026-03-04T00:59:59.679264556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:59.680024 containerd[1479]: time="2026-03-04T00:59:59.679292458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:59:59.680024 containerd[1479]: time="2026-03-04T00:59:59.679310081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 4 00:59:59.680024 containerd[1479]: time="2026-03-04T00:59:59.679451054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:59.682241 containerd[1479]: time="2026-03-04T00:59:59.682099770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 4 00:59:59.683985 containerd[1479]: time="2026-03-04T00:59:59.683843155Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 00:59:59.684055 containerd[1479]: time="2026-03-04T00:59:59.683984429Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 4 00:59:59.685356 containerd[1479]: time="2026-03-04T00:59:59.685133425Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 4 00:59:59.685356 containerd[1479]: time="2026-03-04T00:59:59.685328910Z" level=info msg="metadata content store policy set" policy=shared Mar 4 00:59:59.737557 containerd[1479]: time="2026-03-04T00:59:59.736318535Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 4 00:59:59.737557 containerd[1479]: time="2026-03-04T00:59:59.736514421Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 4 00:59:59.737557 containerd[1479]: time="2026-03-04T00:59:59.736557562Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 4 00:59:59.737557 containerd[1479]: time="2026-03-04T00:59:59.736845269Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Mar 4 00:59:59.737557 containerd[1479]: time="2026-03-04T00:59:59.736873682Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 4 00:59:59.737557 containerd[1479]: time="2026-03-04T00:59:59.737163673Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 4 00:59:59.752828 containerd[1479]: time="2026-03-04T00:59:59.750807895Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 4 00:59:59.752960 containerd[1479]: time="2026-03-04T00:59:59.752862041Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 4 00:59:59.752960 containerd[1479]: time="2026-03-04T00:59:59.752887208Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 4 00:59:59.752960 containerd[1479]: time="2026-03-04T00:59:59.752905682Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 4 00:59:59.752960 containerd[1479]: time="2026-03-04T00:59:59.752925169Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 4 00:59:59.752960 containerd[1479]: time="2026-03-04T00:59:59.752942030Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 4 00:59:59.752960 containerd[1479]: time="2026-03-04T00:59:59.752958602Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.753380109Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.753412068Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.753434250Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.753451803Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.753468824Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.754441159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.754469743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.754489710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.754506962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.754525878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.754542960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.754557277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.754866013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756139 containerd[1479]: time="2026-03-04T00:59:59.755886950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756912 containerd[1479]: time="2026-03-04T00:59:59.755917146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756912 containerd[1479]: time="2026-03-04T00:59:59.755934187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756912 containerd[1479]: time="2026-03-04T00:59:59.755951631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756912 containerd[1479]: time="2026-03-04T00:59:59.755968472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.756912 containerd[1479]: time="2026-03-04T00:59:59.756815152Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 4 00:59:59.758181 containerd[1479]: time="2026-03-04T00:59:59.756970062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.758181 containerd[1479]: time="2026-03-04T00:59:59.757391980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.758181 containerd[1479]: time="2026-03-04T00:59:59.757411246Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 4 00:59:59.759878 containerd[1479]: time="2026-03-04T00:59:59.759393337Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 4 00:59:59.759878 containerd[1479]: time="2026-03-04T00:59:59.759522087Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 4 00:59:59.759878 containerd[1479]: time="2026-03-04T00:59:59.759541624Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 4 00:59:59.759878 containerd[1479]: time="2026-03-04T00:59:59.759557874Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 4 00:59:59.762481 containerd[1479]: time="2026-03-04T00:59:59.761334872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.763041 containerd[1479]: time="2026-03-04T00:59:59.762549601Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 4 00:59:59.763041 containerd[1479]: time="2026-03-04T00:59:59.762873265Z" level=info msg="NRI interface is disabled by configuration." Mar 4 00:59:59.763041 containerd[1479]: time="2026-03-04T00:59:59.762895848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 4 00:59:59.765856 systemd[1546]: Queued start job for default target default.target. 
Mar 4 00:59:59.780411 containerd[1479]: time="2026-03-04T00:59:59.767928385Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 4 00:59:59.780411 containerd[1479]: time="2026-03-04T00:59:59.769878426Z" level=info msg="Connect containerd service" Mar 4 00:59:59.780411 containerd[1479]: time="2026-03-04T00:59:59.769940883Z" level=info msg="using legacy CRI server" Mar 4 00:59:59.780411 containerd[1479]: time="2026-03-04T00:59:59.769954087Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 4 00:59:59.780411 containerd[1479]: time="2026-03-04T00:59:59.771875826Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 4 00:59:59.780411 containerd[1479]: time="2026-03-04T00:59:59.778160798Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 4 00:59:59.784157 containerd[1479]: time="2026-03-04T00:59:59.782057004Z" level=info msg="Start subscribing containerd event" Mar 4 00:59:59.781185 systemd[1546]: Created slice app.slice - User Application Slice. Mar 4 00:59:59.781214 systemd[1546]: Reached target paths.target - Paths. Mar 4 00:59:59.781235 systemd[1546]: Reached target timers.target - Timers. Mar 4 00:59:59.785126 containerd[1479]: time="2026-03-04T00:59:59.785057107Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 4 00:59:59.785170 containerd[1479]: time="2026-03-04T00:59:59.785139761Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 4 00:59:59.789305 containerd[1479]: time="2026-03-04T00:59:59.782567488Z" level=info msg="Start recovering state" Mar 4 00:59:59.789305 containerd[1479]: time="2026-03-04T00:59:59.787141409Z" level=info msg="Start event monitor" Mar 4 00:59:59.789305 containerd[1479]: time="2026-03-04T00:59:59.787184279Z" level=info msg="Start snapshots syncer" Mar 4 00:59:59.789305 containerd[1479]: time="2026-03-04T00:59:59.787198626Z" level=info msg="Start cni network conf syncer for default" Mar 4 00:59:59.789305 containerd[1479]: time="2026-03-04T00:59:59.787208595Z" level=info msg="Start streaming server" Mar 4 00:59:59.789305 containerd[1479]: time="2026-03-04T00:59:59.788288271Z" level=info msg="containerd successfully booted in 0.178804s" Mar 4 00:59:59.793857 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 4 00:59:59.795087 systemd[1]: Started containerd.service - containerd container runtime. Mar 4 00:59:59.839668 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 4 00:59:59.840096 systemd[1546]: Reached target sockets.target - Sockets. Mar 4 00:59:59.840120 systemd[1546]: Reached target basic.target - Basic System. Mar 4 00:59:59.840563 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 4 00:59:59.842041 systemd[1546]: Reached target default.target - Main User Target. Mar 4 00:59:59.842811 systemd[1546]: Startup finished in 558ms. Mar 4 00:59:59.883205 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 4 01:00:00.031081 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:48716.service - OpenSSH per-connection server daemon (10.0.0.1:48716). 
Mar 4 01:00:00.154354 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 48716 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:00:00.158395 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:00.201089 systemd-logind[1464]: New session 2 of user core. Mar 4 01:00:00.211402 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 4 01:00:00.399506 sshd[1565]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:00.449341 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:48716.service: Deactivated successfully. Mar 4 01:00:00.463998 systemd[1]: session-2.scope: Deactivated successfully. Mar 4 01:00:00.472264 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit. Mar 4 01:00:00.499191 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:48720.service - OpenSSH per-connection server daemon (10.0.0.1:48720). Mar 4 01:00:00.534565 systemd-logind[1464]: Removed session 2. Mar 4 01:00:00.590910 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 48720 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:00:00.603510 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:00.627504 systemd-logind[1464]: New session 3 of user core. Mar 4 01:00:00.645485 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 4 01:00:00.770477 sshd[1572]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:00.782992 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:48720.service: Deactivated successfully. Mar 4 01:00:00.789411 systemd[1]: session-3.scope: Deactivated successfully. Mar 4 01:00:00.802894 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit. Mar 4 01:00:00.823290 systemd-logind[1464]: Removed session 3. Mar 4 01:00:01.472165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 01:00:01.496471 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 4 01:00:01.518146 systemd[1]: Startup finished in 12.878s (kernel) + 27.167s (initrd) + 40.785s (userspace) = 1min 20.832s. Mar 4 01:00:01.571489 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:00:03.946269 kubelet[1582]: E0304 01:00:03.944493 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:00:03.962244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:00:03.962547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:00:03.966830 systemd[1]: kubelet.service: Consumed 4.134s CPU time. Mar 4 01:00:10.901049 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:46232.service - OpenSSH per-connection server daemon (10.0.0.1:46232). Mar 4 01:00:11.089107 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 46232 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:00:11.106141 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:11.260999 systemd-logind[1464]: New session 4 of user core. Mar 4 01:00:11.291436 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 4 01:00:11.644294 sshd[1597]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:11.707937 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:46232.service: Deactivated successfully. Mar 4 01:00:11.712292 systemd[1]: session-4.scope: Deactivated successfully. Mar 4 01:00:11.721522 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit. 
Mar 4 01:00:11.752472 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:46242.service - OpenSSH per-connection server daemon (10.0.0.1:46242). Mar 4 01:00:11.756896 systemd-logind[1464]: Removed session 4. Mar 4 01:00:11.954180 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 46242 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:00:11.957566 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:12.032324 systemd-logind[1464]: New session 5 of user core. Mar 4 01:00:12.053173 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 4 01:00:12.178385 sshd[1604]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:12.236097 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:34638.service - OpenSSH per-connection server daemon (10.0.0.1:34638). Mar 4 01:00:12.241047 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:46242.service: Deactivated successfully. Mar 4 01:00:12.253552 systemd[1]: session-5.scope: Deactivated successfully. Mar 4 01:00:12.268398 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit. Mar 4 01:00:12.341327 systemd-logind[1464]: Removed session 5. Mar 4 01:00:12.427933 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 34638 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:00:12.446469 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:12.502083 systemd-logind[1464]: New session 6 of user core. Mar 4 01:00:12.520520 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 4 01:00:12.668404 sshd[1609]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:12.740290 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:34650.service - OpenSSH per-connection server daemon (10.0.0.1:34650). Mar 4 01:00:12.741303 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:34638.service: Deactivated successfully. 
Mar 4 01:00:12.748302 systemd[1]: session-6.scope: Deactivated successfully. Mar 4 01:00:12.761997 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit. Mar 4 01:00:12.790222 systemd-logind[1464]: Removed session 6. Mar 4 01:00:12.926364 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 34650 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:00:12.946047 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:13.003120 systemd-logind[1464]: New session 7 of user core. Mar 4 01:00:13.026170 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 4 01:00:13.248525 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 4 01:00:13.249363 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:13.343031 sudo[1621]: pam_unix(sudo:session): session closed for user root Mar 4 01:00:13.359049 sshd[1616]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:13.394015 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:34650.service: Deactivated successfully. Mar 4 01:00:13.398887 systemd[1]: session-7.scope: Deactivated successfully. Mar 4 01:00:13.420387 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit. Mar 4 01:00:13.466959 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:34654.service - OpenSSH per-connection server daemon (10.0.0.1:34654). Mar 4 01:00:13.497348 systemd-logind[1464]: Removed session 7. Mar 4 01:00:13.755280 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 34654 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:00:13.785085 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:13.847489 systemd-logind[1464]: New session 8 of user core. Mar 4 01:00:13.876420 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 4 01:00:14.094346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 4 01:00:14.121008 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 4 01:00:14.121879 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:14.137513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:00:14.166540 sudo[1630]: pam_unix(sudo:session): session closed for user root Mar 4 01:00:14.201356 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 4 01:00:14.202375 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:14.338275 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 4 01:00:14.511324 auditctl[1636]: No rules Mar 4 01:00:14.519243 systemd[1]: audit-rules.service: Deactivated successfully. Mar 4 01:00:14.522393 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 4 01:00:14.558024 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 01:00:15.687399 augenrules[1654]: No rules Mar 4 01:00:15.707170 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 01:00:15.736563 sudo[1629]: pam_unix(sudo:session): session closed for user root Mar 4 01:00:15.771916 sshd[1626]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:15.859003 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:34654.service: Deactivated successfully. Mar 4 01:00:15.862282 systemd[1]: session-8.scope: Deactivated successfully. Mar 4 01:00:15.871370 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit. Mar 4 01:00:15.941500 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:34664.service - OpenSSH per-connection server daemon (10.0.0.1:34664). 
Mar 4 01:00:15.950285 systemd-logind[1464]: Removed session 8. Mar 4 01:00:16.102445 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 34664 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:00:16.129479 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:16.211408 systemd-logind[1464]: New session 9 of user core. Mar 4 01:00:16.229363 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 4 01:00:16.456271 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 4 01:00:16.461272 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:17.630196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:00:17.707306 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:00:18.272142 kubelet[1674]: E0304 01:00:18.272092 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:00:18.284353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:00:18.285065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:00:18.286466 systemd[1]: kubelet.service: Consumed 2.115s CPU time. Mar 4 01:00:20.166122 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 4 01:00:20.195184 (dockerd)[1697]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 4 01:00:26.158215 dockerd[1697]: time="2026-03-04T01:00:26.157004091Z" level=info msg="Starting up" Mar 4 01:00:28.335036 systemd[1]: var-lib-docker-metacopy\x2dcheck1781880459-merged.mount: Deactivated successfully. Mar 4 01:00:28.344427 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 4 01:00:28.408356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:00:28.601881 dockerd[1697]: time="2026-03-04T01:00:28.600188778Z" level=info msg="Loading containers: start." Mar 4 01:00:30.613865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:00:30.626056 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:00:31.711067 kernel: Initializing XFRM netlink socket Mar 4 01:00:32.672381 kubelet[1762]: E0304 01:00:32.670869 1762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:00:32.686378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:00:32.688235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:00:32.690563 systemd[1]: kubelet.service: Consumed 3.755s CPU time. Mar 4 01:00:33.025840 systemd-networkd[1398]: docker0: Link UP Mar 4 01:00:33.169461 dockerd[1697]: time="2026-03-04T01:00:33.169149261Z" level=info msg="Loading containers: done." 
Mar 4 01:00:33.282270 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3295319017-merged.mount: Deactivated successfully. Mar 4 01:00:33.328936 dockerd[1697]: time="2026-03-04T01:00:33.326084008Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 4 01:00:33.328936 dockerd[1697]: time="2026-03-04T01:00:33.326469008Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 4 01:00:33.328936 dockerd[1697]: time="2026-03-04T01:00:33.326864536Z" level=info msg="Daemon has completed initialization" Mar 4 01:00:33.753913 dockerd[1697]: time="2026-03-04T01:00:33.752993425Z" level=info msg="API listen on /run/docker.sock" Mar 4 01:00:33.758880 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 4 01:00:38.552486 containerd[1479]: time="2026-03-04T01:00:38.551258612Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 4 01:00:39.707306 update_engine[1465]: I20260304 01:00:39.706187 1465 update_attempter.cc:509] Updating boot flags... Mar 4 01:00:39.986871 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1870) Mar 4 01:00:40.284753 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1873) Mar 4 01:00:40.450781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619834696.mount: Deactivated successfully. Mar 4 01:00:42.872714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 4 01:00:42.916282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:00:44.183212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 01:00:44.227529 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:00:44.471930 kubelet[1906]: E0304 01:00:44.469557 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:00:44.477943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:00:44.478337 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:00:44.479289 systemd[1]: kubelet.service: Consumed 1.193s CPU time. Mar 4 01:00:53.168403 containerd[1479]: time="2026-03-04T01:00:53.164415737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:53.171078 containerd[1479]: time="2026-03-04T01:00:53.170823392Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 4 01:00:53.178866 containerd[1479]: time="2026-03-04T01:00:53.176864372Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:53.225815 containerd[1479]: time="2026-03-04T01:00:53.223317155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:00:53.227808 containerd[1479]: time="2026-03-04T01:00:53.227460661Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id 
\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 14.675676637s" Mar 4 01:00:53.228776 containerd[1479]: time="2026-03-04T01:00:53.228347329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 4 01:00:53.239485 containerd[1479]: time="2026-03-04T01:00:53.239349442Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 4 01:00:54.598321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 4 01:00:54.651846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:00:56.804267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:00:56.915244 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:00:58.969794 kubelet[1961]: E0304 01:00:58.965763 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:00:59.003228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:00:59.004258 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:00:59.006487 systemd[1]: kubelet.service: Consumed 4.006s CPU time. 
Mar 4 01:01:05.375781 containerd[1479]: time="2026-03-04T01:01:05.374548565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:05.395217 containerd[1479]: time="2026-03-04T01:01:05.389120716Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 4 01:01:05.405942 containerd[1479]: time="2026-03-04T01:01:05.405472852Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:05.427743 containerd[1479]: time="2026-03-04T01:01:05.427047412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:05.435739 containerd[1479]: time="2026-03-04T01:01:05.434207471Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 12.194478849s" Mar 4 01:01:05.435739 containerd[1479]: time="2026-03-04T01:01:05.435747626Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 4 01:01:05.440818 containerd[1479]: time="2026-03-04T01:01:05.440696309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 4 01:01:09.109909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Mar 4 01:01:09.149695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:11.478937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:01:11.536829 (kubelet)[1983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:01:13.466534 kubelet[1983]: E0304 01:01:13.465854 1983 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:01:13.479898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:01:13.519944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:01:13.522370 systemd[1]: kubelet.service: Consumed 3.340s CPU time. 
Mar 4 01:01:13.723482 containerd[1479]: time="2026-03-04T01:01:13.720471488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:13.753715 containerd[1479]: time="2026-03-04T01:01:13.733148279Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 4 01:01:13.753715 containerd[1479]: time="2026-03-04T01:01:13.752490733Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:13.878191 containerd[1479]: time="2026-03-04T01:01:13.877506991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:13.901860 containerd[1479]: time="2026-03-04T01:01:13.900818680Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 8.460074773s" Mar 4 01:01:13.902057 containerd[1479]: time="2026-03-04T01:01:13.901908949Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 4 01:01:13.925436 containerd[1479]: time="2026-03-04T01:01:13.920457003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 4 01:01:20.153083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819227003.mount: Deactivated successfully. 
Mar 4 01:01:21.327182 containerd[1479]: time="2026-03-04T01:01:21.326979851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:21.330084 containerd[1479]: time="2026-03-04T01:01:21.329342996Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 4 01:01:21.332397 containerd[1479]: time="2026-03-04T01:01:21.332222678Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:21.338061 containerd[1479]: time="2026-03-04T01:01:21.337880504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:21.340774 containerd[1479]: time="2026-03-04T01:01:21.339098023Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 7.418172238s" Mar 4 01:01:21.340774 containerd[1479]: time="2026-03-04T01:01:21.339246940Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 4 01:01:21.345067 containerd[1479]: time="2026-03-04T01:01:21.344394908Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 4 01:01:22.019433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871102203.mount: Deactivated successfully. Mar 4 01:01:23.595154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Mar 4 01:01:23.611105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:24.167023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:01:26.528519 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:01:27.938896 kubelet[2060]: E0304 01:01:27.938388 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:01:27.948032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:01:27.948910 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:01:27.950555 systemd[1]: kubelet.service: Consumed 4.371s CPU time. 
Mar 4 01:01:28.820967 containerd[1479]: time="2026-03-04T01:01:28.820521339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:28.823109 containerd[1479]: time="2026-03-04T01:01:28.822509887Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 4 01:01:28.824820 containerd[1479]: time="2026-03-04T01:01:28.824529801Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:28.831151 containerd[1479]: time="2026-03-04T01:01:28.831032269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:28.836278 containerd[1479]: time="2026-03-04T01:01:28.832281125Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 7.487842013s" Mar 4 01:01:28.836278 containerd[1479]: time="2026-03-04T01:01:28.832422322Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 4 01:01:28.838558 containerd[1479]: time="2026-03-04T01:01:28.837911031Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 4 01:01:29.513775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount725451321.mount: Deactivated successfully. 
Mar 4 01:01:29.531173 containerd[1479]: time="2026-03-04T01:01:29.530823907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:29.534257 containerd[1479]: time="2026-03-04T01:01:29.533542878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 4 01:01:29.536947 containerd[1479]: time="2026-03-04T01:01:29.536733692Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:29.544568 containerd[1479]: time="2026-03-04T01:01:29.544430154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:29.546017 containerd[1479]: time="2026-03-04T01:01:29.545866242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 707.840254ms" Mar 4 01:01:29.546017 containerd[1479]: time="2026-03-04T01:01:29.545982974Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 4 01:01:29.549201 containerd[1479]: time="2026-03-04T01:01:29.548977776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 4 01:01:30.635836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819639926.mount: Deactivated successfully. 
Mar 4 01:01:32.785555 containerd[1479]: time="2026-03-04T01:01:32.785276667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:32.787002 containerd[1479]: time="2026-03-04T01:01:32.786947263Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 4 01:01:32.789387 containerd[1479]: time="2026-03-04T01:01:32.789335404Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:32.799370 containerd[1479]: time="2026-03-04T01:01:32.798844692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:32.800335 containerd[1479]: time="2026-03-04T01:01:32.800139600Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 3.251044241s" Mar 4 01:01:32.800335 containerd[1479]: time="2026-03-04T01:01:32.800271650Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 4 01:01:36.198429 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:01:36.199100 systemd[1]: kubelet.service: Consumed 4.371s CPU time. Mar 4 01:01:36.210343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:36.258822 systemd[1]: Reloading requested from client PID 2165 ('systemctl') (unit session-9.scope)... 
Mar 4 01:01:36.258904 systemd[1]: Reloading... Mar 4 01:01:36.415840 zram_generator::config[2202]: No configuration found. Mar 4 01:01:36.616255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:01:36.757254 systemd[1]: Reloading finished in 497 ms. Mar 4 01:01:36.841284 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 4 01:01:36.841468 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 4 01:01:36.842105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:01:36.864767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:37.169091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:01:37.169776 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:01:37.360098 kubelet[2251]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 01:01:37.360098 kubelet[2251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 4 01:01:37.361291 kubelet[2251]: I0304 01:01:37.360880 2251 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 01:01:37.870171 kubelet[2251]: I0304 01:01:37.870016 2251 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 4 01:01:37.870171 kubelet[2251]: I0304 01:01:37.870127 2251 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:01:37.870171 kubelet[2251]: I0304 01:01:37.870171 2251 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 4 01:01:37.870171 kubelet[2251]: I0304 01:01:37.870186 2251 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 4 01:01:37.870976 kubelet[2251]: I0304 01:01:37.870456 2251 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 01:01:37.987819 kubelet[2251]: E0304 01:01:37.987408 2251 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:01:37.989481 kubelet[2251]: I0304 01:01:37.989204 2251 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:01:37.998105 kubelet[2251]: E0304 01:01:37.998065 2251 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:01:37.999211 kubelet[2251]: I0304 01:01:37.998134 2251 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 4 01:01:38.012145 kubelet[2251]: I0304 01:01:38.011913 2251 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 4 01:01:38.014030 kubelet[2251]: I0304 01:01:38.013796 2251 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:01:38.014030 kubelet[2251]: I0304 01:01:38.013889 2251 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 
01:01:38.014474 kubelet[2251]: I0304 01:01:38.014037 2251 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 01:01:38.014474 kubelet[2251]: I0304 01:01:38.014046 2251 container_manager_linux.go:306] "Creating device plugin manager" Mar 4 01:01:38.014474 kubelet[2251]: I0304 01:01:38.014145 2251 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 4 01:01:38.020189 kubelet[2251]: I0304 01:01:38.019971 2251 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:01:38.020806 kubelet[2251]: I0304 01:01:38.020380 2251 kubelet.go:475] "Attempting to sync node with API server" Mar 4 01:01:38.020806 kubelet[2251]: I0304 01:01:38.020751 2251 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:01:38.020806 kubelet[2251]: I0304 01:01:38.020794 2251 kubelet.go:387] "Adding apiserver pod source" Mar 4 01:01:38.020933 kubelet[2251]: I0304 01:01:38.020817 2251 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:01:38.028800 kubelet[2251]: E0304 01:01:38.026012 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 01:01:38.030788 kubelet[2251]: I0304 01:01:38.030760 2251 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:01:38.031982 kubelet[2251]: E0304 01:01:38.030937 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Mar 4 01:01:38.035924 kubelet[2251]: I0304 01:01:38.035378 2251 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:01:38.035924 kubelet[2251]: I0304 01:01:38.035785 2251 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 4 01:01:38.036401 kubelet[2251]: W0304 01:01:38.036198 2251 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 4 01:01:38.058940 kubelet[2251]: I0304 01:01:38.058257 2251 server.go:1262] "Started kubelet" Mar 4 01:01:38.073878 kubelet[2251]: E0304 01:01:38.065887 2251 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18997daa245cf714 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:01:38.054985492 +0000 UTC m=+0.854123045,LastTimestamp:2026-03-04 01:01:38.054985492 +0000 UTC m=+0.854123045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 4 01:01:38.075301 kubelet[2251]: I0304 01:01:38.074239 2251 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:01:38.087025 kubelet[2251]: I0304 01:01:38.086760 2251 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:01:38.090706 kubelet[2251]: I0304 01:01:38.087864 2251 server_v1.go:49] 
"podresources" method="list" useActivePods=true Mar 4 01:01:38.090706 kubelet[2251]: I0304 01:01:38.089044 2251 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:01:38.090706 kubelet[2251]: I0304 01:01:38.089839 2251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 01:01:38.091328 kubelet[2251]: I0304 01:01:38.091203 2251 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:01:38.095024 kubelet[2251]: I0304 01:01:38.094811 2251 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 4 01:01:38.098182 kubelet[2251]: E0304 01:01:38.098031 2251 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:01:38.099487 kubelet[2251]: I0304 01:01:38.099295 2251 server.go:310] "Adding debug handlers to kubelet server" Mar 4 01:01:38.101089 kubelet[2251]: I0304 01:01:38.100944 2251 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 4 01:01:38.103487 kubelet[2251]: I0304 01:01:38.103152 2251 reconciler.go:29] "Reconciler: start to sync state" Mar 4 01:01:38.104072 kubelet[2251]: E0304 01:01:38.103979 2251 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 01:01:38.108221 kubelet[2251]: E0304 01:01:38.108020 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="200ms" Mar 4 01:01:38.108752 kubelet[2251]: E0304 01:01:38.108266 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 01:01:38.108999 kubelet[2251]: I0304 01:01:38.108774 2251 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:01:38.109482 kubelet[2251]: I0304 01:01:38.109364 2251 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:01:38.114223 kubelet[2251]: I0304 01:01:38.114114 2251 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:01:38.178824 kubelet[2251]: I0304 01:01:38.178267 2251 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 01:01:38.178824 kubelet[2251]: I0304 01:01:38.178361 2251 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 01:01:38.178824 kubelet[2251]: I0304 01:01:38.178379 2251 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:01:38.187978 kubelet[2251]: I0304 01:01:38.187909 2251 policy_none.go:49] "None policy: Start" Mar 4 01:01:38.187978 kubelet[2251]: I0304 01:01:38.187937 2251 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 4 01:01:38.187978 kubelet[2251]: I0304 01:01:38.187956 2251 
state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 4 01:01:38.190943 kubelet[2251]: I0304 01:01:38.190842 2251 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 4 01:01:38.198358 kubelet[2251]: I0304 01:01:38.198103 2251 policy_none.go:47] "Start" Mar 4 01:01:38.198448 kubelet[2251]: E0304 01:01:38.198322 2251 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:01:38.199059 kubelet[2251]: I0304 01:01:38.198816 2251 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 4 01:01:38.199059 kubelet[2251]: I0304 01:01:38.198937 2251 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 4 01:01:38.199931 kubelet[2251]: I0304 01:01:38.199399 2251 kubelet.go:2428] "Starting kubelet main sync loop" Mar 4 01:01:38.199931 kubelet[2251]: E0304 01:01:38.199822 2251 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:01:38.202833 kubelet[2251]: E0304 01:01:38.202420 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 01:01:38.216744 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 4 01:01:38.243450 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 4 01:01:38.255762 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 4 01:01:38.267136 kubelet[2251]: E0304 01:01:38.266809 2251 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:01:38.267364 kubelet[2251]: I0304 01:01:38.267349 2251 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 01:01:38.267396 kubelet[2251]: I0304 01:01:38.267364 2251 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:01:38.268046 kubelet[2251]: I0304 01:01:38.267928 2251 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 01:01:38.273462 kubelet[2251]: E0304 01:01:38.273193 2251 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 4 01:01:38.273462 kubelet[2251]: E0304 01:01:38.273243 2251 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 4 01:01:38.306204 kubelet[2251]: I0304 01:01:38.305473 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d566d676fef92563fcc5eaa542d49d25-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d566d676fef92563fcc5eaa542d49d25\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:38.306204 kubelet[2251]: I0304 01:01:38.305802 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d566d676fef92563fcc5eaa542d49d25-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d566d676fef92563fcc5eaa542d49d25\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:38.306204 kubelet[2251]: I0304 01:01:38.305834 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d566d676fef92563fcc5eaa542d49d25-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d566d676fef92563fcc5eaa542d49d25\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:38.310901 kubelet[2251]: E0304 01:01:38.310042 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" Mar 4 01:01:38.343928 systemd[1]: Created slice kubepods-burstable-podd566d676fef92563fcc5eaa542d49d25.slice - libcontainer container kubepods-burstable-podd566d676fef92563fcc5eaa542d49d25.slice. Mar 4 01:01:38.365858 kubelet[2251]: E0304 01:01:38.365324 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:38.371362 kubelet[2251]: I0304 01:01:38.371322 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:01:38.372310 kubelet[2251]: E0304 01:01:38.372267 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Mar 4 01:01:38.373348 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 4 01:01:38.390084 kubelet[2251]: E0304 01:01:38.389943 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:38.401376 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
Mar 4 01:01:38.406912 kubelet[2251]: I0304 01:01:38.406463 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:38.406912 kubelet[2251]: I0304 01:01:38.406793 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:38.406912 kubelet[2251]: I0304 01:01:38.406823 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:38.406912 kubelet[2251]: I0304 01:01:38.406881 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:38.406912 kubelet[2251]: I0304 01:01:38.406901 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:38.407292 kubelet[2251]: I0304 01:01:38.406924 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 4 01:01:38.407292 kubelet[2251]: E0304 01:01:38.407136 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:38.576916 kubelet[2251]: I0304 01:01:38.576206 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:01:38.577053 kubelet[2251]: E0304 01:01:38.577021 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Mar 4 01:01:38.678825 kubelet[2251]: E0304 01:01:38.677160 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:38.682831 containerd[1479]: time="2026-03-04T01:01:38.679429770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d566d676fef92563fcc5eaa542d49d25,Namespace:kube-system,Attempt:0,}" Mar 4 01:01:38.706864 kubelet[2251]: E0304 01:01:38.706137 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:38.708195 containerd[1479]: time="2026-03-04T01:01:38.707770504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 4 
01:01:38.712158 kubelet[2251]: E0304 01:01:38.711998 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" Mar 4 01:01:38.718298 kubelet[2251]: E0304 01:01:38.718274 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:38.721354 containerd[1479]: time="2026-03-04T01:01:38.720894671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 4 01:01:38.986190 kubelet[2251]: I0304 01:01:38.986021 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:01:38.986504 kubelet[2251]: E0304 01:01:38.986403 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Mar 4 01:01:39.040184 kubelet[2251]: E0304 01:01:39.039370 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 01:01:39.077999 kubelet[2251]: E0304 01:01:39.076047 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 01:01:39.273740 kubelet[2251]: E0304 
01:01:39.272976 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 01:01:39.353029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236520011.mount: Deactivated successfully. Mar 4 01:01:39.376782 containerd[1479]: time="2026-03-04T01:01:39.376250806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:01:39.387267 containerd[1479]: time="2026-03-04T01:01:39.387115579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 4 01:01:39.390227 containerd[1479]: time="2026-03-04T01:01:39.389867954Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:01:39.392973 containerd[1479]: time="2026-03-04T01:01:39.392459993Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:01:39.394196 containerd[1479]: time="2026-03-04T01:01:39.394078823Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:01:39.396896 containerd[1479]: time="2026-03-04T01:01:39.396783875Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:01:39.398130 containerd[1479]: 
time="2026-03-04T01:01:39.398086239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:01:39.406039 containerd[1479]: time="2026-03-04T01:01:39.405766650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:01:39.414731 containerd[1479]: time="2026-03-04T01:01:39.414385879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 706.52763ms" Mar 4 01:01:39.417794 containerd[1479]: time="2026-03-04T01:01:39.417301858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 696.267583ms" Mar 4 01:01:39.421782 containerd[1479]: time="2026-03-04T01:01:39.420366085Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 737.56552ms" Mar 4 01:01:39.515345 kubelet[2251]: E0304 01:01:39.514303 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection 
refused" interval="1.6s" Mar 4 01:01:39.529242 kubelet[2251]: E0304 01:01:39.528975 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 01:01:39.700921 containerd[1479]: time="2026-03-04T01:01:39.700227925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:01:39.700921 containerd[1479]: time="2026-03-04T01:01:39.700378780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:01:39.700921 containerd[1479]: time="2026-03-04T01:01:39.700401854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:39.700921 containerd[1479]: time="2026-03-04T01:01:39.700514547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:39.704246 containerd[1479]: time="2026-03-04T01:01:39.702938005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:01:39.704246 containerd[1479]: time="2026-03-04T01:01:39.703001734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:01:39.704246 containerd[1479]: time="2026-03-04T01:01:39.703023455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:39.704460 containerd[1479]: time="2026-03-04T01:01:39.704306312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:01:39.705275 containerd[1479]: time="2026-03-04T01:01:39.704837924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:01:39.705344 containerd[1479]: time="2026-03-04T01:01:39.705251654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:39.706081 containerd[1479]: time="2026-03-04T01:01:39.705363407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:39.717017 containerd[1479]: time="2026-03-04T01:01:39.716873937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:01:39.772909 systemd[1]: Started cri-containerd-2c2d973f7febc2c8f67c9ff7d35565117b44a5df81fcfbe3dd41d5e497252035.scope - libcontainer container 2c2d973f7febc2c8f67c9ff7d35565117b44a5df81fcfbe3dd41d5e497252035. Mar 4 01:01:39.777112 systemd[1]: Started cri-containerd-65aa4497245299dbcb9229c4b11a6ce5b9081524b4996ac5b1f5a132a7aa9917.scope - libcontainer container 65aa4497245299dbcb9229c4b11a6ce5b9081524b4996ac5b1f5a132a7aa9917. Mar 4 01:01:39.790108 systemd[1]: Started cri-containerd-f5c1e6efceee5d33ab4171eb70852ac0ea05c2f61170efcabce7b5326a1a7bb9.scope - libcontainer container f5c1e6efceee5d33ab4171eb70852ac0ea05c2f61170efcabce7b5326a1a7bb9. 
Mar 4 01:01:39.792278 kubelet[2251]: I0304 01:01:39.792113 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:01:39.794081 kubelet[2251]: E0304 01:01:39.792431 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Mar 4 01:01:39.901740 containerd[1479]: time="2026-03-04T01:01:39.900819192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d566d676fef92563fcc5eaa542d49d25,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c2d973f7febc2c8f67c9ff7d35565117b44a5df81fcfbe3dd41d5e497252035\"" Mar 4 01:01:39.903075 kubelet[2251]: E0304 01:01:39.902804 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:39.924905 containerd[1479]: time="2026-03-04T01:01:39.922745984Z" level=info msg="CreateContainer within sandbox \"2c2d973f7febc2c8f67c9ff7d35565117b44a5df81fcfbe3dd41d5e497252035\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 01:01:39.924905 containerd[1479]: time="2026-03-04T01:01:39.923133012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"65aa4497245299dbcb9229c4b11a6ce5b9081524b4996ac5b1f5a132a7aa9917\"" Mar 4 01:01:39.931446 kubelet[2251]: E0304 01:01:39.931373 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:39.943157 containerd[1479]: time="2026-03-04T01:01:39.943098923Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5c1e6efceee5d33ab4171eb70852ac0ea05c2f61170efcabce7b5326a1a7bb9\"" Mar 4 01:01:39.945135 containerd[1479]: time="2026-03-04T01:01:39.944868709Z" level=info msg="CreateContainer within sandbox \"65aa4497245299dbcb9229c4b11a6ce5b9081524b4996ac5b1f5a132a7aa9917\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 01:01:39.953100 kubelet[2251]: E0304 01:01:39.952453 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:39.967306 containerd[1479]: time="2026-03-04T01:01:39.966912170Z" level=info msg="CreateContainer within sandbox \"f5c1e6efceee5d33ab4171eb70852ac0ea05c2f61170efcabce7b5326a1a7bb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 01:01:40.001978 containerd[1479]: time="2026-03-04T01:01:40.001881399Z" level=info msg="CreateContainer within sandbox \"2c2d973f7febc2c8f67c9ff7d35565117b44a5df81fcfbe3dd41d5e497252035\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"63bb294e2938b3b6288ca778dfa9bd7c1214d248910fbe23973aeed94e6a0b38\"" Mar 4 01:01:40.003541 containerd[1479]: time="2026-03-04T01:01:40.003408498Z" level=info msg="StartContainer for \"63bb294e2938b3b6288ca778dfa9bd7c1214d248910fbe23973aeed94e6a0b38\"" Mar 4 01:01:40.014771 containerd[1479]: time="2026-03-04T01:01:40.013923868Z" level=info msg="CreateContainer within sandbox \"65aa4497245299dbcb9229c4b11a6ce5b9081524b4996ac5b1f5a132a7aa9917\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1351354d5135f562282362b4fc3f5d6f6dfe8a6866efc960cea00905946b5f1d\"" Mar 4 01:01:40.017065 containerd[1479]: time="2026-03-04T01:01:40.015749137Z" level=info msg="StartContainer for 
\"1351354d5135f562282362b4fc3f5d6f6dfe8a6866efc960cea00905946b5f1d\"" Mar 4 01:01:40.038961 containerd[1479]: time="2026-03-04T01:01:40.038385850Z" level=info msg="CreateContainer within sandbox \"f5c1e6efceee5d33ab4171eb70852ac0ea05c2f61170efcabce7b5326a1a7bb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ff8387b0606ef376bc1d1e9afb785c6dbf8578d044f124abd25236436992dcf2\"" Mar 4 01:01:40.041890 containerd[1479]: time="2026-03-04T01:01:40.041854068Z" level=info msg="StartContainer for \"ff8387b0606ef376bc1d1e9afb785c6dbf8578d044f124abd25236436992dcf2\"" Mar 4 01:01:40.102059 systemd[1]: Started cri-containerd-63bb294e2938b3b6288ca778dfa9bd7c1214d248910fbe23973aeed94e6a0b38.scope - libcontainer container 63bb294e2938b3b6288ca778dfa9bd7c1214d248910fbe23973aeed94e6a0b38. Mar 4 01:01:40.121336 systemd[1]: Started cri-containerd-1351354d5135f562282362b4fc3f5d6f6dfe8a6866efc960cea00905946b5f1d.scope - libcontainer container 1351354d5135f562282362b4fc3f5d6f6dfe8a6866efc960cea00905946b5f1d. Mar 4 01:01:40.140712 kubelet[2251]: E0304 01:01:40.140314 2251 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:01:40.145367 systemd[1]: Started cri-containerd-ff8387b0606ef376bc1d1e9afb785c6dbf8578d044f124abd25236436992dcf2.scope - libcontainer container ff8387b0606ef376bc1d1e9afb785c6dbf8578d044f124abd25236436992dcf2. 
Mar 4 01:01:40.259262 containerd[1479]: time="2026-03-04T01:01:40.259213288Z" level=info msg="StartContainer for \"63bb294e2938b3b6288ca778dfa9bd7c1214d248910fbe23973aeed94e6a0b38\" returns successfully" Mar 4 01:01:40.291751 containerd[1479]: time="2026-03-04T01:01:40.291264473Z" level=info msg="StartContainer for \"ff8387b0606ef376bc1d1e9afb785c6dbf8578d044f124abd25236436992dcf2\" returns successfully" Mar 4 01:01:40.293539 containerd[1479]: time="2026-03-04T01:01:40.292256198Z" level=info msg="StartContainer for \"1351354d5135f562282362b4fc3f5d6f6dfe8a6866efc960cea00905946b5f1d\" returns successfully" Mar 4 01:01:41.287208 kubelet[2251]: E0304 01:01:41.286156 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:41.287208 kubelet[2251]: E0304 01:01:41.286302 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:41.287208 kubelet[2251]: E0304 01:01:41.286543 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:41.287208 kubelet[2251]: E0304 01:01:41.286850 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:41.294078 kubelet[2251]: E0304 01:01:41.294057 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:41.297057 kubelet[2251]: E0304 01:01:41.297034 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:41.402539 kubelet[2251]: 
I0304 01:01:41.402332 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:01:42.301841 kubelet[2251]: E0304 01:01:42.301162 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:42.301841 kubelet[2251]: E0304 01:01:42.301395 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:42.301841 kubelet[2251]: E0304 01:01:42.301441 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:42.301841 kubelet[2251]: E0304 01:01:42.301518 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:42.308893 kubelet[2251]: E0304 01:01:42.307825 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:42.308893 kubelet[2251]: E0304 01:01:42.307987 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:43.309789 kubelet[2251]: E0304 01:01:43.309517 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:43.310348 kubelet[2251]: E0304 01:01:43.309895 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:43.310348 kubelet[2251]: E0304 01:01:43.310122 2251 kubelet.go:3216] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:01:43.310348 kubelet[2251]: E0304 01:01:43.310204 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:43.563467 kubelet[2251]: E0304 01:01:43.562150 2251 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 4 01:01:43.676305 kubelet[2251]: I0304 01:01:43.675160 2251 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 4 01:01:43.709119 kubelet[2251]: I0304 01:01:43.708260 2251 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:43.748998 kubelet[2251]: E0304 01:01:43.748078 2251 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:43.748998 kubelet[2251]: I0304 01:01:43.748199 2251 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:43.759179 kubelet[2251]: E0304 01:01:43.756362 2251 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:43.759179 kubelet[2251]: I0304 01:01:43.756393 2251 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:01:43.783885 kubelet[2251]: E0304 01:01:43.783079 2251 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" Mar 4 01:01:44.032207 kubelet[2251]: I0304 01:01:44.031541 2251 apiserver.go:52] "Watching apiserver" Mar 4 01:01:44.102511 kubelet[2251]: I0304 01:01:44.102260 2251 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 01:01:49.060829 kubelet[2251]: I0304 01:01:49.045799 2251 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:49.411498 kubelet[2251]: I0304 01:01:49.407185 2251 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:01:49.939543 kubelet[2251]: E0304 01:01:49.938818 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:49.951982 kubelet[2251]: E0304 01:01:49.950423 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:50.922275 kubelet[2251]: E0304 01:01:50.921091 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:50.922275 kubelet[2251]: E0304 01:01:50.922149 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:51.256924 kubelet[2251]: I0304 01:01:51.254354 2251 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:51.329111 kubelet[2251]: E0304 01:01:51.326247 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 
01:01:51.420189 kubelet[2251]: I0304 01:01:51.419879 2251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.419559915 podStartE2EDuration="2.419559915s" podCreationTimestamp="2026-03-04 01:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:01:51.388189976 +0000 UTC m=+14.187327548" watchObservedRunningTime="2026-03-04 01:01:51.419559915 +0000 UTC m=+14.218697447" Mar 4 01:01:51.941548 kubelet[2251]: E0304 01:01:51.939001 2251 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:52.302372 kubelet[2251]: I0304 01:01:52.279443 2251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.279424681 podStartE2EDuration="3.279424681s" podCreationTimestamp="2026-03-04 01:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:01:51.420550582 +0000 UTC m=+14.219688124" watchObservedRunningTime="2026-03-04 01:01:52.279424681 +0000 UTC m=+15.078562233" Mar 4 01:01:52.302372 kubelet[2251]: I0304 01:01:52.279553 2251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.27954646 podStartE2EDuration="1.27954646s" podCreationTimestamp="2026-03-04 01:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:01:52.275262846 +0000 UTC m=+15.074400388" watchObservedRunningTime="2026-03-04 01:01:52.27954646 +0000 UTC m=+15.078684002" Mar 4 01:01:54.400190 systemd[1]: Reloading requested from client PID 2547 
('systemctl') (unit session-9.scope)... Mar 4 01:01:54.400219 systemd[1]: Reloading... Mar 4 01:01:54.658102 zram_generator::config[2586]: No configuration found. Mar 4 01:01:55.048529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:01:55.220216 systemd[1]: Reloading finished in 818 ms. Mar 4 01:01:55.316135 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:55.346475 systemd[1]: kubelet.service: Deactivated successfully. Mar 4 01:01:55.347732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:01:55.347951 systemd[1]: kubelet.service: Consumed 6.101s CPU time, 130.1M memory peak, 0B memory swap peak. Mar 4 01:01:55.368781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:55.887231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:01:55.915354 (kubelet)[2631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:01:56.154324 kubelet[2631]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 01:01:56.154324 kubelet[2631]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 4 01:01:56.154324 kubelet[2631]: I0304 01:01:56.154220 2631 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 01:01:56.176247 kubelet[2631]: I0304 01:01:56.176018 2631 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 4 01:01:56.176247 kubelet[2631]: I0304 01:01:56.176128 2631 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:01:56.176247 kubelet[2631]: I0304 01:01:56.176172 2631 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 4 01:01:56.176247 kubelet[2631]: I0304 01:01:56.176183 2631 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 4 01:01:56.176767 kubelet[2631]: I0304 01:01:56.176512 2631 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 01:01:56.179506 kubelet[2631]: I0304 01:01:56.179455 2631 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 4 01:01:56.189780 kubelet[2631]: I0304 01:01:56.189718 2631 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:01:56.207802 kubelet[2631]: E0304 01:01:56.205249 2631 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:01:56.207802 kubelet[2631]: I0304 01:01:56.205328 2631 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 4 01:01:56.233799 kubelet[2631]: I0304 01:01:56.233359 2631 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 4 01:01:56.234067 kubelet[2631]: I0304 01:01:56.233932 2631 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:01:56.236363 kubelet[2631]: I0304 01:01:56.233975 2631 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 01:01:56.236363 kubelet[2631]: I0304 01:01:56.234255 2631 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 01:01:56.236363 
kubelet[2631]: I0304 01:01:56.234274 2631 container_manager_linux.go:306] "Creating device plugin manager" Mar 4 01:01:56.236363 kubelet[2631]: I0304 01:01:56.234307 2631 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 4 01:01:56.236363 kubelet[2631]: I0304 01:01:56.234532 2631 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:01:56.237091 kubelet[2631]: I0304 01:01:56.234976 2631 kubelet.go:475] "Attempting to sync node with API server" Mar 4 01:01:56.237091 kubelet[2631]: I0304 01:01:56.234991 2631 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:01:56.237091 kubelet[2631]: I0304 01:01:56.235021 2631 kubelet.go:387] "Adding apiserver pod source" Mar 4 01:01:56.237091 kubelet[2631]: I0304 01:01:56.235035 2631 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:01:56.248263 kubelet[2631]: I0304 01:01:56.248031 2631 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:01:56.249766 kubelet[2631]: I0304 01:01:56.249227 2631 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:01:56.249766 kubelet[2631]: I0304 01:01:56.249337 2631 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 4 01:01:56.263316 kubelet[2631]: I0304 01:01:56.262761 2631 server.go:1262] "Started kubelet" Mar 4 01:01:56.263316 kubelet[2631]: I0304 01:01:56.263160 2631 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:01:56.263484 kubelet[2631]: I0304 01:01:56.263314 2631 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:01:56.263484 kubelet[2631]: I0304 01:01:56.263359 2631 server_v1.go:49] "podresources" 
method="list" useActivePods=true Mar 4 01:01:56.264216 kubelet[2631]: I0304 01:01:56.263800 2631 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:01:56.289445 kubelet[2631]: I0304 01:01:56.280290 2631 server.go:310] "Adding debug handlers to kubelet server" Mar 4 01:01:56.289445 kubelet[2631]: I0304 01:01:56.279345 2631 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 01:01:56.291367 kubelet[2631]: E0304 01:01:56.291259 2631 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 01:01:56.294657 kubelet[2631]: I0304 01:01:56.291505 2631 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:01:56.302732 kubelet[2631]: I0304 01:01:56.297552 2631 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 4 01:01:56.302732 kubelet[2631]: I0304 01:01:56.298103 2631 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 4 01:01:56.302732 kubelet[2631]: I0304 01:01:56.298265 2631 reconciler.go:29] "Reconciler: start to sync state" Mar 4 01:01:56.310274 kubelet[2631]: I0304 01:01:56.308997 2631 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:01:56.310274 kubelet[2631]: I0304 01:01:56.309199 2631 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:01:56.340306 kubelet[2631]: I0304 01:01:56.339428 2631 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:01:56.390463 sudo[2657]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 4 01:01:56.391440 sudo[2657]: pam_unix(sudo:session): 
session opened for user root(uid=0) by core(uid=0) Mar 4 01:01:56.419933 kubelet[2631]: I0304 01:01:56.419190 2631 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 4 01:01:56.443526 kubelet[2631]: I0304 01:01:56.443395 2631 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 4 01:01:56.443526 kubelet[2631]: I0304 01:01:56.443516 2631 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 4 01:01:56.443965 kubelet[2631]: I0304 01:01:56.443554 2631 kubelet.go:2428] "Starting kubelet main sync loop" Mar 4 01:01:56.443965 kubelet[2631]: E0304 01:01:56.443796 2631 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:01:56.542554 kubelet[2631]: I0304 01:01:56.541728 2631 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 01:01:56.542554 kubelet[2631]: I0304 01:01:56.541752 2631 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 01:01:56.542554 kubelet[2631]: I0304 01:01:56.541777 2631 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:01:56.542554 kubelet[2631]: I0304 01:01:56.542480 2631 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 4 01:01:56.542554 kubelet[2631]: I0304 01:01:56.542499 2631 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 4 01:01:56.542554 kubelet[2631]: I0304 01:01:56.542526 2631 policy_none.go:49] "None policy: Start" Mar 4 01:01:56.542554 kubelet[2631]: I0304 01:01:56.542539 2631 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 4 01:01:56.542554 kubelet[2631]: I0304 01:01:56.542559 2631 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 4 01:01:56.543186 kubelet[2631]: I0304 01:01:56.542923 2631 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 4 01:01:56.543186 kubelet[2631]: I0304 
01:01:56.542937 2631 policy_none.go:47] "Start" Mar 4 01:01:56.560288 kubelet[2631]: E0304 01:01:56.559253 2631 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:01:56.560288 kubelet[2631]: I0304 01:01:56.559509 2631 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 01:01:56.560288 kubelet[2631]: I0304 01:01:56.559528 2631 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:01:56.565560 kubelet[2631]: I0304 01:01:56.565538 2631 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 01:01:56.575728 kubelet[2631]: I0304 01:01:56.575390 2631 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:01:56.661040 kubelet[2631]: I0304 01:01:56.605486 2631 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:56.677267 kubelet[2631]: I0304 01:01:56.668495 2631 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:56.713317 kubelet[2631]: E0304 01:01:56.702376 2631 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 4 01:01:56.729483 kubelet[2631]: E0304 01:01:56.717958 2631 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 4 01:01:56.735312 kubelet[2631]: I0304 01:01:56.734515 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:56.735312 kubelet[2631]: I0304 01:01:56.734811 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d566d676fef92563fcc5eaa542d49d25-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d566d676fef92563fcc5eaa542d49d25\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:56.735312 kubelet[2631]: I0304 01:01:56.734938 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:56.735312 kubelet[2631]: I0304 01:01:56.734966 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:56.735312 kubelet[2631]: I0304 01:01:56.735174 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:56.873612 kubelet[2631]: I0304 01:01:56.735740 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 4 01:01:56.873612 kubelet[2631]: I0304 01:01:56.736017 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d566d676fef92563fcc5eaa542d49d25-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d566d676fef92563fcc5eaa542d49d25\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:56.873612 kubelet[2631]: I0304 01:01:56.736061 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d566d676fef92563fcc5eaa542d49d25-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d566d676fef92563fcc5eaa542d49d25\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:56.873612 kubelet[2631]: I0304 01:01:56.736087 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:56.942125 kubelet[2631]: I0304 01:01:56.938912 2631 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 
01:01:57.023164 kubelet[2631]: E0304 01:01:57.022166 2631 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:01:57.023164 kubelet[2631]: E0304 01:01:57.022499 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:57.052540 kubelet[2631]: E0304 01:01:57.049123 2631 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:57.054220 kubelet[2631]: E0304 01:01:57.053064 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:57.054220 kubelet[2631]: E0304 01:01:57.053265 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:57.126794 kubelet[2631]: I0304 01:01:57.126413 2631 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 4 01:01:57.126794 kubelet[2631]: I0304 01:01:57.126744 2631 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 4 01:01:57.239357 kubelet[2631]: I0304 01:01:57.238333 2631 apiserver.go:52] "Watching apiserver" Mar 4 01:01:57.300309 kubelet[2631]: I0304 01:01:57.298974 2631 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 01:01:57.503794 kubelet[2631]: I0304 01:01:57.501417 2631 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:57.506197 kubelet[2631]: E0304 01:01:57.505767 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:57.507402 kubelet[2631]: E0304 01:01:57.506397 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:57.582360 kubelet[2631]: E0304 01:01:57.574109 2631 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 4 01:01:57.588308 kubelet[2631]: E0304 01:01:57.588148 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:58.317315 sudo[2657]: pam_unix(sudo:session): session closed for user root Mar 4 01:01:58.515240 kubelet[2631]: E0304 01:01:58.514040 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:01:59.643217 kubelet[2631]: E0304 01:01:59.598813 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:00.045040 kubelet[2631]: I0304 01:02:00.029262 2631 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 01:02:00.046423 containerd[1479]: time="2026-03-04T01:02:00.045979585Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 4 01:02:00.047428 kubelet[2631]: I0304 01:02:00.047203 2631 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 01:02:00.618745 kubelet[2631]: E0304 01:02:00.618089 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:02.748859 kubelet[2631]: E0304 01:02:02.748081 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:02.749447 systemd[1]: Created slice kubepods-burstable-podf4622411_eb40_43f9_8c9a_0104a632c61b.slice - libcontainer container kubepods-burstable-podf4622411_eb40_43f9_8c9a_0104a632c61b.slice. Mar 4 01:02:02.772847 systemd[1]: Created slice kubepods-besteffort-poda115f4bb_11ec_4092_81a5_41118b532269.slice - libcontainer container kubepods-besteffort-poda115f4bb_11ec_4092_81a5_41118b532269.slice. 
Mar 4 01:02:02.781120 kubelet[2631]: I0304 01:02:02.777721 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a115f4bb-11ec-4092-81a5-41118b532269-lib-modules\") pod \"kube-proxy-kvk9v\" (UID: \"a115f4bb-11ec-4092-81a5-41118b532269\") " pod="kube-system/kube-proxy-kvk9v" Mar 4 01:02:02.781120 kubelet[2631]: I0304 01:02:02.777768 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5jrw\" (UniqueName: \"kubernetes.io/projected/a115f4bb-11ec-4092-81a5-41118b532269-kube-api-access-l5jrw\") pod \"kube-proxy-kvk9v\" (UID: \"a115f4bb-11ec-4092-81a5-41118b532269\") " pod="kube-system/kube-proxy-kvk9v" Mar 4 01:02:02.781120 kubelet[2631]: I0304 01:02:02.777800 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-bpf-maps\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781120 kubelet[2631]: I0304 01:02:02.777821 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-etc-cni-netd\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781120 kubelet[2631]: I0304 01:02:02.777840 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-lib-modules\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781120 kubelet[2631]: I0304 01:02:02.777857 2631 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-config-path\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781393 kubelet[2631]: I0304 01:02:02.777881 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4622411-eb40-43f9-8c9a-0104a632c61b-hubble-tls\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781393 kubelet[2631]: I0304 01:02:02.778025 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d559l\" (UniqueName: \"kubernetes.io/projected/f4622411-eb40-43f9-8c9a-0104a632c61b-kube-api-access-d559l\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781393 kubelet[2631]: I0304 01:02:02.778053 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-xtables-lock\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781393 kubelet[2631]: I0304 01:02:02.778074 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4622411-eb40-43f9-8c9a-0104a632c61b-clustermesh-secrets\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781393 kubelet[2631]: I0304 01:02:02.778093 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-host-proc-sys-net\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781393 kubelet[2631]: I0304 01:02:02.778113 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a115f4bb-11ec-4092-81a5-41118b532269-kube-proxy\") pod \"kube-proxy-kvk9v\" (UID: \"a115f4bb-11ec-4092-81a5-41118b532269\") " pod="kube-system/kube-proxy-kvk9v" Mar 4 01:02:02.781866 kubelet[2631]: I0304 01:02:02.778133 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a115f4bb-11ec-4092-81a5-41118b532269-xtables-lock\") pod \"kube-proxy-kvk9v\" (UID: \"a115f4bb-11ec-4092-81a5-41118b532269\") " pod="kube-system/kube-proxy-kvk9v" Mar 4 01:02:02.781866 kubelet[2631]: I0304 01:02:02.778167 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-run\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781866 kubelet[2631]: I0304 01:02:02.778383 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-hostproc\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781866 kubelet[2631]: I0304 01:02:02.778425 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cni-path\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " 
pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781866 kubelet[2631]: I0304 01:02:02.778447 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-host-proc-sys-kernel\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:02.781866 kubelet[2631]: I0304 01:02:02.778486 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-cgroup\") pod \"cilium-p5xrg\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " pod="kube-system/cilium-p5xrg" Mar 4 01:02:03.137100 systemd[1]: Created slice kubepods-besteffort-podf485ac81_9446_4bf0_b8ea_2042137505a1.slice - libcontainer container kubepods-besteffort-podf485ac81_9446_4bf0_b8ea_2042137505a1.slice. Mar 4 01:02:03.221188 kubelet[2631]: I0304 01:02:03.220498 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f485ac81-9446-4bf0-b8ea-2042137505a1-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-fpsx4\" (UID: \"f485ac81-9446-4bf0-b8ea-2042137505a1\") " pod="kube-system/cilium-operator-6f9c7c5859-fpsx4" Mar 4 01:02:03.221188 kubelet[2631]: I0304 01:02:03.220821 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbb44\" (UniqueName: \"kubernetes.io/projected/f485ac81-9446-4bf0-b8ea-2042137505a1-kube-api-access-kbb44\") pod \"cilium-operator-6f9c7c5859-fpsx4\" (UID: \"f485ac81-9446-4bf0-b8ea-2042137505a1\") " pod="kube-system/cilium-operator-6f9c7c5859-fpsx4" Mar 4 01:02:03.400900 kubelet[2631]: E0304 01:02:03.396432 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:03.401562 containerd[1479]: time="2026-03-04T01:02:03.400880451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p5xrg,Uid:f4622411-eb40-43f9-8c9a-0104a632c61b,Namespace:kube-system,Attempt:0,}" Mar 4 01:02:03.450125 kubelet[2631]: E0304 01:02:03.448003 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:03.457102 containerd[1479]: time="2026-03-04T01:02:03.456000819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvk9v,Uid:a115f4bb-11ec-4092-81a5-41118b532269,Namespace:kube-system,Attempt:0,}" Mar 4 01:02:03.477869 kubelet[2631]: E0304 01:02:03.475507 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:03.478428 containerd[1479]: time="2026-03-04T01:02:03.477303377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fpsx4,Uid:f485ac81-9446-4bf0-b8ea-2042137505a1,Namespace:kube-system,Attempt:0,}" Mar 4 01:02:04.174118 containerd[1479]: time="2026-03-04T01:02:04.170812891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:04.174118 containerd[1479]: time="2026-03-04T01:02:04.171337045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:04.174118 containerd[1479]: time="2026-03-04T01:02:04.171364276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:04.175155 containerd[1479]: time="2026-03-04T01:02:04.171789885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:04.476149 containerd[1479]: time="2026-03-04T01:02:04.473087358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:04.562514 containerd[1479]: time="2026-03-04T01:02:04.499403311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:04.562514 containerd[1479]: time="2026-03-04T01:02:04.528813973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:04.711027 containerd[1479]: time="2026-03-04T01:02:04.656085462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:04.915223 containerd[1479]: time="2026-03-04T01:02:04.914180310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:04.915223 containerd[1479]: time="2026-03-04T01:02:04.914421203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:04.915223 containerd[1479]: time="2026-03-04T01:02:04.914437523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:04.916144 containerd[1479]: time="2026-03-04T01:02:04.915745848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:05.047096 systemd[1]: Started cri-containerd-f38ba64e457263e0b7b9e2295b2019152527b1b707c2ce9a02103e59a74b721e.scope - libcontainer container f38ba64e457263e0b7b9e2295b2019152527b1b707c2ce9a02103e59a74b721e. Mar 4 01:02:05.072256 systemd[1]: Started cri-containerd-f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69.scope - libcontainer container f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69. Mar 4 01:02:05.331213 kubelet[2631]: E0304 01:02:05.325436 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:05.331777 systemd[1]: Started cri-containerd-8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633.scope - libcontainer container 8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633. Mar 4 01:02:05.443006 containerd[1479]: time="2026-03-04T01:02:05.442885572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvk9v,Uid:a115f4bb-11ec-4092-81a5-41118b532269,Namespace:kube-system,Attempt:0,} returns sandbox id \"f38ba64e457263e0b7b9e2295b2019152527b1b707c2ce9a02103e59a74b721e\"" Mar 4 01:02:05.445135 kubelet[2631]: E0304 01:02:05.444316 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:05.477212 containerd[1479]: time="2026-03-04T01:02:05.477164865Z" level=info msg="CreateContainer within sandbox \"f38ba64e457263e0b7b9e2295b2019152527b1b707c2ce9a02103e59a74b721e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 4 01:02:05.522847 containerd[1479]: time="2026-03-04T01:02:05.515418721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p5xrg,Uid:f4622411-eb40-43f9-8c9a-0104a632c61b,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\"" Mar 4 01:02:05.547269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154179242.mount: Deactivated successfully. Mar 4 01:02:05.553040 kubelet[2631]: E0304 01:02:05.548291 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:05.553331 containerd[1479]: time="2026-03-04T01:02:05.552322513Z" level=info msg="CreateContainer within sandbox \"f38ba64e457263e0b7b9e2295b2019152527b1b707c2ce9a02103e59a74b721e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6bd467f0350cab89013fd70aba9d5cb0ea53b66cb183a4a9de03e58536457d39\"" Mar 4 01:02:05.560516 containerd[1479]: time="2026-03-04T01:02:05.556077900Z" level=info msg="StartContainer for \"6bd467f0350cab89013fd70aba9d5cb0ea53b66cb183a4a9de03e58536457d39\"" Mar 4 01:02:05.627072 containerd[1479]: time="2026-03-04T01:02:05.625105751Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 4 01:02:05.704875 containerd[1479]: time="2026-03-04T01:02:05.702212302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fpsx4,Uid:f485ac81-9446-4bf0-b8ea-2042137505a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633\"" Mar 4 01:02:05.722071 kubelet[2631]: E0304 01:02:05.721454 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:06.050837 kubelet[2631]: E0304 01:02:06.046082 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 
01:02:06.378181 systemd[1]: Started cri-containerd-6bd467f0350cab89013fd70aba9d5cb0ea53b66cb183a4a9de03e58536457d39.scope - libcontainer container 6bd467f0350cab89013fd70aba9d5cb0ea53b66cb183a4a9de03e58536457d39. Mar 4 01:02:06.564273 kubelet[2631]: E0304 01:02:06.561081 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:06.800338 containerd[1479]: time="2026-03-04T01:02:06.799158263Z" level=info msg="StartContainer for \"6bd467f0350cab89013fd70aba9d5cb0ea53b66cb183a4a9de03e58536457d39\" returns successfully" Mar 4 01:02:07.059855 kubelet[2631]: E0304 01:02:07.059232 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:07.067349 kubelet[2631]: E0304 01:02:07.066425 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:07.212462 kubelet[2631]: I0304 01:02:07.212386 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kvk9v" podStartSLOduration=7.212365273 podStartE2EDuration="7.212365273s" podCreationTimestamp="2026-03-04 01:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:02:07.120070466 +0000 UTC m=+11.189518362" watchObservedRunningTime="2026-03-04 01:02:07.212365273 +0000 UTC m=+11.281813159" Mar 4 01:02:08.768107 kubelet[2631]: E0304 01:02:08.767368 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:26.895358 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1377453987.mount: Deactivated successfully. Mar 4 01:02:36.643192 containerd[1479]: time="2026-03-04T01:02:36.637501496Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:36.670135 containerd[1479]: time="2026-03-04T01:02:36.665311753Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 4 01:02:36.670135 containerd[1479]: time="2026-03-04T01:02:36.667511268Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:36.671148 containerd[1479]: time="2026-03-04T01:02:36.671112502Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 31.045813318s" Mar 4 01:02:36.671736 containerd[1479]: time="2026-03-04T01:02:36.671362410Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 4 01:02:36.707373 containerd[1479]: time="2026-03-04T01:02:36.706119838Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 4 01:02:36.742864 containerd[1479]: time="2026-03-04T01:02:36.742135430Z" level=info msg="CreateContainer within sandbox 
\"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 4 01:02:36.857882 containerd[1479]: time="2026-03-04T01:02:36.857142619Z" level=info msg="CreateContainer within sandbox \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\"" Mar 4 01:02:36.862761 containerd[1479]: time="2026-03-04T01:02:36.860112217Z" level=info msg="StartContainer for \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\"" Mar 4 01:02:37.177423 systemd[1]: Started cri-containerd-7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543.scope - libcontainer container 7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543. Mar 4 01:02:37.388539 containerd[1479]: time="2026-03-04T01:02:37.377436124Z" level=info msg="StartContainer for \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\" returns successfully" Mar 4 01:02:37.454176 kubelet[2631]: E0304 01:02:37.452409 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:37.455731 systemd[1]: cri-containerd-7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543.scope: Deactivated successfully. 
Mar 4 01:02:37.815321 containerd[1479]: time="2026-03-04T01:02:37.809905441Z" level=info msg="shim disconnected" id=7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543 namespace=k8s.io Mar 4 01:02:37.815321 containerd[1479]: time="2026-03-04T01:02:37.814472245Z" level=warning msg="cleaning up after shim disconnected" id=7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543 namespace=k8s.io Mar 4 01:02:37.815321 containerd[1479]: time="2026-03-04T01:02:37.814489237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:02:37.829850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543-rootfs.mount: Deactivated successfully. Mar 4 01:02:38.022461 containerd[1479]: time="2026-03-04T01:02:38.020147611Z" level=warning msg="cleanup warnings time=\"2026-03-04T01:02:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 4 01:02:38.069162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310581028.mount: Deactivated successfully. Mar 4 01:02:38.454781 kubelet[2631]: E0304 01:02:38.454710 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:38.505483 containerd[1479]: time="2026-03-04T01:02:38.505354315Z" level=info msg="CreateContainer within sandbox \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 4 01:02:38.623882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1708059332.mount: Deactivated successfully. 
Mar 4 01:02:38.648291 containerd[1479]: time="2026-03-04T01:02:38.648033211Z" level=info msg="CreateContainer within sandbox \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\"" Mar 4 01:02:38.650652 containerd[1479]: time="2026-03-04T01:02:38.649106080Z" level=info msg="StartContainer for \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\"" Mar 4 01:02:38.729113 systemd[1]: Started cri-containerd-b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57.scope - libcontainer container b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57. Mar 4 01:02:38.827026 containerd[1479]: time="2026-03-04T01:02:38.826918483Z" level=info msg="StartContainer for \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\" returns successfully" Mar 4 01:02:38.847381 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 4 01:02:38.847856 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:02:38.847913 systemd[1]: systemd-sysctl.service: Consumed 1.729s CPU time. Mar 4 01:02:38.847962 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 4 01:02:38.855666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 01:02:38.856297 systemd[1]: cri-containerd-b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57.scope: Deactivated successfully. Mar 4 01:02:38.915154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57-rootfs.mount: Deactivated successfully. Mar 4 01:02:38.938109 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 4 01:02:38.960164 containerd[1479]: time="2026-03-04T01:02:38.959860982Z" level=info msg="shim disconnected" id=b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57 namespace=k8s.io Mar 4 01:02:38.960164 containerd[1479]: time="2026-03-04T01:02:38.959974194Z" level=warning msg="cleaning up after shim disconnected" id=b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57 namespace=k8s.io Mar 4 01:02:38.960164 containerd[1479]: time="2026-03-04T01:02:38.959991506Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:02:39.504203 kubelet[2631]: E0304 01:02:39.503422 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:39.541217 containerd[1479]: time="2026-03-04T01:02:39.540891559Z" level=info msg="CreateContainer within sandbox \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 4 01:02:39.634679 containerd[1479]: time="2026-03-04T01:02:39.633934451Z" level=info msg="CreateContainer within sandbox \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\"" Mar 4 01:02:39.641835 containerd[1479]: time="2026-03-04T01:02:39.641024871Z" level=info msg="StartContainer for \"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\"" Mar 4 01:02:39.773147 systemd[1]: Started cri-containerd-a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979.scope - libcontainer container a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979. 
Mar 4 01:02:39.853834 containerd[1479]: time="2026-03-04T01:02:39.853194234Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:39.859346 containerd[1479]: time="2026-03-04T01:02:39.853680623Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 4 01:02:39.868889 containerd[1479]: time="2026-03-04T01:02:39.868840030Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:39.876519 containerd[1479]: time="2026-03-04T01:02:39.876387557Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.16983086s" Mar 4 01:02:39.876519 containerd[1479]: time="2026-03-04T01:02:39.876530335Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 4 01:02:39.909794 containerd[1479]: time="2026-03-04T01:02:39.907839362Z" level=info msg="CreateContainer within sandbox \"8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 4 01:02:39.928868 containerd[1479]: time="2026-03-04T01:02:39.928652648Z" level=info msg="StartContainer for 
\"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\" returns successfully" Mar 4 01:02:39.934843 systemd[1]: cri-containerd-a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979.scope: Deactivated successfully. Mar 4 01:02:39.966960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1456331999.mount: Deactivated successfully. Mar 4 01:02:40.029988 containerd[1479]: time="2026-03-04T01:02:40.029196265Z" level=info msg="CreateContainer within sandbox \"8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\"" Mar 4 01:02:40.037032 containerd[1479]: time="2026-03-04T01:02:40.036551999Z" level=info msg="StartContainer for \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\"" Mar 4 01:02:40.143497 containerd[1479]: time="2026-03-04T01:02:40.142966550Z" level=info msg="shim disconnected" id=a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979 namespace=k8s.io Mar 4 01:02:40.143497 containerd[1479]: time="2026-03-04T01:02:40.143054967Z" level=warning msg="cleaning up after shim disconnected" id=a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979 namespace=k8s.io Mar 4 01:02:40.143497 containerd[1479]: time="2026-03-04T01:02:40.143074543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:02:40.163113 systemd[1]: Started cri-containerd-d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465.scope - libcontainer container d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465. 
Mar 4 01:02:40.273839 containerd[1479]: time="2026-03-04T01:02:40.273333680Z" level=info msg="StartContainer for \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\" returns successfully" Mar 4 01:02:40.496961 kubelet[2631]: E0304 01:02:40.495224 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:40.498831 kubelet[2631]: E0304 01:02:40.498761 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:40.512694 containerd[1479]: time="2026-03-04T01:02:40.512513295Z" level=info msg="CreateContainer within sandbox \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 4 01:02:40.550139 containerd[1479]: time="2026-03-04T01:02:40.549992560Z" level=info msg="CreateContainer within sandbox \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\"" Mar 4 01:02:40.552635 containerd[1479]: time="2026-03-04T01:02:40.551016206Z" level=info msg="StartContainer for \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\"" Mar 4 01:02:40.708071 systemd[1]: Started cri-containerd-73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55.scope - libcontainer container 73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55. Mar 4 01:02:40.829815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979-rootfs.mount: Deactivated successfully. 
Mar 4 01:02:40.854885 kubelet[2631]: I0304 01:02:40.852500 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-fpsx4" podStartSLOduration=5.700846815 podStartE2EDuration="39.852479732s" podCreationTimestamp="2026-03-04 01:02:01 +0000 UTC" firstStartedPulling="2026-03-04 01:02:05.729137793 +0000 UTC m=+9.798585679" lastFinishedPulling="2026-03-04 01:02:39.880770709 +0000 UTC m=+43.950218596" observedRunningTime="2026-03-04 01:02:40.578551479 +0000 UTC m=+44.647999386" watchObservedRunningTime="2026-03-04 01:02:40.852479732 +0000 UTC m=+44.921927638" Mar 4 01:02:40.873286 systemd[1]: cri-containerd-73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55.scope: Deactivated successfully. Mar 4 01:02:40.906139 containerd[1479]: time="2026-03-04T01:02:40.887554212Z" level=info msg="StartContainer for \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\" returns successfully" Mar 4 01:02:40.975722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55-rootfs.mount: Deactivated successfully. 
Mar 4 01:02:41.046009 containerd[1479]: time="2026-03-04T01:02:41.038003863Z" level=info msg="shim disconnected" id=73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55 namespace=k8s.io Mar 4 01:02:41.046009 containerd[1479]: time="2026-03-04T01:02:41.038978466Z" level=warning msg="cleaning up after shim disconnected" id=73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55 namespace=k8s.io Mar 4 01:02:41.057473 containerd[1479]: time="2026-03-04T01:02:41.056917282Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:02:41.145005 containerd[1479]: time="2026-03-04T01:02:41.144902365Z" level=warning msg="cleanup warnings time=\"2026-03-04T01:02:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 4 01:02:41.512113 kubelet[2631]: E0304 01:02:41.511104 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:41.516350 kubelet[2631]: E0304 01:02:41.516087 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:41.531052 containerd[1479]: time="2026-03-04T01:02:41.530950767Z" level=info msg="CreateContainer within sandbox \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 4 01:02:41.614512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622908697.mount: Deactivated successfully. 
Mar 4 01:02:41.628487 containerd[1479]: time="2026-03-04T01:02:41.628109298Z" level=info msg="CreateContainer within sandbox \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\"" Mar 4 01:02:41.632147 containerd[1479]: time="2026-03-04T01:02:41.632013027Z" level=info msg="StartContainer for \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\"" Mar 4 01:02:41.738853 systemd[1]: Started cri-containerd-aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad.scope - libcontainer container aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad. Mar 4 01:02:41.820973 containerd[1479]: time="2026-03-04T01:02:41.819099963Z" level=info msg="StartContainer for \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\" returns successfully" Mar 4 01:02:41.950202 systemd[1]: run-containerd-runc-k8s.io-aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad-runc.tPhd81.mount: Deactivated successfully. Mar 4 01:02:42.192472 kubelet[2631]: I0304 01:02:42.192233 2631 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 4 01:02:42.257893 systemd[1]: Created slice kubepods-burstable-podf7de4bd7_48fb_4afb_9906_a7af121de2e4.slice - libcontainer container kubepods-burstable-podf7de4bd7_48fb_4afb_9906_a7af121de2e4.slice. Mar 4 01:02:42.268518 systemd[1]: Created slice kubepods-burstable-pod67bf3af9_3e5a_400b_a898_fac572d39e76.slice - libcontainer container kubepods-burstable-pod67bf3af9_3e5a_400b_a898_fac572d39e76.slice. 
Mar 4 01:02:42.274764 kubelet[2631]: I0304 01:02:42.274709 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7de4bd7-48fb-4afb-9906-a7af121de2e4-config-volume\") pod \"coredns-66bc5c9577-gdxbq\" (UID: \"f7de4bd7-48fb-4afb-9906-a7af121de2e4\") " pod="kube-system/coredns-66bc5c9577-gdxbq" Mar 4 01:02:42.274764 kubelet[2631]: I0304 01:02:42.274755 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2bfr\" (UniqueName: \"kubernetes.io/projected/f7de4bd7-48fb-4afb-9906-a7af121de2e4-kube-api-access-n2bfr\") pod \"coredns-66bc5c9577-gdxbq\" (UID: \"f7de4bd7-48fb-4afb-9906-a7af121de2e4\") " pod="kube-system/coredns-66bc5c9577-gdxbq" Mar 4 01:02:42.275048 kubelet[2631]: I0304 01:02:42.274779 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67bf3af9-3e5a-400b-a898-fac572d39e76-config-volume\") pod \"coredns-66bc5c9577-sk7l2\" (UID: \"67bf3af9-3e5a-400b-a898-fac572d39e76\") " pod="kube-system/coredns-66bc5c9577-sk7l2" Mar 4 01:02:42.275048 kubelet[2631]: I0304 01:02:42.274803 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8bwp\" (UniqueName: \"kubernetes.io/projected/67bf3af9-3e5a-400b-a898-fac572d39e76-kube-api-access-r8bwp\") pod \"coredns-66bc5c9577-sk7l2\" (UID: \"67bf3af9-3e5a-400b-a898-fac572d39e76\") " pod="kube-system/coredns-66bc5c9577-sk7l2" Mar 4 01:02:42.524773 kubelet[2631]: E0304 01:02:42.523927 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:42.573988 kubelet[2631]: E0304 01:02:42.573807 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:42.580232 kubelet[2631]: E0304 01:02:42.580194 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:42.592897 containerd[1479]: time="2026-03-04T01:02:42.592455426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gdxbq,Uid:f7de4bd7-48fb-4afb-9906-a7af121de2e4,Namespace:kube-system,Attempt:0,}" Mar 4 01:02:42.604754 containerd[1479]: time="2026-03-04T01:02:42.603904861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sk7l2,Uid:67bf3af9-3e5a-400b-a898-fac572d39e76,Namespace:kube-system,Attempt:0,}" Mar 4 01:02:43.528195 kubelet[2631]: E0304 01:02:43.528072 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:44.537076 kubelet[2631]: E0304 01:02:44.536705 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:44.752557 systemd-networkd[1398]: cilium_host: Link UP Mar 4 01:02:44.753123 systemd-networkd[1398]: cilium_net: Link UP Mar 4 01:02:44.753699 systemd-networkd[1398]: cilium_net: Gained carrier Mar 4 01:02:44.754056 systemd-networkd[1398]: cilium_host: Gained carrier Mar 4 01:02:45.073992 systemd-networkd[1398]: cilium_vxlan: Link UP Mar 4 01:02:45.074003 systemd-networkd[1398]: cilium_vxlan: Gained carrier Mar 4 01:02:45.199991 systemd-networkd[1398]: cilium_net: Gained IPv6LL Mar 4 01:02:45.387205 systemd-networkd[1398]: cilium_host: Gained IPv6LL Mar 4 01:02:45.567104 kernel: NET: Registered PF_ALG protocol family Mar 4 01:02:46.921225 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Mar 4 
01:02:47.300188 systemd-networkd[1398]: lxc_health: Link UP Mar 4 01:02:47.322953 systemd-networkd[1398]: lxc_health: Gained carrier Mar 4 01:02:47.363205 kubelet[2631]: E0304 01:02:47.360809 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:47.403392 kubelet[2631]: I0304 01:02:47.403254 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p5xrg" podStartSLOduration=15.261651293 podStartE2EDuration="46.403233532s" podCreationTimestamp="2026-03-04 01:02:01 +0000 UTC" firstStartedPulling="2026-03-04 01:02:05.558286395 +0000 UTC m=+9.627734291" lastFinishedPulling="2026-03-04 01:02:36.699868634 +0000 UTC m=+40.769316530" observedRunningTime="2026-03-04 01:02:42.557741292 +0000 UTC m=+46.627189199" watchObservedRunningTime="2026-03-04 01:02:47.403233532 +0000 UTC m=+51.472681418" Mar 4 01:02:47.575335 kubelet[2631]: E0304 01:02:47.573547 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:47.922872 kernel: eth0: renamed from tmp145e5 Mar 4 01:02:47.963262 systemd-networkd[1398]: lxc00ad252c66d3: Link UP Mar 4 01:02:47.974953 systemd-networkd[1398]: lxc00ad252c66d3: Gained carrier Mar 4 01:02:47.975256 systemd-networkd[1398]: lxcca0fb0b8ba00: Link UP Mar 4 01:02:48.019275 kernel: eth0: renamed from tmp0cec0 Mar 4 01:02:48.038261 systemd-networkd[1398]: lxcca0fb0b8ba00: Gained carrier Mar 4 01:02:49.117273 systemd-networkd[1398]: lxc_health: Gained IPv6LL Mar 4 01:02:49.119873 systemd-networkd[1398]: lxc00ad252c66d3: Gained IPv6LL Mar 4 01:02:49.673202 systemd-networkd[1398]: lxcca0fb0b8ba00: Gained IPv6LL Mar 4 01:02:52.636935 systemd[1]: run-containerd-runc-k8s.io-aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad-runc.Msep4K.mount: 
Deactivated successfully. Mar 4 01:02:54.263416 sudo[1665]: pam_unix(sudo:session): session closed for user root Mar 4 01:02:54.269455 sshd[1662]: pam_unix(sshd:session): session closed for user core Mar 4 01:02:54.275697 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:34664.service: Deactivated successfully. Mar 4 01:02:54.280109 systemd[1]: session-9.scope: Deactivated successfully. Mar 4 01:02:54.281545 systemd[1]: session-9.scope: Consumed 18.760s CPU time, 164.9M memory peak, 0B memory swap peak. Mar 4 01:02:54.288238 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit. Mar 4 01:02:54.292205 systemd-logind[1464]: Removed session 9. Mar 4 01:02:54.611746 containerd[1479]: time="2026-03-04T01:02:54.610672871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:54.611746 containerd[1479]: time="2026-03-04T01:02:54.610991929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:54.611746 containerd[1479]: time="2026-03-04T01:02:54.611021224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:54.611746 containerd[1479]: time="2026-03-04T01:02:54.611168448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:54.656013 containerd[1479]: time="2026-03-04T01:02:54.654874301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:54.656013 containerd[1479]: time="2026-03-04T01:02:54.655056742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:54.656013 containerd[1479]: time="2026-03-04T01:02:54.655081369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:54.662019 containerd[1479]: time="2026-03-04T01:02:54.659959561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:54.683846 systemd[1]: Started cri-containerd-0cec0b984b9de99cd455d637ee209bccf90dbcf155a55c311cf8dedc33126647.scope - libcontainer container 0cec0b984b9de99cd455d637ee209bccf90dbcf155a55c311cf8dedc33126647. Mar 4 01:02:54.703305 systemd[1]: Started cri-containerd-145e52246ab1a33326302dfca491e9cbc2b34fbc182d115fc7fb010f8e6cf9ea.scope - libcontainer container 145e52246ab1a33326302dfca491e9cbc2b34fbc182d115fc7fb010f8e6cf9ea. Mar 4 01:02:54.721653 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:02:54.736119 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:02:54.779021 containerd[1479]: time="2026-03-04T01:02:54.778910122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sk7l2,Uid:67bf3af9-3e5a-400b-a898-fac572d39e76,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cec0b984b9de99cd455d637ee209bccf90dbcf155a55c311cf8dedc33126647\"" Mar 4 01:02:54.780645 kubelet[2631]: E0304 01:02:54.780322 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:54.797319 containerd[1479]: time="2026-03-04T01:02:54.797206755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gdxbq,Uid:f7de4bd7-48fb-4afb-9906-a7af121de2e4,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"145e52246ab1a33326302dfca491e9cbc2b34fbc182d115fc7fb010f8e6cf9ea\"" Mar 4 01:02:54.798819 kubelet[2631]: E0304 01:02:54.798735 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:54.803682 containerd[1479]: time="2026-03-04T01:02:54.803556939Z" level=info msg="CreateContainer within sandbox \"0cec0b984b9de99cd455d637ee209bccf90dbcf155a55c311cf8dedc33126647\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:02:54.819696 containerd[1479]: time="2026-03-04T01:02:54.818395670Z" level=info msg="CreateContainer within sandbox \"145e52246ab1a33326302dfca491e9cbc2b34fbc182d115fc7fb010f8e6cf9ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:02:54.873664 containerd[1479]: time="2026-03-04T01:02:54.873257501Z" level=info msg="CreateContainer within sandbox \"0cec0b984b9de99cd455d637ee209bccf90dbcf155a55c311cf8dedc33126647\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b753bc18591952f2f9fb68824c5e3dd205ea3251811b7f3ffdc62922805ae1fd\"" Mar 4 01:02:54.878275 containerd[1479]: time="2026-03-04T01:02:54.878102181Z" level=info msg="StartContainer for \"b753bc18591952f2f9fb68824c5e3dd205ea3251811b7f3ffdc62922805ae1fd\"" Mar 4 01:02:54.905900 containerd[1479]: time="2026-03-04T01:02:54.905809256Z" level=info msg="CreateContainer within sandbox \"145e52246ab1a33326302dfca491e9cbc2b34fbc182d115fc7fb010f8e6cf9ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff43f8230bc7f5bc40520413cc42b3ed19eb28f2abca16499bea0fce093d52ea\"" Mar 4 01:02:54.907910 containerd[1479]: time="2026-03-04T01:02:54.907874161Z" level=info msg="StartContainer for \"ff43f8230bc7f5bc40520413cc42b3ed19eb28f2abca16499bea0fce093d52ea\"" Mar 4 01:02:54.939991 systemd[1]: Started cri-containerd-b753bc18591952f2f9fb68824c5e3dd205ea3251811b7f3ffdc62922805ae1fd.scope - libcontainer 
container b753bc18591952f2f9fb68824c5e3dd205ea3251811b7f3ffdc62922805ae1fd. Mar 4 01:02:54.980869 systemd[1]: Started cri-containerd-ff43f8230bc7f5bc40520413cc42b3ed19eb28f2abca16499bea0fce093d52ea.scope - libcontainer container ff43f8230bc7f5bc40520413cc42b3ed19eb28f2abca16499bea0fce093d52ea. Mar 4 01:02:54.996486 containerd[1479]: time="2026-03-04T01:02:54.996439662Z" level=info msg="StartContainer for \"b753bc18591952f2f9fb68824c5e3dd205ea3251811b7f3ffdc62922805ae1fd\" returns successfully" Mar 4 01:02:55.048558 containerd[1479]: time="2026-03-04T01:02:55.047989110Z" level=info msg="StartContainer for \"ff43f8230bc7f5bc40520413cc42b3ed19eb28f2abca16499bea0fce093d52ea\" returns successfully" Mar 4 01:02:55.741905 kubelet[2631]: E0304 01:02:55.738201 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:55.745914 kubelet[2631]: E0304 01:02:55.744303 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:55.835327 kubelet[2631]: I0304 01:02:55.835140 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gdxbq" podStartSLOduration=55.835119604 podStartE2EDuration="55.835119604s" podCreationTimestamp="2026-03-04 01:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:02:55.834309557 +0000 UTC m=+59.903757463" watchObservedRunningTime="2026-03-04 01:02:55.835119604 +0000 UTC m=+59.904567510" Mar 4 01:02:55.969918 kubelet[2631]: I0304 01:02:55.969692 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sk7l2" podStartSLOduration=55.969668079 podStartE2EDuration="55.969668079s" 
podCreationTimestamp="2026-03-04 01:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:02:55.966714131 +0000 UTC m=+60.036162046" watchObservedRunningTime="2026-03-04 01:02:55.969668079 +0000 UTC m=+60.039115975" Mar 4 01:02:56.761502 kubelet[2631]: E0304 01:02:56.761447 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:56.763537 kubelet[2631]: E0304 01:02:56.763141 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:57.765671 kubelet[2631]: E0304 01:02:57.764827 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:57.767443 kubelet[2631]: E0304 01:02:57.767288 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:07.447264 kubelet[2631]: E0304 01:03:07.446980 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:18.448526 kubelet[2631]: E0304 01:03:18.447729 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:30.449022 kubelet[2631]: E0304 01:03:30.447470 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:37.446654 
kubelet[2631]: E0304 01:03:37.446513 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:51.449444 kubelet[2631]: E0304 01:03:51.449284 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:56.480124 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:33442.service - OpenSSH per-connection server daemon (10.0.0.1:33442). Mar 4 01:03:56.572510 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 33442 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:03:56.576746 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:03:56.598791 systemd-logind[1464]: New session 10 of user core. Mar 4 01:03:56.606123 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 4 01:03:56.979030 sshd[4158]: pam_unix(sshd:session): session closed for user core Mar 4 01:03:56.990838 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:33442.service: Deactivated successfully. Mar 4 01:03:56.995273 systemd[1]: session-10.scope: Deactivated successfully. Mar 4 01:03:56.999131 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit. Mar 4 01:03:57.003339 systemd-logind[1464]: Removed session 10. Mar 4 01:04:02.029517 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:33450.service - OpenSSH per-connection server daemon (10.0.0.1:33450). Mar 4 01:04:02.114253 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 33450 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:04:02.122511 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:04:02.144730 systemd-logind[1464]: New session 11 of user core. 
Mar 4 01:04:02.162759 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 4 01:04:02.436275 sshd[4175]: pam_unix(sshd:session): session closed for user core Mar 4 01:04:02.444751 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:33450.service: Deactivated successfully. Mar 4 01:04:02.448761 systemd[1]: session-11.scope: Deactivated successfully. Mar 4 01:04:02.449843 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit. Mar 4 01:04:02.453447 systemd-logind[1464]: Removed session 11. Mar 4 01:04:07.718752 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:33850.service - OpenSSH per-connection server daemon (10.0.0.1:33850). Mar 4 01:04:08.298234 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 33850 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:04:08.299364 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:04:08.327785 systemd-logind[1464]: New session 12 of user core. Mar 4 01:04:08.337716 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 4 01:04:08.746772 sshd[4190]: pam_unix(sshd:session): session closed for user core Mar 4 01:04:08.755770 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:33850.service: Deactivated successfully. Mar 4 01:04:08.761371 systemd[1]: session-12.scope: Deactivated successfully. Mar 4 01:04:08.765001 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit. Mar 4 01:04:08.769671 systemd-logind[1464]: Removed session 12. Mar 4 01:04:12.446831 kubelet[2631]: E0304 01:04:12.446050 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:13.773845 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:60250.service - OpenSSH per-connection server daemon (10.0.0.1:60250). 
Mar 4 01:04:13.871332 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 60250 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:13.876153 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:13.889185 systemd-logind[1464]: New session 13 of user core.
Mar 4 01:04:13.928509 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 4 01:04:14.320727 sshd[4208]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:14.327483 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:60250.service: Deactivated successfully.
Mar 4 01:04:14.331143 systemd[1]: session-13.scope: Deactivated successfully.
Mar 4 01:04:14.340072 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit.
Mar 4 01:04:14.346220 systemd-logind[1464]: Removed session 13.
Mar 4 01:04:14.445803 kubelet[2631]: E0304 01:04:14.445736 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:15.448539 kubelet[2631]: E0304 01:04:15.447344 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:16.457779 kubelet[2631]: E0304 01:04:16.453156 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:19.729777 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:60256.service - OpenSSH per-connection server daemon (10.0.0.1:60256).
Mar 4 01:04:19.858076 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 60256 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:19.860891 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:19.871430 systemd-logind[1464]: New session 14 of user core.
Mar 4 01:04:19.883517 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 4 01:04:20.098250 sshd[4225]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:20.105264 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:60256.service: Deactivated successfully.
Mar 4 01:04:20.108410 systemd[1]: session-14.scope: Deactivated successfully.
Mar 4 01:04:20.110851 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit.
Mar 4 01:04:20.113462 systemd-logind[1464]: Removed session 14.
Mar 4 01:04:22.446387 kubelet[2631]: E0304 01:04:22.446187 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:25.117223 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:51448.service - OpenSSH per-connection server daemon (10.0.0.1:51448).
Mar 4 01:04:25.175404 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 51448 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:25.179334 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:25.205126 systemd-logind[1464]: New session 15 of user core.
Mar 4 01:04:25.215917 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 4 01:04:25.419515 sshd[4240]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:25.425193 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:51448.service: Deactivated successfully.
Mar 4 01:04:25.428378 systemd[1]: session-15.scope: Deactivated successfully.
Mar 4 01:04:25.430162 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit.
Mar 4 01:04:25.434506 systemd-logind[1464]: Removed session 15.
Mar 4 01:04:30.442495 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:51462.service - OpenSSH per-connection server daemon (10.0.0.1:51462).
Mar 4 01:04:30.495941 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 51462 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:30.498954 sshd[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:30.511246 systemd-logind[1464]: New session 16 of user core.
Mar 4 01:04:30.518225 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 4 01:04:30.705156 sshd[4255]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:30.712095 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:51462.service: Deactivated successfully.
Mar 4 01:04:30.715030 systemd[1]: session-16.scope: Deactivated successfully.
Mar 4 01:04:30.717875 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit.
Mar 4 01:04:30.720698 systemd-logind[1464]: Removed session 16.
Mar 4 01:04:35.730329 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:58178.service - OpenSSH per-connection server daemon (10.0.0.1:58178).
Mar 4 01:04:35.775144 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 58178 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:35.778075 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:35.803194 systemd-logind[1464]: New session 17 of user core.
Mar 4 01:04:35.809909 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 4 01:04:36.006691 sshd[4271]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:36.012828 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:58178.service: Deactivated successfully.
Mar 4 01:04:36.015886 systemd[1]: session-17.scope: Deactivated successfully.
Mar 4 01:04:36.018765 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit.
Mar 4 01:04:36.021124 systemd-logind[1464]: Removed session 17.
Mar 4 01:04:41.020485 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:58182.service - OpenSSH per-connection server daemon (10.0.0.1:58182).
Mar 4 01:04:41.067074 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 58182 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:41.069795 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:41.076100 systemd-logind[1464]: New session 18 of user core.
Mar 4 01:04:41.082897 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 4 01:04:41.256898 sshd[4289]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:41.262681 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:58182.service: Deactivated successfully.
Mar 4 01:04:41.265452 systemd[1]: session-18.scope: Deactivated successfully.
Mar 4 01:04:41.266814 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit.
Mar 4 01:04:41.268984 systemd-logind[1464]: Removed session 18.
Mar 4 01:04:46.276731 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:58722.service - OpenSSH per-connection server daemon (10.0.0.1:58722).
Mar 4 01:04:46.352482 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 58722 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:46.355043 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:46.361339 systemd-logind[1464]: New session 19 of user core.
Mar 4 01:04:46.368110 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 4 01:04:46.546940 sshd[4305]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:46.560982 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:58722.service: Deactivated successfully.
Mar 4 01:04:46.564173 systemd[1]: session-19.scope: Deactivated successfully.
Mar 4 01:04:46.566837 systemd-logind[1464]: Session 19 logged out. Waiting for processes to exit.
Mar 4 01:04:46.575154 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:58734.service - OpenSSH per-connection server daemon (10.0.0.1:58734).
Mar 4 01:04:46.576920 systemd-logind[1464]: Removed session 19.
Mar 4 01:04:46.630663 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 58734 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:46.632369 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:46.639356 systemd-logind[1464]: New session 20 of user core.
Mar 4 01:04:46.654029 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 4 01:04:46.877555 sshd[4321]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:46.889684 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:58734.service: Deactivated successfully.
Mar 4 01:04:46.893521 systemd[1]: session-20.scope: Deactivated successfully.
Mar 4 01:04:46.896549 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit.
Mar 4 01:04:46.911189 systemd[1]: Started sshd@20-10.0.0.28:22-10.0.0.1:58740.service - OpenSSH per-connection server daemon (10.0.0.1:58740).
Mar 4 01:04:46.913735 systemd-logind[1464]: Removed session 20.
Mar 4 01:04:46.947877 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 58740 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:46.949967 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:46.957764 systemd-logind[1464]: New session 21 of user core.
Mar 4 01:04:46.970996 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 4 01:04:47.125870 sshd[4334]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:47.130521 systemd[1]: sshd@20-10.0.0.28:22-10.0.0.1:58740.service: Deactivated successfully.
Mar 4 01:04:47.132994 systemd[1]: session-21.scope: Deactivated successfully.
Mar 4 01:04:47.134208 systemd-logind[1464]: Session 21 logged out. Waiting for processes to exit.
Mar 4 01:04:47.135817 systemd-logind[1464]: Removed session 21.
Mar 4 01:04:47.445727 kubelet[2631]: E0304 01:04:47.445523 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:52.139207 systemd[1]: Started sshd@21-10.0.0.28:22-10.0.0.1:50644.service - OpenSSH per-connection server daemon (10.0.0.1:50644).
Mar 4 01:04:52.186770 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 50644 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:52.189258 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:52.196497 systemd-logind[1464]: New session 22 of user core.
Mar 4 01:04:52.201799 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 4 01:04:52.342810 sshd[4348]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:52.348873 systemd[1]: sshd@21-10.0.0.28:22-10.0.0.1:50644.service: Deactivated successfully.
Mar 4 01:04:52.352539 systemd[1]: session-22.scope: Deactivated successfully.
Mar 4 01:04:52.353972 systemd-logind[1464]: Session 22 logged out. Waiting for processes to exit.
Mar 4 01:04:52.358028 systemd-logind[1464]: Removed session 22.
Mar 4 01:04:54.445775 kubelet[2631]: E0304 01:04:54.445681 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:55.445852 kubelet[2631]: E0304 01:04:55.445800 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:57.362281 systemd[1]: Started sshd@22-10.0.0.28:22-10.0.0.1:50650.service - OpenSSH per-connection server daemon (10.0.0.1:50650).
Mar 4 01:04:57.428691 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 50650 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:04:57.431533 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:57.445626 systemd-logind[1464]: New session 23 of user core.
Mar 4 01:04:57.455121 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 4 01:04:57.626281 sshd[4369]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:57.631225 systemd[1]: sshd@22-10.0.0.28:22-10.0.0.1:50650.service: Deactivated successfully.
Mar 4 01:04:57.634161 systemd[1]: session-23.scope: Deactivated successfully.
Mar 4 01:04:57.636916 systemd-logind[1464]: Session 23 logged out. Waiting for processes to exit.
Mar 4 01:04:57.638534 systemd-logind[1464]: Removed session 23.
Mar 4 01:05:02.671029 systemd[1]: Started sshd@23-10.0.0.28:22-10.0.0.1:47404.service - OpenSSH per-connection server daemon (10.0.0.1:47404).
Mar 4 01:05:02.755628 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 47404 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:02.759785 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:02.772133 systemd-logind[1464]: New session 24 of user core.
Mar 4 01:05:02.783955 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 4 01:05:03.020730 sshd[4383]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:03.033338 systemd[1]: sshd@23-10.0.0.28:22-10.0.0.1:47404.service: Deactivated successfully.
Mar 4 01:05:03.036383 systemd[1]: session-24.scope: Deactivated successfully.
Mar 4 01:05:03.040169 systemd-logind[1464]: Session 24 logged out. Waiting for processes to exit.
Mar 4 01:05:03.051231 systemd[1]: Started sshd@24-10.0.0.28:22-10.0.0.1:47420.service - OpenSSH per-connection server daemon (10.0.0.1:47420).
Mar 4 01:05:03.053339 systemd-logind[1464]: Removed session 24.
Mar 4 01:05:03.107020 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 47420 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:03.113528 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:03.134266 systemd-logind[1464]: New session 25 of user core.
Mar 4 01:05:03.146987 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 4 01:05:03.874284 sshd[4397]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:03.911323 systemd[1]: sshd@24-10.0.0.28:22-10.0.0.1:47420.service: Deactivated successfully.
Mar 4 01:05:03.914494 systemd[1]: session-25.scope: Deactivated successfully.
Mar 4 01:05:03.921298 systemd-logind[1464]: Session 25 logged out. Waiting for processes to exit.
Mar 4 01:05:03.933667 systemd[1]: Started sshd@25-10.0.0.28:22-10.0.0.1:47428.service - OpenSSH per-connection server daemon (10.0.0.1:47428).
Mar 4 01:05:03.937074 systemd-logind[1464]: Removed session 25.
Mar 4 01:05:04.032089 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 47428 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:04.039321 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:04.054873 systemd-logind[1464]: New session 26 of user core.
Mar 4 01:05:04.068065 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 4 01:05:05.144350 sshd[4409]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:05.154476 systemd[1]: sshd@25-10.0.0.28:22-10.0.0.1:47428.service: Deactivated successfully.
Mar 4 01:05:05.158781 systemd[1]: session-26.scope: Deactivated successfully.
Mar 4 01:05:05.160361 systemd-logind[1464]: Session 26 logged out. Waiting for processes to exit.
Mar 4 01:05:05.169324 systemd[1]: Started sshd@26-10.0.0.28:22-10.0.0.1:47442.service - OpenSSH per-connection server daemon (10.0.0.1:47442).
Mar 4 01:05:05.175246 systemd-logind[1464]: Removed session 26.
Mar 4 01:05:05.249059 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 47442 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:05.253297 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:05.270936 systemd-logind[1464]: New session 27 of user core.
Mar 4 01:05:05.284047 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 4 01:05:05.772509 sshd[4436]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:05.797507 systemd[1]: sshd@26-10.0.0.28:22-10.0.0.1:47442.service: Deactivated successfully.
Mar 4 01:05:05.814829 systemd[1]: session-27.scope: Deactivated successfully.
Mar 4 01:05:05.819019 systemd-logind[1464]: Session 27 logged out. Waiting for processes to exit.
Mar 4 01:05:05.833280 systemd[1]: Started sshd@27-10.0.0.28:22-10.0.0.1:47454.service - OpenSSH per-connection server daemon (10.0.0.1:47454).
Mar 4 01:05:05.835416 systemd-logind[1464]: Removed session 27.
Mar 4 01:05:05.875513 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 47454 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:05.878737 sshd[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:05.905933 systemd-logind[1464]: New session 28 of user core.
Mar 4 01:05:05.913090 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 4 01:05:06.096340 sshd[4449]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:06.101829 systemd[1]: sshd@27-10.0.0.28:22-10.0.0.1:47454.service: Deactivated successfully.
Mar 4 01:05:06.104842 systemd[1]: session-28.scope: Deactivated successfully.
Mar 4 01:05:06.106309 systemd-logind[1464]: Session 28 logged out. Waiting for processes to exit.
Mar 4 01:05:06.108336 systemd-logind[1464]: Removed session 28.
Mar 4 01:05:11.139101 systemd[1]: Started sshd@28-10.0.0.28:22-10.0.0.1:47464.service - OpenSSH per-connection server daemon (10.0.0.1:47464).
Mar 4 01:05:11.180541 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 47464 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:11.183287 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:11.215966 systemd-logind[1464]: New session 29 of user core.
Mar 4 01:05:11.229925 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 4 01:05:11.406405 sshd[4467]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:11.410419 systemd[1]: sshd@28-10.0.0.28:22-10.0.0.1:47464.service: Deactivated successfully.
Mar 4 01:05:11.413376 systemd[1]: session-29.scope: Deactivated successfully.
Mar 4 01:05:11.416715 systemd-logind[1464]: Session 29 logged out. Waiting for processes to exit.
Mar 4 01:05:11.419009 systemd-logind[1464]: Removed session 29.
Mar 4 01:05:16.431838 systemd[1]: Started sshd@29-10.0.0.28:22-10.0.0.1:47158.service - OpenSSH per-connection server daemon (10.0.0.1:47158).
Mar 4 01:05:16.492711 sshd[4481]: Accepted publickey for core from 10.0.0.1 port 47158 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:16.494567 sshd[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:16.501430 systemd-logind[1464]: New session 30 of user core.
Mar 4 01:05:16.510001 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 4 01:05:16.679314 sshd[4481]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:16.685282 systemd[1]: sshd@29-10.0.0.28:22-10.0.0.1:47158.service: Deactivated successfully.
Mar 4 01:05:16.705764 systemd[1]: session-30.scope: Deactivated successfully.
Mar 4 01:05:16.706982 systemd-logind[1464]: Session 30 logged out. Waiting for processes to exit.
Mar 4 01:05:16.709118 systemd-logind[1464]: Removed session 30.
Mar 4 01:05:20.447633 kubelet[2631]: E0304 01:05:20.445730 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:05:21.697513 systemd[1]: Started sshd@30-10.0.0.28:22-10.0.0.1:47172.service - OpenSSH per-connection server daemon (10.0.0.1:47172).
Mar 4 01:05:21.736750 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 47172 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:21.738674 sshd[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:21.745551 systemd-logind[1464]: New session 31 of user core.
Mar 4 01:05:21.755899 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 4 01:05:21.896225 sshd[4497]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:21.901798 systemd[1]: sshd@30-10.0.0.28:22-10.0.0.1:47172.service: Deactivated successfully.
Mar 4 01:05:21.904868 systemd[1]: session-31.scope: Deactivated successfully.
Mar 4 01:05:21.906383 systemd-logind[1464]: Session 31 logged out. Waiting for processes to exit.
Mar 4 01:05:21.908257 systemd-logind[1464]: Removed session 31.
Mar 4 01:05:26.921213 systemd[1]: Started sshd@31-10.0.0.28:22-10.0.0.1:60386.service - OpenSSH per-connection server daemon (10.0.0.1:60386).
Mar 4 01:05:26.970611 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 60386 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:26.973473 sshd[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:26.980872 systemd-logind[1464]: New session 32 of user core.
Mar 4 01:05:26.994059 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 4 01:05:27.164830 sshd[4511]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:27.179481 systemd[1]: sshd@31-10.0.0.28:22-10.0.0.1:60386.service: Deactivated successfully.
Mar 4 01:05:27.182277 systemd[1]: session-32.scope: Deactivated successfully.
Mar 4 01:05:27.184412 systemd-logind[1464]: Session 32 logged out. Waiting for processes to exit.
Mar 4 01:05:27.193289 systemd[1]: Started sshd@32-10.0.0.28:22-10.0.0.1:60388.service - OpenSSH per-connection server daemon (10.0.0.1:60388).
Mar 4 01:05:27.206020 systemd-logind[1464]: Removed session 32.
Mar 4 01:05:27.261548 sshd[4525]: Accepted publickey for core from 10.0.0.1 port 60388 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w
Mar 4 01:05:27.264186 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:27.273186 systemd-logind[1464]: New session 33 of user core.
Mar 4 01:05:27.281928 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 4 01:05:27.445093 kubelet[2631]: E0304 01:05:27.444861 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:05:28.880208 containerd[1479]: time="2026-03-04T01:05:28.879995301Z" level=info msg="StopContainer for \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\" with timeout 30 (s)"
Mar 4 01:05:28.881891 containerd[1479]: time="2026-03-04T01:05:28.881407481Z" level=info msg="Stop container \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\" with signal terminated"
Mar 4 01:05:28.898748 systemd[1]: run-containerd-runc-k8s.io-aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad-runc.5MJYpA.mount: Deactivated successfully.
Mar 4 01:05:28.921992 systemd[1]: cri-containerd-d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465.scope: Deactivated successfully.
Mar 4 01:05:28.922395 systemd[1]: cri-containerd-d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465.scope: Consumed 1.526s CPU time.
Mar 4 01:05:28.939102 containerd[1479]: time="2026-03-04T01:05:28.939033987Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 4 01:05:28.948393 containerd[1479]: time="2026-03-04T01:05:28.948335832Z" level=info msg="StopContainer for \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\" with timeout 2 (s)"
Mar 4 01:05:28.948861 containerd[1479]: time="2026-03-04T01:05:28.948691103Z" level=info msg="Stop container \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\" with signal terminated"
Mar 4 01:05:28.959139 systemd-networkd[1398]: lxc_health: Link DOWN
Mar 4 01:05:28.959151 systemd-networkd[1398]: lxc_health: Lost carrier
Mar 4 01:05:28.976125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465-rootfs.mount: Deactivated successfully.
Mar 4 01:05:28.991498 systemd[1]: cri-containerd-aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad.scope: Deactivated successfully.
Mar 4 01:05:28.992469 systemd[1]: cri-containerd-aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad.scope: Consumed 17.879s CPU time.
Mar 4 01:05:29.004091 containerd[1479]: time="2026-03-04T01:05:29.004026625Z" level=info msg="shim disconnected" id=d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465 namespace=k8s.io
Mar 4 01:05:29.004091 containerd[1479]: time="2026-03-04T01:05:29.004072722Z" level=warning msg="cleaning up after shim disconnected" id=d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465 namespace=k8s.io
Mar 4 01:05:29.004091 containerd[1479]: time="2026-03-04T01:05:29.004081989Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:05:29.027760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad-rootfs.mount: Deactivated successfully.
Mar 4 01:05:29.041705 containerd[1479]: time="2026-03-04T01:05:29.041628660Z" level=info msg="StopContainer for \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\" returns successfully"
Mar 4 01:05:29.042862 containerd[1479]: time="2026-03-04T01:05:29.042398570Z" level=info msg="StopPodSandbox for \"8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633\""
Mar 4 01:05:29.042862 containerd[1479]: time="2026-03-04T01:05:29.042771292Z" level=info msg="Container to stop \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:05:29.045851 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633-shm.mount: Deactivated successfully.
Mar 4 01:05:29.050113 containerd[1479]: time="2026-03-04T01:05:29.050000261Z" level=info msg="shim disconnected" id=aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad namespace=k8s.io
Mar 4 01:05:29.050113 containerd[1479]: time="2026-03-04T01:05:29.050085200Z" level=warning msg="cleaning up after shim disconnected" id=aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad namespace=k8s.io
Mar 4 01:05:29.050113 containerd[1479]: time="2026-03-04T01:05:29.050099238Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:05:29.058254 systemd[1]: cri-containerd-8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633.scope: Deactivated successfully.
Mar 4 01:05:29.083284 containerd[1479]: time="2026-03-04T01:05:29.083170001Z" level=info msg="StopContainer for \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\" returns successfully"
Mar 4 01:05:29.083981 containerd[1479]: time="2026-03-04T01:05:29.083903761Z" level=info msg="StopPodSandbox for \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\""
Mar 4 01:05:29.084083 containerd[1479]: time="2026-03-04T01:05:29.083979744Z" level=info msg="Container to stop \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:05:29.084083 containerd[1479]: time="2026-03-04T01:05:29.084001215Z" level=info msg="Container to stop \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:05:29.084083 containerd[1479]: time="2026-03-04T01:05:29.084015332Z" level=info msg="Container to stop \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:05:29.084083 containerd[1479]: time="2026-03-04T01:05:29.084028937Z" level=info msg="Container to stop \"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:05:29.084083 containerd[1479]: time="2026-03-04T01:05:29.084042503Z" level=info msg="Container to stop \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:05:29.108467 systemd[1]: cri-containerd-f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69.scope: Deactivated successfully.
Mar 4 01:05:29.123614 containerd[1479]: time="2026-03-04T01:05:29.123383706Z" level=info msg="shim disconnected" id=8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633 namespace=k8s.io
Mar 4 01:05:29.123614 containerd[1479]: time="2026-03-04T01:05:29.123474286Z" level=warning msg="cleaning up after shim disconnected" id=8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633 namespace=k8s.io
Mar 4 01:05:29.123614 containerd[1479]: time="2026-03-04T01:05:29.123490667Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:05:29.145328 containerd[1479]: time="2026-03-04T01:05:29.144669028Z" level=info msg="TearDown network for sandbox \"8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633\" successfully"
Mar 4 01:05:29.145328 containerd[1479]: time="2026-03-04T01:05:29.144729391Z" level=info msg="StopPodSandbox for \"8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633\" returns successfully"
Mar 4 01:05:29.156388 containerd[1479]: time="2026-03-04T01:05:29.156320892Z" level=info msg="shim disconnected" id=f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69 namespace=k8s.io
Mar 4 01:05:29.157017 containerd[1479]: time="2026-03-04T01:05:29.156945857Z" level=warning msg="cleaning up after shim disconnected" id=f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69 namespace=k8s.io
Mar 4 01:05:29.157017 containerd[1479]: time="2026-03-04T01:05:29.157005860Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:05:29.195151 containerd[1479]: time="2026-03-04T01:05:29.195108383Z" level=info msg="TearDown network for sandbox \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" successfully"
Mar 4 01:05:29.195668 containerd[1479]: time="2026-03-04T01:05:29.195298541Z" level=info msg="StopPodSandbox for \"f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69\" returns successfully"
Mar 4 01:05:29.265423 kubelet[2631]: I0304 01:05:29.265170 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4622411-eb40-43f9-8c9a-0104a632c61b-clustermesh-secrets\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.265423 kubelet[2631]: I0304 01:05:29.265275 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-etc-cni-netd\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.265423 kubelet[2631]: I0304 01:05:29.265312 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-hostproc\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.265423 kubelet[2631]: I0304 01:05:29.265342 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-xtables-lock\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.265423 kubelet[2631]: I0304 01:05:29.265371 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4622411-eb40-43f9-8c9a-0104a632c61b-hubble-tls\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.265423 kubelet[2631]: I0304 01:05:29.265398 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d559l\" (UniqueName: \"kubernetes.io/projected/f4622411-eb40-43f9-8c9a-0104a632c61b-kube-api-access-d559l\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.266435 kubelet[2631]: I0304 01:05:29.265422 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cni-path\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.266435 kubelet[2631]: I0304 01:05:29.265440 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-config-path\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.266435 kubelet[2631]: I0304 01:05:29.265664 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:05:29.266435 kubelet[2631]: I0304 01:05:29.265721 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:05:29.266435 kubelet[2631]: I0304 01:05:29.265748 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-hostproc" (OuterVolumeSpecName: "hostproc") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:05:29.266805 kubelet[2631]: I0304 01:05:29.265781 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-cgroup\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.266805 kubelet[2631]: I0304 01:05:29.265811 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-lib-modules\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.266805 kubelet[2631]: I0304 01:05:29.265839 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-host-proc-sys-kernel\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") "
Mar 4 01:05:29.266805 kubelet[2631]:
I0304 01:05:29.265870 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbb44\" (UniqueName: \"kubernetes.io/projected/f485ac81-9446-4bf0-b8ea-2042137505a1-kube-api-access-kbb44\") pod \"f485ac81-9446-4bf0-b8ea-2042137505a1\" (UID: \"f485ac81-9446-4bf0-b8ea-2042137505a1\") " Mar 4 01:05:29.266805 kubelet[2631]: I0304 01:05:29.265897 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-run\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " Mar 4 01:05:29.266805 kubelet[2631]: I0304 01:05:29.265922 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f485ac81-9446-4bf0-b8ea-2042137505a1-cilium-config-path\") pod \"f485ac81-9446-4bf0-b8ea-2042137505a1\" (UID: \"f485ac81-9446-4bf0-b8ea-2042137505a1\") " Mar 4 01:05:29.267023 kubelet[2631]: I0304 01:05:29.265944 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-bpf-maps\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " Mar 4 01:05:29.267023 kubelet[2631]: I0304 01:05:29.265965 2631 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-host-proc-sys-net\") pod \"f4622411-eb40-43f9-8c9a-0104a632c61b\" (UID: \"f4622411-eb40-43f9-8c9a-0104a632c61b\") " Mar 4 01:05:29.267023 kubelet[2631]: I0304 01:05:29.266010 2631 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 4 
01:05:29.267023 kubelet[2631]: I0304 01:05:29.266021 2631 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.267023 kubelet[2631]: I0304 01:05:29.266032 2631 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.267023 kubelet[2631]: I0304 01:05:29.266058 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 4 01:05:29.267249 kubelet[2631]: I0304 01:05:29.266088 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 4 01:05:29.267249 kubelet[2631]: I0304 01:05:29.266109 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 4 01:05:29.267249 kubelet[2631]: I0304 01:05:29.266126 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 4 01:05:29.269635 kubelet[2631]: I0304 01:05:29.267791 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cni-path" (OuterVolumeSpecName: "cni-path") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 4 01:05:29.271473 kubelet[2631]: I0304 01:05:29.271380 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f485ac81-9446-4bf0-b8ea-2042137505a1-kube-api-access-kbb44" (OuterVolumeSpecName: "kube-api-access-kbb44") pod "f485ac81-9446-4bf0-b8ea-2042137505a1" (UID: "f485ac81-9446-4bf0-b8ea-2042137505a1"). InnerVolumeSpecName "kube-api-access-kbb44". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 01:05:29.271473 kubelet[2631]: I0304 01:05:29.271447 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 4 01:05:29.272073 kubelet[2631]: I0304 01:05:29.272046 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 4 01:05:29.273156 kubelet[2631]: I0304 01:05:29.273055 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4622411-eb40-43f9-8c9a-0104a632c61b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 01:05:29.273217 kubelet[2631]: I0304 01:05:29.273162 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4622411-eb40-43f9-8c9a-0104a632c61b-kube-api-access-d559l" (OuterVolumeSpecName: "kube-api-access-d559l") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "kube-api-access-d559l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 01:05:29.275026 kubelet[2631]: I0304 01:05:29.274953 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:05:29.275220 kubelet[2631]: I0304 01:05:29.275095 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4622411-eb40-43f9-8c9a-0104a632c61b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f4622411-eb40-43f9-8c9a-0104a632c61b" (UID: "f4622411-eb40-43f9-8c9a-0104a632c61b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 4 01:05:29.275563 kubelet[2631]: I0304 01:05:29.275477 2631 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f485ac81-9446-4bf0-b8ea-2042137505a1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f485ac81-9446-4bf0-b8ea-2042137505a1" (UID: "f485ac81-9446-4bf0-b8ea-2042137505a1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:05:29.367473 kubelet[2631]: I0304 01:05:29.367070 2631 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f485ac81-9446-4bf0-b8ea-2042137505a1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.367473 kubelet[2631]: I0304 01:05:29.367162 2631 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.367473 kubelet[2631]: I0304 01:05:29.367181 2631 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.367473 kubelet[2631]: I0304 01:05:29.367193 2631 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f4622411-eb40-43f9-8c9a-0104a632c61b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.367473 kubelet[2631]: I0304 01:05:29.367206 2631 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4622411-eb40-43f9-8c9a-0104a632c61b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.367473 kubelet[2631]: I0304 01:05:29.367218 2631 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d559l\" (UniqueName: \"kubernetes.io/projected/f4622411-eb40-43f9-8c9a-0104a632c61b-kube-api-access-d559l\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.367473 kubelet[2631]: I0304 01:05:29.367235 2631 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.367473 kubelet[2631]: I0304 01:05:29.367247 2631 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.368096 kubelet[2631]: I0304 01:05:29.367260 2631 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.368096 kubelet[2631]: I0304 01:05:29.367272 2631 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.368096 kubelet[2631]: I0304 01:05:29.367284 2631 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-host-proc-sys-kernel\") on node 
\"localhost\" DevicePath \"\"" Mar 4 01:05:29.368096 kubelet[2631]: I0304 01:05:29.367295 2631 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kbb44\" (UniqueName: \"kubernetes.io/projected/f485ac81-9446-4bf0-b8ea-2042137505a1-kube-api-access-kbb44\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.368096 kubelet[2631]: I0304 01:05:29.367308 2631 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4622411-eb40-43f9-8c9a-0104a632c61b-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 4 01:05:29.399004 kubelet[2631]: I0304 01:05:29.398747 2631 scope.go:117] "RemoveContainer" containerID="d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465" Mar 4 01:05:29.404445 containerd[1479]: time="2026-03-04T01:05:29.404280609Z" level=info msg="RemoveContainer for \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\"" Mar 4 01:05:29.409137 systemd[1]: Removed slice kubepods-besteffort-podf485ac81_9446_4bf0_b8ea_2042137505a1.slice - libcontainer container kubepods-besteffort-podf485ac81_9446_4bf0_b8ea_2042137505a1.slice. Mar 4 01:05:29.409276 systemd[1]: kubepods-besteffort-podf485ac81_9446_4bf0_b8ea_2042137505a1.slice: Consumed 1.659s CPU time. 
Mar 4 01:05:29.416350 containerd[1479]: time="2026-03-04T01:05:29.416073659Z" level=info msg="RemoveContainer for \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\" returns successfully" Mar 4 01:05:29.417294 kubelet[2631]: I0304 01:05:29.417159 2631 scope.go:117] "RemoveContainer" containerID="d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465" Mar 4 01:05:29.417672 containerd[1479]: time="2026-03-04T01:05:29.417411150Z" level=error msg="ContainerStatus for \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\": not found" Mar 4 01:05:29.417867 kubelet[2631]: E0304 01:05:29.417816 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\": not found" containerID="d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465" Mar 4 01:05:29.417921 kubelet[2631]: I0304 01:05:29.417869 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465"} err="failed to get container status \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9f5292e66ba88ed57b957c697948b743d1da06f32e800dbcd44ec19072cc465\": not found" Mar 4 01:05:29.417921 kubelet[2631]: I0304 01:05:29.417913 2631 scope.go:117] "RemoveContainer" containerID="aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad" Mar 4 01:05:29.420664 containerd[1479]: time="2026-03-04T01:05:29.420300858Z" level=info msg="RemoveContainer for \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\"" Mar 4 01:05:29.420814 systemd[1]: 
Removed slice kubepods-burstable-podf4622411_eb40_43f9_8c9a_0104a632c61b.slice - libcontainer container kubepods-burstable-podf4622411_eb40_43f9_8c9a_0104a632c61b.slice. Mar 4 01:05:29.420947 systemd[1]: kubepods-burstable-podf4622411_eb40_43f9_8c9a_0104a632c61b.slice: Consumed 18.406s CPU time. Mar 4 01:05:29.426254 containerd[1479]: time="2026-03-04T01:05:29.426174635Z" level=info msg="RemoveContainer for \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\" returns successfully" Mar 4 01:05:29.426451 kubelet[2631]: I0304 01:05:29.426396 2631 scope.go:117] "RemoveContainer" containerID="73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55" Mar 4 01:05:29.429633 containerd[1479]: time="2026-03-04T01:05:29.429269891Z" level=info msg="RemoveContainer for \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\"" Mar 4 01:05:29.440315 containerd[1479]: time="2026-03-04T01:05:29.440185534Z" level=info msg="RemoveContainer for \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\" returns successfully" Mar 4 01:05:29.441516 kubelet[2631]: I0304 01:05:29.440445 2631 scope.go:117] "RemoveContainer" containerID="a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979" Mar 4 01:05:29.444919 containerd[1479]: time="2026-03-04T01:05:29.444494845Z" level=info msg="RemoveContainer for \"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\"" Mar 4 01:05:29.451307 containerd[1479]: time="2026-03-04T01:05:29.451165308Z" level=info msg="RemoveContainer for \"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\" returns successfully" Mar 4 01:05:29.451506 kubelet[2631]: I0304 01:05:29.451459 2631 scope.go:117] "RemoveContainer" containerID="b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57" Mar 4 01:05:29.453112 containerd[1479]: time="2026-03-04T01:05:29.453062306Z" level=info msg="RemoveContainer for \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\"" Mar 4 
01:05:29.458752 containerd[1479]: time="2026-03-04T01:05:29.458659560Z" level=info msg="RemoveContainer for \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\" returns successfully" Mar 4 01:05:29.459217 kubelet[2631]: I0304 01:05:29.459113 2631 scope.go:117] "RemoveContainer" containerID="7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543" Mar 4 01:05:29.461021 containerd[1479]: time="2026-03-04T01:05:29.460967927Z" level=info msg="RemoveContainer for \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\"" Mar 4 01:05:29.466150 containerd[1479]: time="2026-03-04T01:05:29.466024631Z" level=info msg="RemoveContainer for \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\" returns successfully" Mar 4 01:05:29.466631 kubelet[2631]: I0304 01:05:29.466423 2631 scope.go:117] "RemoveContainer" containerID="aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad" Mar 4 01:05:29.466923 containerd[1479]: time="2026-03-04T01:05:29.466790368Z" level=error msg="ContainerStatus for \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\": not found" Mar 4 01:05:29.467251 kubelet[2631]: E0304 01:05:29.467192 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\": not found" containerID="aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad" Mar 4 01:05:29.467251 kubelet[2631]: I0304 01:05:29.467227 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad"} err="failed to get container status \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\": 
rpc error: code = NotFound desc = an error occurred when try to find container \"aa8e12d338740dac293056bf034f470ce07a6703bfdc4270a7718aa4915e21ad\": not found" Mar 4 01:05:29.467331 kubelet[2631]: I0304 01:05:29.467254 2631 scope.go:117] "RemoveContainer" containerID="73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55" Mar 4 01:05:29.467654 containerd[1479]: time="2026-03-04T01:05:29.467441984Z" level=error msg="ContainerStatus for \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\": not found" Mar 4 01:05:29.467911 kubelet[2631]: E0304 01:05:29.467860 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\": not found" containerID="73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55" Mar 4 01:05:29.467973 kubelet[2631]: I0304 01:05:29.467923 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55"} err="failed to get container status \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\": rpc error: code = NotFound desc = an error occurred when try to find container \"73a4af86a4afbc07b45cccf468d873366954cb1713377211875a110d24317c55\": not found" Mar 4 01:05:29.467973 kubelet[2631]: I0304 01:05:29.467944 2631 scope.go:117] "RemoveContainer" containerID="a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979" Mar 4 01:05:29.468270 containerd[1479]: time="2026-03-04T01:05:29.468215188Z" level=error msg="ContainerStatus for \"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\": not found" Mar 4 01:05:29.468662 kubelet[2631]: E0304 01:05:29.468393 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\": not found" containerID="a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979" Mar 4 01:05:29.468662 kubelet[2631]: I0304 01:05:29.468443 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979"} err="failed to get container status \"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8916be1fba6161ed1da213481415cbbd07ab951056ff32d97c4a85dbf0c2979\": not found" Mar 4 01:05:29.468662 kubelet[2631]: I0304 01:05:29.468464 2631 scope.go:117] "RemoveContainer" containerID="b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57" Mar 4 01:05:29.468881 containerd[1479]: time="2026-03-04T01:05:29.468812361Z" level=error msg="ContainerStatus for \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\": not found" Mar 4 01:05:29.469247 kubelet[2631]: E0304 01:05:29.469082 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\": not found" containerID="b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57" Mar 4 01:05:29.469247 kubelet[2631]: I0304 01:05:29.469153 2631 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57"} err="failed to get container status \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8480de12bcfc1f6431434999f6faa8da102f93e8d8ecde6e6e992666641dc57\": not found" Mar 4 01:05:29.469247 kubelet[2631]: I0304 01:05:29.469185 2631 scope.go:117] "RemoveContainer" containerID="7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543" Mar 4 01:05:29.469813 containerd[1479]: time="2026-03-04T01:05:29.469746387Z" level=error msg="ContainerStatus for \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\": not found" Mar 4 01:05:29.470129 kubelet[2631]: E0304 01:05:29.470035 2631 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\": not found" containerID="7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543" Mar 4 01:05:29.470129 kubelet[2631]: I0304 01:05:29.470110 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543"} err="failed to get container status \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f7a0467a681cb0ee7e160bcb42950ad1765e59895ffc59380b5c44369a1f543\": not found" Mar 4 01:05:29.892893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c78af0940be170c20ec73a1dd71b58b90ed3e9b27993233b2d2d36e87aea633-rootfs.mount: Deactivated successfully. 
Mar 4 01:05:29.893085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69-rootfs.mount: Deactivated successfully. Mar 4 01:05:29.893191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2b7b2f4953397bd94681b3ce6c9eaf91ed24957de7e8e7428374aac4549ec69-shm.mount: Deactivated successfully. Mar 4 01:05:29.893281 systemd[1]: var-lib-kubelet-pods-f485ac81\x2d9446\x2d4bf0\x2db8ea\x2d2042137505a1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkbb44.mount: Deactivated successfully. Mar 4 01:05:29.893351 systemd[1]: var-lib-kubelet-pods-f4622411\x2deb40\x2d43f9\x2d8c9a\x2d0104a632c61b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd559l.mount: Deactivated successfully. Mar 4 01:05:29.893449 systemd[1]: var-lib-kubelet-pods-f4622411\x2deb40\x2d43f9\x2d8c9a\x2d0104a632c61b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 4 01:05:29.893519 systemd[1]: var-lib-kubelet-pods-f4622411\x2deb40\x2d43f9\x2d8c9a\x2d0104a632c61b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 4 01:05:30.448972 kubelet[2631]: I0304 01:05:30.448802 2631 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4622411-eb40-43f9-8c9a-0104a632c61b" path="/var/lib/kubelet/pods/f4622411-eb40-43f9-8c9a-0104a632c61b/volumes" Mar 4 01:05:30.450319 kubelet[2631]: I0304 01:05:30.450207 2631 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f485ac81-9446-4bf0-b8ea-2042137505a1" path="/var/lib/kubelet/pods/f485ac81-9446-4bf0-b8ea-2042137505a1/volumes" Mar 4 01:05:30.793226 sshd[4525]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:30.806734 systemd[1]: sshd@32-10.0.0.28:22-10.0.0.1:60388.service: Deactivated successfully. Mar 4 01:05:30.809269 systemd[1]: session-33.scope: Deactivated successfully. Mar 4 01:05:30.812087 systemd-logind[1464]: Session 33 logged out. 
Waiting for processes to exit. Mar 4 01:05:30.820344 systemd[1]: Started sshd@33-10.0.0.28:22-10.0.0.1:60396.service - OpenSSH per-connection server daemon (10.0.0.1:60396). Mar 4 01:05:30.823197 systemd-logind[1464]: Removed session 33. Mar 4 01:05:30.895886 sshd[4690]: Accepted publickey for core from 10.0.0.1 port 60396 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:30.898817 sshd[4690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:30.908681 systemd-logind[1464]: New session 34 of user core. Mar 4 01:05:30.920008 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 4 01:05:31.403004 sshd[4690]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:31.418302 systemd[1]: sshd@33-10.0.0.28:22-10.0.0.1:60396.service: Deactivated successfully. Mar 4 01:05:31.422494 systemd[1]: session-34.scope: Deactivated successfully. Mar 4 01:05:31.429189 systemd-logind[1464]: Session 34 logged out. Waiting for processes to exit. Mar 4 01:05:31.439286 systemd[1]: Started sshd@34-10.0.0.28:22-10.0.0.1:60404.service - OpenSSH per-connection server daemon (10.0.0.1:60404). Mar 4 01:05:31.442961 systemd-logind[1464]: Removed session 34. Mar 4 01:05:31.481649 systemd[1]: Created slice kubepods-burstable-podbf2e7cff_d177_4573_beb6_48f55e55a947.slice - libcontainer container kubepods-burstable-podbf2e7cff_d177_4573_beb6_48f55e55a947.slice. 
Mar 4 01:05:31.486129 kubelet[2631]: I0304 01:05:31.484415 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf2e7cff-d177-4573-beb6-48f55e55a947-bpf-maps\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.486129 kubelet[2631]: I0304 01:05:31.484448 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf2e7cff-d177-4573-beb6-48f55e55a947-lib-modules\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.486129 kubelet[2631]: I0304 01:05:31.484465 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf2e7cff-d177-4573-beb6-48f55e55a947-cilium-config-path\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.486129 kubelet[2631]: I0304 01:05:31.484478 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf2e7cff-d177-4573-beb6-48f55e55a947-cni-path\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.486129 kubelet[2631]: I0304 01:05:31.484491 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf2e7cff-d177-4573-beb6-48f55e55a947-xtables-lock\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.486129 kubelet[2631]: I0304 01:05:31.484506 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf2e7cff-d177-4573-beb6-48f55e55a947-cilium-cgroup\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.487067 kubelet[2631]: I0304 01:05:31.484517 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf2e7cff-d177-4573-beb6-48f55e55a947-etc-cni-netd\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.487067 kubelet[2631]: I0304 01:05:31.484530 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf2e7cff-d177-4573-beb6-48f55e55a947-clustermesh-secrets\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.487067 kubelet[2631]: I0304 01:05:31.484545 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf2e7cff-d177-4573-beb6-48f55e55a947-hostproc\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.487067 kubelet[2631]: I0304 01:05:31.484643 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bf2e7cff-d177-4573-beb6-48f55e55a947-cilium-ipsec-secrets\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.487067 kubelet[2631]: I0304 01:05:31.484658 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf2e7cff-d177-4573-beb6-48f55e55a947-host-proc-sys-net\") pod 
\"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.487067 kubelet[2631]: I0304 01:05:31.484671 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf2e7cff-d177-4573-beb6-48f55e55a947-hubble-tls\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.487242 kubelet[2631]: I0304 01:05:31.484685 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nlxj\" (UniqueName: \"kubernetes.io/projected/bf2e7cff-d177-4573-beb6-48f55e55a947-kube-api-access-4nlxj\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.487242 kubelet[2631]: I0304 01:05:31.484701 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf2e7cff-d177-4573-beb6-48f55e55a947-cilium-run\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.487242 kubelet[2631]: I0304 01:05:31.484714 2631 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf2e7cff-d177-4573-beb6-48f55e55a947-host-proc-sys-kernel\") pod \"cilium-g7qg4\" (UID: \"bf2e7cff-d177-4573-beb6-48f55e55a947\") " pod="kube-system/cilium-g7qg4" Mar 4 01:05:31.530228 sshd[4704]: Accepted publickey for core from 10.0.0.1 port 60404 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:31.533118 sshd[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:31.551012 systemd-logind[1464]: New session 35 of user core. 
Mar 4 01:05:31.555110 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 4 01:05:31.624131 sshd[4704]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:31.635469 systemd[1]: sshd@34-10.0.0.28:22-10.0.0.1:60404.service: Deactivated successfully. Mar 4 01:05:31.638776 systemd[1]: session-35.scope: Deactivated successfully. Mar 4 01:05:31.641491 systemd-logind[1464]: Session 35 logged out. Waiting for processes to exit. Mar 4 01:05:31.658421 systemd[1]: Started sshd@35-10.0.0.28:22-10.0.0.1:60414.service - OpenSSH per-connection server daemon (10.0.0.1:60414). Mar 4 01:05:31.662003 systemd-logind[1464]: Removed session 35. Mar 4 01:05:31.699976 sshd[4716]: Accepted publickey for core from 10.0.0.1 port 60414 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:05:31.702708 sshd[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:05:31.709844 systemd-logind[1464]: New session 36 of user core. Mar 4 01:05:31.725010 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 4 01:05:31.791923 kubelet[2631]: E0304 01:05:31.791826 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:31.794391 containerd[1479]: time="2026-03-04T01:05:31.792496790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7qg4,Uid:bf2e7cff-d177-4573-beb6-48f55e55a947,Namespace:kube-system,Attempt:0,}" Mar 4 01:05:31.831908 containerd[1479]: time="2026-03-04T01:05:31.830710454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:05:31.831908 containerd[1479]: time="2026-03-04T01:05:31.830941639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:05:31.831908 containerd[1479]: time="2026-03-04T01:05:31.831064851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:31.831908 containerd[1479]: time="2026-03-04T01:05:31.831868451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:31.864891 systemd[1]: Started cri-containerd-3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714.scope - libcontainer container 3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714. Mar 4 01:05:31.912928 containerd[1479]: time="2026-03-04T01:05:31.912747199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7qg4,Uid:bf2e7cff-d177-4573-beb6-48f55e55a947,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\"" Mar 4 01:05:31.916091 kubelet[2631]: E0304 01:05:31.915955 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:31.924185 containerd[1479]: time="2026-03-04T01:05:31.923858667Z" level=info msg="CreateContainer within sandbox \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 4 01:05:31.942088 containerd[1479]: time="2026-03-04T01:05:31.941906096Z" level=info msg="CreateContainer within sandbox \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3aa38cc8523f253e7ef7764c900dab6f11d967de6206c38a4b3e72ed9775b95\"" Mar 4 01:05:31.942914 containerd[1479]: time="2026-03-04T01:05:31.942840144Z" level=info msg="StartContainer for 
\"a3aa38cc8523f253e7ef7764c900dab6f11d967de6206c38a4b3e72ed9775b95\"" Mar 4 01:05:31.993020 systemd[1]: Started cri-containerd-a3aa38cc8523f253e7ef7764c900dab6f11d967de6206c38a4b3e72ed9775b95.scope - libcontainer container a3aa38cc8523f253e7ef7764c900dab6f11d967de6206c38a4b3e72ed9775b95. Mar 4 01:05:32.045859 containerd[1479]: time="2026-03-04T01:05:32.045681562Z" level=info msg="StartContainer for \"a3aa38cc8523f253e7ef7764c900dab6f11d967de6206c38a4b3e72ed9775b95\" returns successfully" Mar 4 01:05:32.058157 kubelet[2631]: E0304 01:05:32.057304 2631 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 4 01:05:32.061839 systemd[1]: cri-containerd-a3aa38cc8523f253e7ef7764c900dab6f11d967de6206c38a4b3e72ed9775b95.scope: Deactivated successfully. Mar 4 01:05:32.109168 containerd[1479]: time="2026-03-04T01:05:32.108970075Z" level=info msg="shim disconnected" id=a3aa38cc8523f253e7ef7764c900dab6f11d967de6206c38a4b3e72ed9775b95 namespace=k8s.io Mar 4 01:05:32.109168 containerd[1479]: time="2026-03-04T01:05:32.109076566Z" level=warning msg="cleaning up after shim disconnected" id=a3aa38cc8523f253e7ef7764c900dab6f11d967de6206c38a4b3e72ed9775b95 namespace=k8s.io Mar 4 01:05:32.109168 containerd[1479]: time="2026-03-04T01:05:32.109091955Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:05:32.422615 kubelet[2631]: E0304 01:05:32.422428 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:32.429205 containerd[1479]: time="2026-03-04T01:05:32.428519082Z" level=info msg="CreateContainer within sandbox \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 4 01:05:32.449859 containerd[1479]: 
time="2026-03-04T01:05:32.449717668Z" level=info msg="CreateContainer within sandbox \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d31a71bbba26a7ea8e83920b0cd8e1b8bce953384b40c377df583805e8159cc2\"" Mar 4 01:05:32.450655 containerd[1479]: time="2026-03-04T01:05:32.450506769Z" level=info msg="StartContainer for \"d31a71bbba26a7ea8e83920b0cd8e1b8bce953384b40c377df583805e8159cc2\"" Mar 4 01:05:32.489882 systemd[1]: Started cri-containerd-d31a71bbba26a7ea8e83920b0cd8e1b8bce953384b40c377df583805e8159cc2.scope - libcontainer container d31a71bbba26a7ea8e83920b0cd8e1b8bce953384b40c377df583805e8159cc2. Mar 4 01:05:32.546509 containerd[1479]: time="2026-03-04T01:05:32.546437892Z" level=info msg="StartContainer for \"d31a71bbba26a7ea8e83920b0cd8e1b8bce953384b40c377df583805e8159cc2\" returns successfully" Mar 4 01:05:32.556239 systemd[1]: cri-containerd-d31a71bbba26a7ea8e83920b0cd8e1b8bce953384b40c377df583805e8159cc2.scope: Deactivated successfully. 
Mar 4 01:05:32.600477 containerd[1479]: time="2026-03-04T01:05:32.600347125Z" level=info msg="shim disconnected" id=d31a71bbba26a7ea8e83920b0cd8e1b8bce953384b40c377df583805e8159cc2 namespace=k8s.io Mar 4 01:05:32.600477 containerd[1479]: time="2026-03-04T01:05:32.600464978Z" level=warning msg="cleaning up after shim disconnected" id=d31a71bbba26a7ea8e83920b0cd8e1b8bce953384b40c377df583805e8159cc2 namespace=k8s.io Mar 4 01:05:32.600477 containerd[1479]: time="2026-03-04T01:05:32.600482680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:05:33.428023 kubelet[2631]: E0304 01:05:33.427953 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:33.433792 containerd[1479]: time="2026-03-04T01:05:33.433511309Z" level=info msg="CreateContainer within sandbox \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 4 01:05:33.454364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2507616393.mount: Deactivated successfully. Mar 4 01:05:33.458056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899535747.mount: Deactivated successfully. 
Mar 4 01:05:33.459778 containerd[1479]: time="2026-03-04T01:05:33.459724268Z" level=info msg="CreateContainer within sandbox \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"99ccbb369171e99869bc5614cccadf92728be331c86641b89cfc9e1d82bc0247\"" Mar 4 01:05:33.460420 containerd[1479]: time="2026-03-04T01:05:33.460389712Z" level=info msg="StartContainer for \"99ccbb369171e99869bc5614cccadf92728be331c86641b89cfc9e1d82bc0247\"" Mar 4 01:05:33.505839 systemd[1]: Started cri-containerd-99ccbb369171e99869bc5614cccadf92728be331c86641b89cfc9e1d82bc0247.scope - libcontainer container 99ccbb369171e99869bc5614cccadf92728be331c86641b89cfc9e1d82bc0247. Mar 4 01:05:33.553972 containerd[1479]: time="2026-03-04T01:05:33.553847740Z" level=info msg="StartContainer for \"99ccbb369171e99869bc5614cccadf92728be331c86641b89cfc9e1d82bc0247\" returns successfully" Mar 4 01:05:33.563454 systemd[1]: cri-containerd-99ccbb369171e99869bc5614cccadf92728be331c86641b89cfc9e1d82bc0247.scope: Deactivated successfully. Mar 4 01:05:33.608375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99ccbb369171e99869bc5614cccadf92728be331c86641b89cfc9e1d82bc0247-rootfs.mount: Deactivated successfully. 
Mar 4 01:05:33.622952 containerd[1479]: time="2026-03-04T01:05:33.622787098Z" level=info msg="shim disconnected" id=99ccbb369171e99869bc5614cccadf92728be331c86641b89cfc9e1d82bc0247 namespace=k8s.io Mar 4 01:05:33.622952 containerd[1479]: time="2026-03-04T01:05:33.622886966Z" level=warning msg="cleaning up after shim disconnected" id=99ccbb369171e99869bc5614cccadf92728be331c86641b89cfc9e1d82bc0247 namespace=k8s.io Mar 4 01:05:33.622952 containerd[1479]: time="2026-03-04T01:05:33.622903818Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:05:34.433918 kubelet[2631]: E0304 01:05:34.433689 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:34.444368 containerd[1479]: time="2026-03-04T01:05:34.442529260Z" level=info msg="CreateContainer within sandbox \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 4 01:05:34.465727 containerd[1479]: time="2026-03-04T01:05:34.465487686Z" level=info msg="CreateContainer within sandbox \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca6b030ec0a0cbb4e7af24e18f92c39b1f451649ea3962d8518012c397485820\"" Mar 4 01:05:34.466457 containerd[1479]: time="2026-03-04T01:05:34.466308707Z" level=info msg="StartContainer for \"ca6b030ec0a0cbb4e7af24e18f92c39b1f451649ea3962d8518012c397485820\"" Mar 4 01:05:34.511961 systemd[1]: Started cri-containerd-ca6b030ec0a0cbb4e7af24e18f92c39b1f451649ea3962d8518012c397485820.scope - libcontainer container ca6b030ec0a0cbb4e7af24e18f92c39b1f451649ea3962d8518012c397485820. Mar 4 01:05:34.560829 systemd[1]: cri-containerd-ca6b030ec0a0cbb4e7af24e18f92c39b1f451649ea3962d8518012c397485820.scope: Deactivated successfully. 
Mar 4 01:05:34.567311 containerd[1479]: time="2026-03-04T01:05:34.566992728Z" level=info msg="StartContainer for \"ca6b030ec0a0cbb4e7af24e18f92c39b1f451649ea3962d8518012c397485820\" returns successfully" Mar 4 01:05:34.605064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca6b030ec0a0cbb4e7af24e18f92c39b1f451649ea3962d8518012c397485820-rootfs.mount: Deactivated successfully. Mar 4 01:05:34.614255 containerd[1479]: time="2026-03-04T01:05:34.614087574Z" level=info msg="shim disconnected" id=ca6b030ec0a0cbb4e7af24e18f92c39b1f451649ea3962d8518012c397485820 namespace=k8s.io Mar 4 01:05:34.614255 containerd[1479]: time="2026-03-04T01:05:34.614189597Z" level=warning msg="cleaning up after shim disconnected" id=ca6b030ec0a0cbb4e7af24e18f92c39b1f451649ea3962d8518012c397485820 namespace=k8s.io Mar 4 01:05:34.614255 containerd[1479]: time="2026-03-04T01:05:34.614202400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:05:35.442041 kubelet[2631]: E0304 01:05:35.441549 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:35.444424 kubelet[2631]: E0304 01:05:35.444265 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:35.448911 containerd[1479]: time="2026-03-04T01:05:35.448712096Z" level=info msg="CreateContainer within sandbox \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 4 01:05:35.477111 containerd[1479]: time="2026-03-04T01:05:35.477035308Z" level=info msg="CreateContainer within sandbox \"3cc27f2bd71c47988bf0698a1f3d43dd7cc58f9b224982750303ed4d18270714\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"e2e94bb5a5a1dfa9e72f804763670a36476e603e66ef9aadcb0a91e657f768b5\"" Mar 4 01:05:35.477911 containerd[1479]: time="2026-03-04T01:05:35.477857737Z" level=info msg="StartContainer for \"e2e94bb5a5a1dfa9e72f804763670a36476e603e66ef9aadcb0a91e657f768b5\"" Mar 4 01:05:35.528032 systemd[1]: Started cri-containerd-e2e94bb5a5a1dfa9e72f804763670a36476e603e66ef9aadcb0a91e657f768b5.scope - libcontainer container e2e94bb5a5a1dfa9e72f804763670a36476e603e66ef9aadcb0a91e657f768b5. Mar 4 01:05:35.578927 containerd[1479]: time="2026-03-04T01:05:35.578823670Z" level=info msg="StartContainer for \"e2e94bb5a5a1dfa9e72f804763670a36476e603e66ef9aadcb0a91e657f768b5\" returns successfully" Mar 4 01:05:36.206099 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 4 01:05:36.453031 kubelet[2631]: E0304 01:05:36.452886 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:37.446372 kubelet[2631]: E0304 01:05:37.446178 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:37.790331 kubelet[2631]: E0304 01:05:37.789947 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:40.277288 systemd-networkd[1398]: lxc_health: Link UP Mar 4 01:05:40.287182 systemd[1]: run-containerd-runc-k8s.io-e2e94bb5a5a1dfa9e72f804763670a36476e603e66ef9aadcb0a91e657f768b5-runc.BlB1XE.mount: Deactivated successfully. 
Mar 4 01:05:40.291961 systemd-networkd[1398]: lxc_health: Gained carrier Mar 4 01:05:40.448743 kubelet[2631]: E0304 01:05:40.448519 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:41.512200 systemd-networkd[1398]: lxc_health: Gained IPv6LL Mar 4 01:05:41.788100 kubelet[2631]: E0304 01:05:41.787921 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:41.809671 kubelet[2631]: I0304 01:05:41.809547 2631 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g7qg4" podStartSLOduration=10.809532675 podStartE2EDuration="10.809532675s" podCreationTimestamp="2026-03-04 01:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:05:36.488108384 +0000 UTC m=+220.557556269" watchObservedRunningTime="2026-03-04 01:05:41.809532675 +0000 UTC m=+225.878980561" Mar 4 01:05:42.471663 kubelet[2631]: E0304 01:05:42.471402 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:43.477249 kubelet[2631]: E0304 01:05:43.477109 2631 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:05:44.714870 kubelet[2631]: E0304 01:05:44.714369 2631 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53060->127.0.0.1:42485: write tcp 127.0.0.1:53060->127.0.0.1:42485: write: broken pipe Mar 4 01:05:46.881396 sshd[4716]: pam_unix(sshd:session): session closed for user core Mar 4 01:05:46.886925 
systemd[1]: sshd@35-10.0.0.28:22-10.0.0.1:60414.service: Deactivated successfully. Mar 4 01:05:46.889238 systemd[1]: session-36.scope: Deactivated successfully. Mar 4 01:05:46.890405 systemd-logind[1464]: Session 36 logged out. Waiting for processes to exit. Mar 4 01:05:46.891976 systemd-logind[1464]: Removed session 36.