Mar 13 00:49:31.199679 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:49:31.199703 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:49:31.199715 kernel: BIOS-provided physical RAM map:
Mar 13 00:49:31.199721 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 13 00:49:31.199727 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 13 00:49:31.199733 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 13 00:49:31.199740 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 13 00:49:31.199746 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 13 00:49:31.199752 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 13 00:49:31.199757 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 13 00:49:31.199763 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:49:31.199772 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 13 00:49:31.199778 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:49:31.199784 kernel: NX (Execute Disable) protection: active
Mar 13 00:49:31.199791 kernel: APIC: Static calls initialized
Mar 13 00:49:31.199797 kernel: SMBIOS 2.8 present.
Mar 13 00:49:31.199806 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 13 00:49:31.199812 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:49:31.199819 kernel: Hypervisor detected: KVM
Mar 13 00:49:31.199825 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 13 00:49:31.199831 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:49:31.199837 kernel: kvm-clock: using sched offset of 9795420631 cycles
Mar 13 00:49:31.199844 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:49:31.199850 kernel: tsc: Detected 2445.426 MHz processor
Mar 13 00:49:31.199857 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:49:31.199864 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:49:31.199873 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 13 00:49:31.199880 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 13 00:49:31.199886 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:49:31.199958 kernel: Using GB pages for direct mapping
Mar 13 00:49:31.199965 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:49:31.199971 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 13 00:49:31.199977 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:49:31.199984 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:49:31.199990 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:49:31.200001 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 13 00:49:31.200008 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:49:31.200014 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:49:31.200021 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:49:31.200027 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:49:31.200037 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 13 00:49:31.200045 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 13 00:49:31.200055 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 13 00:49:31.200061 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 13 00:49:31.200068 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 13 00:49:31.200075 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 13 00:49:31.200082 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 13 00:49:31.200090 kernel: No NUMA configuration found
Mar 13 00:49:31.200103 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 13 00:49:31.200119 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 13 00:49:31.200131 kernel: Zone ranges:
Mar 13 00:49:31.200144 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:49:31.200156 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 13 00:49:31.200168 kernel: Normal empty
Mar 13 00:49:31.200179 kernel: Device empty
Mar 13 00:49:31.200191 kernel: Movable zone start for each node
Mar 13 00:49:31.200202 kernel: Early memory node ranges
Mar 13 00:49:31.200214 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 13 00:49:31.200226 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 13 00:49:31.200244 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 13 00:49:31.200253 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:49:31.200260 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 13 00:49:31.200267 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 13 00:49:31.200275 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 00:49:31.200287 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:49:31.200299 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:49:31.200312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 00:49:31.200324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:49:31.200341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:49:31.200354 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:49:31.200366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:49:31.200377 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:49:31.200387 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 13 00:49:31.200394 kernel: TSC deadline timer available
Mar 13 00:49:31.200401 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:49:31.200407 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:49:31.200415 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:49:31.200421 kernel: CPU topo: Max. threads per core: 1
Mar 13 00:49:31.200431 kernel: CPU topo: Num. cores per package: 4
Mar 13 00:49:31.200438 kernel: CPU topo: Num. threads per package: 4
Mar 13 00:49:31.200445 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 13 00:49:31.200451 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:49:31.200458 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 13 00:49:31.200465 kernel: kvm-guest: setup PV sched yield
Mar 13 00:49:31.200472 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 13 00:49:31.200478 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:49:31.200485 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:49:31.200495 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 13 00:49:31.200502 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 13 00:49:31.200508 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 13 00:49:31.200515 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 13 00:49:31.200521 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:49:31.200528 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:49:31.200536 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:49:31.200543 kernel: random: crng init done
Mar 13 00:49:31.200595 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:49:31.200602 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:49:31.200609 kernel: Fallback order for Node 0: 0
Mar 13 00:49:31.200615 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 13 00:49:31.200622 kernel: Policy zone: DMA32
Mar 13 00:49:31.200629 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:49:31.200636 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 13 00:49:31.200643 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:49:31.200649 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:49:31.200659 kernel: Dynamic Preempt: voluntary
Mar 13 00:49:31.200666 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:49:31.200673 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:49:31.200680 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 13 00:49:31.200687 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:49:31.200694 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:49:31.200700 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:49:31.200707 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:49:31.200714 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 13 00:49:31.200721 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:49:31.200730 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:49:31.200737 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:49:31.200744 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 13 00:49:31.200751 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:49:31.200766 kernel: Console: colour VGA+ 80x25
Mar 13 00:49:31.200776 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:49:31.200783 kernel: ACPI: Core revision 20240827
Mar 13 00:49:31.200790 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 13 00:49:31.200797 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:49:31.200804 kernel: x2apic enabled
Mar 13 00:49:31.200811 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:49:31.200820 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 13 00:49:31.200828 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 13 00:49:31.200834 kernel: kvm-guest: setup PV IPIs
Mar 13 00:49:31.200841 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 13 00:49:31.200849 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 13 00:49:31.200859 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 13 00:49:31.200865 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:49:31.200873 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 13 00:49:31.200880 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 13 00:49:31.200887 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:49:31.200994 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:49:31.201002 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:49:31.201009 kernel: Speculative Store Bypass: Vulnerable
Mar 13 00:49:31.201016 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 13 00:49:31.201028 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 13 00:49:31.201035 kernel: active return thunk: srso_alias_return_thunk
Mar 13 00:49:31.201042 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 13 00:49:31.201049 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 13 00:49:31.201056 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:49:31.201064 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:49:31.201071 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:49:31.201078 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:49:31.201088 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:49:31.201095 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 13 00:49:31.201102 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:49:31.201109 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:49:31.201116 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:49:31.201123 kernel: landlock: Up and running.
Mar 13 00:49:31.201130 kernel: SELinux: Initializing.
Mar 13 00:49:31.201137 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:49:31.201144 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:49:31.201154 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 13 00:49:31.201161 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 13 00:49:31.201168 kernel: signal: max sigframe size: 1776
Mar 13 00:49:31.201175 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:49:31.201182 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:49:31.201189 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:49:31.201196 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 13 00:49:31.201203 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:49:31.201210 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:49:31.201219 kernel: .... node #0, CPUs: #1 #2 #3
Mar 13 00:49:31.201226 kernel: smp: Brought up 1 node, 4 CPUs
Mar 13 00:49:31.201233 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 13 00:49:31.201241 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved)
Mar 13 00:49:31.201248 kernel: devtmpfs: initialized
Mar 13 00:49:31.201255 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:49:31.201262 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:49:31.201269 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 13 00:49:31.201276 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:49:31.201285 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:49:31.201292 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:49:31.201299 kernel: audit: type=2000 audit(1773362966.293:1): state=initialized audit_enabled=0 res=1
Mar 13 00:49:31.201306 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:49:31.201313 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:49:31.201320 kernel: cpuidle: using governor menu
Mar 13 00:49:31.201327 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:49:31.201334 kernel: dca service started, version 1.12.1
Mar 13 00:49:31.201341 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 13 00:49:31.201351 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 13 00:49:31.201358 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:49:31.201365 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:49:31.201372 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:49:31.201478 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:49:31.201487 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:49:31.201494 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:49:31.201501 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:49:31.201508 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:49:31.201518 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:49:31.201525 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:49:31.201532 kernel: ACPI: Interpreter enabled
Mar 13 00:49:31.201539 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 13 00:49:31.201546 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:49:31.201584 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:49:31.201591 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:49:31.201598 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 00:49:31.201605 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:49:31.202054 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:49:31.202242 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 13 00:49:31.202389 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 13 00:49:31.202400 kernel: PCI host bridge to bus 0000:00
Mar 13 00:49:31.202703 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:49:31.202840 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:49:31.203041 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:49:31.203174 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 13 00:49:31.203323 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 13 00:49:31.203486 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 13 00:49:31.203954 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:49:31.204477 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:49:31.204826 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:49:31.205092 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 13 00:49:31.205303 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 13 00:49:31.205521 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 13 00:49:31.205794 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:49:31.206224 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 13 00:49:31.206447 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 13 00:49:31.206649 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 13 00:49:31.206856 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 13 00:49:31.207433 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 13 00:49:31.207715 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 13 00:49:31.208032 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 13 00:49:31.208259 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 13 00:49:31.208619 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 13 00:49:31.208853 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 13 00:49:31.209169 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 13 00:49:31.209500 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 13 00:49:31.209775 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 13 00:49:31.210186 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:49:31.210449 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 13 00:49:31.210809 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 13 00:49:31.211154 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 13 00:49:31.211374 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 13 00:49:31.211807 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 13 00:49:31.212315 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 13 00:49:31.212337 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:49:31.212351 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:49:31.212363 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:49:31.212383 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:49:31.212394 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 13 00:49:31.212407 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 13 00:49:31.212420 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 13 00:49:31.212431 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 13 00:49:31.212444 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 13 00:49:31.212456 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 13 00:49:31.212469 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 13 00:49:31.212480 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 13 00:49:31.212498 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 13 00:49:31.212509 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 13 00:49:31.212521 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 13 00:49:31.212533 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 13 00:49:31.212546 kernel: iommu: Default domain type: Translated
Mar 13 00:49:31.212608 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:49:31.212620 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:49:31.212632 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:49:31.212644 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 13 00:49:31.212663 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 13 00:49:31.212839 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 13 00:49:31.213064 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 13 00:49:31.213208 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 00:49:31.213217 kernel: vgaarb: loaded
Mar 13 00:49:31.213225 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 13 00:49:31.213232 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 13 00:49:31.213239 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:49:31.213247 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:49:31.213259 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:49:31.213266 kernel: pnp: PnP ACPI init
Mar 13 00:49:31.213534 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 13 00:49:31.213546 kernel: pnp: PnP ACPI: found 6 devices
Mar 13 00:49:31.213589 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:49:31.213597 kernel: NET: Registered PF_INET protocol family
Mar 13 00:49:31.213604 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:49:31.213611 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 00:49:31.213623 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:49:31.213630 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 00:49:31.213637 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 00:49:31.213644 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 00:49:31.213652 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:49:31.213659 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:49:31.213666 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:49:31.213673 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:49:31.213809 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 00:49:31.214007 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 00:49:31.214140 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 00:49:31.214267 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 13 00:49:31.214395 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 13 00:49:31.214522 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 13 00:49:31.214531 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:49:31.214539 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 13 00:49:31.214547 kernel: Initialise system trusted keyrings
Mar 13 00:49:31.214601 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 00:49:31.214608 kernel: Key type asymmetric registered
Mar 13 00:49:31.214615 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:49:31.214622 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:49:31.214629 kernel: io scheduler mq-deadline registered
Mar 13 00:49:31.214636 kernel: io scheduler kyber registered
Mar 13 00:49:31.214643 kernel: io scheduler bfq registered
Mar 13 00:49:31.214650 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:49:31.214658 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 13 00:49:31.214668 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 13 00:49:31.214675 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 13 00:49:31.214682 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:49:31.214689 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:49:31.214696 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 13 00:49:31.214704 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 13 00:49:31.214710 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 13 00:49:31.215054 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 13 00:49:31.215068 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 13 00:49:31.215219 kernel: rtc_cmos 00:04: registered as rtc0
Mar 13 00:49:31.215356 kernel: rtc_cmos 00:04: setting system clock to 2026-03-13T00:49:30 UTC (1773362970)
Mar 13 00:49:31.217198 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 13 00:49:31.217212 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 13 00:49:31.217220 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:49:31.217227 kernel: Segment Routing with IPv6
Mar 13 00:49:31.217235 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:49:31.217242 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:49:31.217254 kernel: Key type dns_resolver registered
Mar 13 00:49:31.217261 kernel: IPI shorthand broadcast: enabled
Mar 13 00:49:31.217268 kernel: sched_clock: Marking stable (4424022263, 464526450)->(5105659274, -217110561)
Mar 13 00:49:31.217275 kernel: registered taskstats version 1
Mar 13 00:49:31.217283 kernel: Loading compiled-in X.509 certificates
Mar 13 00:49:31.217290 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:49:31.217297 kernel: Demotion targets for Node 0: null
Mar 13 00:49:31.217304 kernel: Key type .fscrypt registered
Mar 13 00:49:31.217311 kernel: Key type fscrypt-provisioning registered
Mar 13 00:49:31.217320 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 00:49:31.217328 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:49:31.217335 kernel: ima: No architecture policies found
Mar 13 00:49:31.217342 kernel: clk: Disabling unused clocks
Mar 13 00:49:31.217349 kernel: Warning: unable to open an initial console.
Mar 13 00:49:31.217356 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:49:31.217363 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:49:31.217371 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:49:31.217378 kernel: Run /init as init process
Mar 13 00:49:31.217387 kernel: with arguments:
Mar 13 00:49:31.217394 kernel: /init
Mar 13 00:49:31.217401 kernel: with environment:
Mar 13 00:49:31.217408 kernel: HOME=/
Mar 13 00:49:31.217415 kernel: TERM=linux
Mar 13 00:49:31.217423 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:49:31.217433 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:49:31.217444 systemd[1]: Detected virtualization kvm.
Mar 13 00:49:31.217451 systemd[1]: Detected architecture x86-64.
Mar 13 00:49:31.217459 systemd[1]: Running in initrd.
Mar 13 00:49:31.217466 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:49:31.217474 systemd[1]: Hostname set to .
Mar 13 00:49:31.217481 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:49:31.217489 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:49:31.217496 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:49:31.217517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:49:31.217528 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:49:31.217536 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:49:31.217544 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:49:31.217600 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:49:31.217613 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:49:31.217621 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:49:31.217629 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:49:31.217637 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:49:31.217644 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:49:31.217652 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:49:31.217660 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:49:31.217668 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:49:31.217676 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:49:31.217686 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:49:31.217694 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:49:31.217702 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:49:31.217710 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:49:31.217718 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:49:31.217725 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:49:31.217733 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:49:31.217741 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:49:31.217751 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:49:31.217759 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:49:31.217767 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:49:31.217774 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:49:31.217782 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:49:31.217790 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:49:31.217798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:31.217805 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:49:31.217847 systemd-journald[204]: Collecting audit messages is disabled.
Mar 13 00:49:31.217870 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:49:31.217878 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:49:31.217887 systemd-journald[204]: Journal started
Mar 13 00:49:31.217976 systemd-journald[204]: Runtime Journal (/run/log/journal/5fcd85364acf4acd8206116d623bd5af) is 6M, max 48.3M, 42.2M free.
Mar 13 00:49:31.218122 systemd-modules-load[205]: Inserted module 'overlay'
Mar 13 00:49:31.225718 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:49:31.228018 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:49:31.256224 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:49:31.361341 kernel: hrtimer: interrupt took 6419091 ns
Mar 13 00:49:31.394080 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:49:31.398089 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:49:31.406766 kernel: Bridge firewalling registered
Mar 13 00:49:31.400537 systemd-modules-load[205]: Inserted module 'br_netfilter'
Mar 13 00:49:31.409603 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:49:31.415672 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:49:31.592531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:49:31.603045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:31.618768 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:49:31.634311 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:49:31.654510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:49:31.678424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:49:31.683075 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:49:31.686055 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:49:31.709010 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:49:31.715433 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:49:31.761450 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:49:31.763226 systemd-resolved[233]: Positive Trust Anchors:
Mar 13 00:49:31.763236 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:49:31.763262 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:49:31.768295 systemd-resolved[233]: Defaulting to hostname 'linux'.
Mar 13 00:49:31.771741 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:49:31.782356 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:49:31.966055 kernel: SCSI subsystem initialized
Mar 13 00:49:31.977034 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:49:31.993047 kernel: iscsi: registered transport (tcp)
Mar 13 00:49:32.023040 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:49:32.023113 kernel: QLogic iSCSI HBA Driver
Mar 13 00:49:32.057433 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:49:32.111282 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:49:32.119378 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:49:32.399153 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:49:32.413517 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:49:32.509995 kernel: raid6: avx2x4 gen() 25325 MB/s
Mar 13 00:49:32.527990 kernel: raid6: avx2x2 gen() 28551 MB/s
Mar 13 00:49:32.547877 kernel: raid6: avx2x1 gen() 21259 MB/s
Mar 13 00:49:32.547997 kernel: raid6: using algorithm avx2x2 gen() 28551 MB/s
Mar 13 00:49:32.568122 kernel: raid6: .... xor() 20951 MB/s, rmw enabled
Mar 13 00:49:32.568156 kernel: raid6: using avx2x2 recovery algorithm
Mar 13 00:49:32.785166 kernel: xor: automatically using best checksumming function avx
Mar 13 00:49:33.046051 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:49:33.069606 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:49:33.080859 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:49:33.144838 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Mar 13 00:49:33.152255 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:49:33.162177 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:49:33.201169 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Mar 13 00:49:33.249137 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:49:33.265076 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:49:33.640873 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:49:33.655512 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:49:33.709088 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 13 00:49:33.725129 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 13 00:49:33.740995 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:49:33.741097 kernel: GPT:9289727 != 19775487
Mar 13 00:49:33.741116 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:49:33.741132 kernel: GPT:9289727 != 19775487
Mar 13 00:49:33.745499 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:49:33.745639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:49:33.769050 kernel: libata version 3.00 loaded.
Mar 13 00:49:33.774143 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:49:33.777760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:49:33.778027 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:33.792123 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:33.805375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:33.816396 kernel: ahci 0000:00:1f.2: version 3.0
Mar 13 00:49:33.816855 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 13 00:49:33.812641 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:49:33.828991 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:49:33.841723 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 13 00:49:33.842047 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 13 00:49:33.842301 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 13 00:49:33.877050 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 13 00:49:33.881967 kernel: scsi host0: ahci
Mar 13 00:49:33.897055 kernel: scsi host1: ahci
Mar 13 00:49:33.950697 kernel: scsi host2: ahci
Mar 13 00:49:34.001426 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 13 00:49:34.035991 kernel: scsi host3: ahci
Mar 13 00:49:34.049208 kernel: scsi host4: ahci
Mar 13 00:49:34.070085 kernel: scsi host5: ahci
Mar 13 00:49:34.106441 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Mar 13 00:49:34.106688 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Mar 13 00:49:34.106702 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Mar 13 00:49:34.106713 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Mar 13 00:49:34.106782 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Mar 13 00:49:34.106848 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Mar 13 00:49:34.086256 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 13 00:49:34.114437 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 13 00:49:34.119348 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 13 00:49:34.133571 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 00:49:34.414345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:34.427478 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 13 00:49:34.427519 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 13 00:49:34.427534 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 13 00:49:34.437962 kernel: ata3.00: LPM support broken, forcing max_power
Mar 13 00:49:34.438213 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 13 00:49:34.438233 kernel: ata3.00: applying bridge limits
Mar 13 00:49:34.441689 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 13 00:49:34.449637 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:49:34.484736 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 13 00:49:34.489270 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 13 00:49:34.490304 kernel: ata3.00: LPM support broken, forcing max_power
Mar 13 00:49:34.494968 kernel: ata3.00: configured for UDMA/100
Mar 13 00:49:34.517516 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 13 00:49:34.541270 disk-uuid[618]: Primary Header is updated.
Mar 13 00:49:34.541270 disk-uuid[618]: Secondary Entries is updated.
Mar 13 00:49:34.541270 disk-uuid[618]: Secondary Header is updated.
Mar 13 00:49:34.559281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:49:34.576049 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:49:34.644669 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 13 00:49:34.645114 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 13 00:49:34.713073 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 13 00:49:35.153394 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:49:35.161463 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:49:35.166494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:49:35.183090 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:49:35.194788 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:49:35.246143 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:49:35.573996 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:49:35.574994 disk-uuid[619]: The operation has completed successfully.
Mar 13 00:49:35.624789 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:49:35.625030 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:49:35.666321 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:49:35.697831 sh[648]: Success
Mar 13 00:49:35.734116 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:49:35.734205 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:49:35.739091 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:49:35.758994 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 13 00:49:35.815188 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:49:35.825211 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:49:35.853765 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:49:35.873043 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (661)
Mar 13 00:49:35.873095 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:49:35.878070 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:49:35.904883 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:49:35.905016 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:49:35.908043 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:49:35.909377 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:49:35.915983 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:49:35.917462 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:49:35.954586 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:49:35.994058 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (688)
Mar 13 00:49:36.004711 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:49:36.004765 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:49:36.018199 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:49:36.018250 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:49:36.030982 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:49:36.034300 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:49:36.037072 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:49:36.156285 ignition[745]: Ignition 2.22.0
Mar 13 00:49:36.156350 ignition[745]: Stage: fetch-offline
Mar 13 00:49:36.156518 ignition[745]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:36.156551 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:49:36.156806 ignition[745]: parsed url from cmdline: ""
Mar 13 00:49:36.167104 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:49:36.156812 ignition[745]: no config URL provided
Mar 13 00:49:36.179389 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:49:36.156818 ignition[745]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:49:36.156830 ignition[745]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:49:36.156855 ignition[745]: op(1): [started] loading QEMU firmware config module
Mar 13 00:49:36.156860 ignition[745]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 13 00:49:36.174660 ignition[745]: op(1): [finished] loading QEMU firmware config module
Mar 13 00:49:36.244560 systemd-networkd[839]: lo: Link UP
Mar 13 00:49:36.244637 systemd-networkd[839]: lo: Gained carrier
Mar 13 00:49:36.247133 systemd-networkd[839]: Enumeration completed
Mar 13 00:49:36.248010 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:49:36.249664 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:49:36.249670 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:49:36.254033 systemd-networkd[839]: eth0: Link UP
Mar 13 00:49:36.254373 systemd-networkd[839]: eth0: Gained carrier
Mar 13 00:49:36.254388 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
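The two "found matching network" entries mean systemd-networkd matched eth0 against Flatcar's catch-all fallback unit, which matches on interface name rather than a stable attribute (hence the "potentially unpredictable interface name" warning). A unit of roughly this shape produces that behavior; this is an illustrative sketch, not a verbatim copy of the shipped zz-default.network:

```ini
# Illustrative catch-all .network unit (names and options are assumptions,
# not the exact contents of /usr/lib/systemd/network/zz-default.network)
[Match]
# A glob on Name= is what triggers the "potentially unpredictable
# interface name" warning in the log above
Name=*

[Network]
# Enables the DHCPv4 lease acquisition seen later in the log
DHCP=yes
```

Because it sorts last (zz- prefix), such a unit only applies when no more specific .network file matched the interface first.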
Mar 13 00:49:36.256069 systemd[1]: Reached target network.target - Network.
Mar 13 00:49:36.317016 systemd-networkd[839]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 13 00:49:36.436766 ignition[745]: parsing config with SHA512: 56113839ffd362cc2ba0ba2e2c00178b8981fb02045e7ce1a26cbc2c5f88102b47338bc061ea4276274ae64fb538d3aa0825d98cf5f7707746cc147364525fc3
Mar 13 00:49:36.444097 unknown[745]: fetched base config from "system"
Mar 13 00:49:36.444141 unknown[745]: fetched user config from "qemu"
Mar 13 00:49:36.445103 ignition[745]: fetch-offline: fetch-offline passed
Mar 13 00:49:36.445199 ignition[745]: Ignition finished successfully
Mar 13 00:49:36.460056 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:49:36.460393 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 13 00:49:36.475380 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:49:36.520320 ignition[844]: Ignition 2.22.0
Mar 13 00:49:36.520364 ignition[844]: Stage: kargs
Mar 13 00:49:36.520479 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:36.520491 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:49:36.521173 ignition[844]: kargs: kargs passed
Mar 13 00:49:36.521236 ignition[844]: Ignition finished successfully
Mar 13 00:49:36.540330 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:49:36.547872 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
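The "parsing config with SHA512: 5611…" entry shows Ignition logging a digest of the fetched config, which lets identical configs be recognized across runs and referenced configs be verified against a pinned hash. The general mechanism can be sketched in a few lines of Python (the function name and sample payload are illustrative, not Ignition's actual code):

```python
import hashlib

def digest_matches(raw: bytes, expected_hex: str) -> bool:
    """Return True iff the SHA-512 hex digest of raw equals expected_hex."""
    return hashlib.sha512(raw).hexdigest() == expected_hex

# Hypothetical config payload, stands in for the fw_cfg-provided user config
config = b'{"ignition": {"version": "3.4.0"}}'
digest = hashlib.sha512(config).hexdigest()  # 128 hex characters, like the log line

assert digest_matches(config, digest)            # unmodified payload verifies
assert not digest_matches(config + b" ", digest)  # any change breaks the digest
```

Pinning a digest like this is what makes remotely referenced configs tamper-evident: the fetcher recomputes the hash and rejects the payload on mismatch.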
Mar 13 00:49:36.604820 ignition[852]: Ignition 2.22.0
Mar 13 00:49:36.604862 ignition[852]: Stage: disks
Mar 13 00:49:36.605069 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:36.605081 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:49:36.605794 ignition[852]: disks: disks passed
Mar 13 00:49:36.605838 ignition[852]: Ignition finished successfully
Mar 13 00:49:36.620311 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:49:36.626100 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:49:36.629387 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:49:36.636707 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:49:36.644530 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:49:36.651744 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:49:36.664514 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:49:36.712524 systemd-fsck[862]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 13 00:49:36.720993 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:49:36.727825 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:49:36.898998 kernel: EXT4-fs (vda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:49:36.900045 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:49:36.906414 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:49:36.916752 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:49:36.918379 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:49:36.932360 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:49:36.932461 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:49:36.932496 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:49:36.966117 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:49:36.974173 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:49:36.988413 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (870)
Mar 13 00:49:36.999187 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:49:36.999222 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:49:37.011092 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:49:37.011124 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:49:37.013336 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:49:37.044987 initrd-setup-root[894]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:49:37.058420 initrd-setup-root[901]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:49:37.071235 initrd-setup-root[908]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:49:37.083317 initrd-setup-root[915]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:49:37.253214 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:49:37.260565 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:49:37.263057 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:49:37.293047 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:49:37.302162 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:49:37.315361 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:49:37.339431 ignition[983]: INFO : Ignition 2.22.0
Mar 13 00:49:37.339431 ignition[983]: INFO : Stage: mount
Mar 13 00:49:37.348535 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:37.348535 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:49:37.348535 ignition[983]: INFO : mount: mount passed
Mar 13 00:49:37.348535 ignition[983]: INFO : Ignition finished successfully
Mar 13 00:49:37.343671 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:49:37.350008 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:49:37.903271 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:49:37.937040 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (996)
Mar 13 00:49:37.943543 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:49:37.943568 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:49:37.954432 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:49:37.954460 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:49:37.957138 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:49:38.012391 ignition[1013]: INFO : Ignition 2.22.0
Mar 13 00:49:38.012391 ignition[1013]: INFO : Stage: files
Mar 13 00:49:38.017502 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:38.017502 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:49:38.017502 ignition[1013]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:49:38.017502 ignition[1013]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:49:38.017502 ignition[1013]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:49:38.037074 ignition[1013]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:49:38.037074 ignition[1013]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:49:38.037074 ignition[1013]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:49:38.037074 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:49:38.037074 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:49:38.020781 unknown[1013]: wrote ssh authorized keys file for user: core
Mar 13 00:49:38.084238 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:49:38.147434 systemd-networkd[839]: eth0: Gained IPv6LL
Mar 13 00:49:38.204730 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:49:38.204730 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 00:49:38.217215 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 13 00:49:38.335712 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 13 00:49:38.607463 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 00:49:38.607463 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:49:38.624728 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 13 00:49:38.841397 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 13 00:49:39.407877 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:49:39.407877 ignition[1013]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 13 00:49:39.419791 ignition[1013]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:49:39.429530 ignition[1013]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:49:39.429530 ignition[1013]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 13 00:49:39.429530 ignition[1013]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 13 00:49:39.429530 ignition[1013]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 13 00:49:39.455715 ignition[1013]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 13 00:49:39.455715 ignition[1013]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 13 00:49:39.455715 ignition[1013]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 13 00:49:39.496563 ignition[1013]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 13 00:49:39.507583 ignition[1013]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 13 00:49:39.512953 ignition[1013]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 13 00:49:39.512953 ignition[1013]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:49:39.512953 ignition[1013]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:49:39.512953 ignition[1013]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:49:39.512953 ignition[1013]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:49:39.512953 ignition[1013]: INFO : files: files passed
Mar 13 00:49:39.512953 ignition[1013]: INFO : Ignition finished successfully
Mar 13 00:49:39.543816 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:49:39.551884 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:49:39.560204 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:49:39.581236 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:49:39.581398 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:49:39.593159 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 13 00:49:39.597765 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:49:39.607162 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:49:39.607162 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:49:39.601061 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:49:39.607780 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:49:39.613693 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:49:39.691357 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:49:39.691619 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:49:39.699886 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:49:39.704779 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:49:39.712346 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:49:39.714125 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:49:39.763368 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:49:39.765419 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:49:39.811500 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:49:39.816773 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:49:39.821429 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:49:39.833592 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:49:39.833956 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:49:39.850163 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:49:39.852358 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:49:39.861609 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:49:39.872215 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:49:39.872593 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:49:39.880605 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:49:39.888068 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:49:39.905096 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:49:39.911781 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:49:39.918622 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:49:39.926769 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:49:39.934584 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:49:39.934790 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:49:39.955429 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:49:39.955871 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:49:39.965845 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:49:39.966507 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:49:39.981307 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:49:39.981611 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:49:39.995187 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 00:49:39.995452 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:49:39.997012 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 00:49:40.000840 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 00:49:40.005371 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:49:40.024494 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 00:49:40.028136 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 00:49:40.039429 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 00:49:40.039692 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:49:40.051558 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 00:49:40.051840 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:49:40.055428 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 00:49:40.055558 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:49:40.070182 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 00:49:40.070323 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 00:49:40.075141 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 00:49:40.088229 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 00:49:40.088502 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:49:40.126435 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 00:49:40.131152 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 00:49:40.131344 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:49:40.154603 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 00:49:40.154796 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:49:40.169222 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 00:49:40.174139 ignition[1070]: INFO : Ignition 2.22.0
Mar 13 00:49:40.174139 ignition[1070]: INFO : Stage: umount
Mar 13 00:49:40.174139 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:49:40.174139 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:49:40.177529 ignition[1070]: INFO : umount: umount passed
Mar 13 00:49:40.177529 ignition[1070]: INFO : Ignition finished successfully
Mar 13 00:49:40.174724 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 00:49:40.174952 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 00:49:40.205454 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 00:49:40.205720 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 00:49:40.213707 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 00:49:40.214035 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 00:49:40.225764 systemd[1]: Stopped target network.target - Network.
Mar 13 00:49:40.226540 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 00:49:40.226624 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 00:49:40.229845 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 00:49:40.230007 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 00:49:40.239057 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 00:49:40.239135 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 00:49:40.251295 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 00:49:40.251371 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 00:49:40.254954 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 00:49:40.255021 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 00:49:40.257217 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 00:49:40.272630 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 00:49:40.285114 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 00:49:40.285318 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 00:49:40.297019 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 00:49:40.297370 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 00:49:40.297556 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 00:49:40.317436 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:49:40.320528 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:49:40.330128 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:49:40.330234 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:49:40.354967 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:49:40.365258 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:49:40.365432 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:49:40.380310 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:49:40.380430 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:49:40.394066 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:49:40.394128 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:49:40.402832 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 00:49:40.402986 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:49:40.430492 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:49:40.486810 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:49:40.487029 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:49:40.518168 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:49:40.518577 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:49:40.523281 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:49:40.523489 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:49:40.525034 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:49:40.525134 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:49:40.534720 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:49:40.534791 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:49:40.545756 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:49:40.545840 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:49:40.560298 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:49:40.560387 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:49:40.571042 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:49:40.571132 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:49:40.593567 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:49:40.597124 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:49:40.597247 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:49:40.611535 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:49:40.611692 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:49:40.623383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:49:40.623481 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:40.642561 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 00:49:40.642707 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 00:49:40.642790 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:49:40.643488 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:49:40.643734 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:49:40.656120 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:49:40.671094 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:49:40.724435 systemd[1]: Switching root.
Mar 13 00:49:40.783691 systemd-journald[204]: Journal stopped
Mar 13 00:49:42.853965 systemd-journald[204]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:49:42.854031 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:49:42.854051 kernel: SELinux: policy capability open_perms=1
Mar 13 00:49:42.854066 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:49:42.854077 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:49:42.854088 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:49:42.854100 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:49:42.854116 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:49:42.854127 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:49:42.854138 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:49:42.854149 kernel: audit: type=1403 audit(1773362981.123:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:49:42.854166 systemd[1]: Successfully loaded SELinux policy in 129.603ms.
Mar 13 00:49:42.854188 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.651ms.
Mar 13 00:49:42.854203 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:49:42.854215 systemd[1]: Detected virtualization kvm.
Mar 13 00:49:42.854226 systemd[1]: Detected architecture x86-64.
Mar 13 00:49:42.854238 systemd[1]: Detected first boot.
Mar 13 00:49:42.854250 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:49:42.854261 kernel: Guest personality initialized and is inactive
Mar 13 00:49:42.854273 zram_generator::config[1116]: No configuration found.
Mar 13 00:49:42.854286 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 00:49:42.854299 kernel: Initialized host personality
Mar 13 00:49:42.854311 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:49:42.854322 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:49:42.854335 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:49:42.854346 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:49:42.854358 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:49:42.854375 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:49:42.854387 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:49:42.854401 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:49:42.854413 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:49:42.854424 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:49:42.854436 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:49:42.854448 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:49:42.854466 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:49:42.854478 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:49:42.854489 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:49:42.854501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:49:42.854516 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:49:42.854528 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:49:42.854545 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:49:42.854557 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:49:42.854569 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:49:42.854580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:49:42.854592 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:49:42.854603 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:49:42.854617 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:49:42.854629 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:49:42.854640 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:49:42.854652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:49:42.854706 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:49:42.854719 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:49:42.854731 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:49:42.854742 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:49:42.854754 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:49:42.854771 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:49:42.854783 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:49:42.854795 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:49:42.854807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:49:42.854818 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:49:42.854830 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:49:42.854842 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:49:42.854853 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:49:42.854865 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:42.854879 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:49:42.854948 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:49:42.854962 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:49:42.854975 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:49:42.854986 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:49:42.854998 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:49:42.855010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:49:42.855022 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:49:42.855038 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:49:42.855050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:49:42.855061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:49:42.855073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:49:42.855086 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:49:42.855098 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:49:42.855110 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:49:42.855121 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:49:42.855133 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:49:42.855147 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:49:42.855159 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:49:42.855171 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:49:42.855182 kernel: ACPI: bus type drm_connector registered
Mar 13 00:49:42.855193 kernel: fuse: init (API version 7.41)
Mar 13 00:49:42.855204 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:49:42.855216 kernel: loop: module loaded
Mar 13 00:49:42.855227 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:49:42.855242 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:49:42.855253 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:49:42.855291 systemd-journald[1201]: Collecting audit messages is disabled.
Mar 13 00:49:42.855318 systemd-journald[1201]: Journal started
Mar 13 00:49:42.855338 systemd-journald[1201]: Runtime Journal (/run/log/journal/5fcd85364acf4acd8206116d623bd5af) is 6M, max 48.3M, 42.2M free.
Mar 13 00:49:42.203577 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:49:42.221848 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 13 00:49:42.222569 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:49:42.860977 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:49:42.890994 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:49:42.891572 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:49:42.897967 systemd[1]: Stopped verity-setup.service.
Mar 13 00:49:42.908981 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:43.060101 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:49:43.065868 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:49:43.071607 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:49:43.077263 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:49:43.081196 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:49:43.085337 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:49:43.089577 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:49:43.093766 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:49:43.099033 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:49:43.104221 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:49:43.104627 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:49:43.109626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:49:43.110094 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:49:43.115237 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:49:43.115639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:49:43.120048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:49:43.120446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:49:43.125407 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:49:43.125992 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:49:43.130319 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:49:43.130785 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:49:43.135319 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:49:43.140317 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:49:43.145781 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:49:43.150747 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:49:43.174569 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:49:43.181155 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:49:43.188512 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:49:43.194224 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:49:43.194274 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:49:43.200801 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:49:43.210139 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:49:43.215975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:49:43.218638 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:49:43.227774 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:49:43.234829 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:49:43.239116 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:49:43.244516 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:49:43.252349 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:49:43.265256 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:49:43.275403 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:49:43.282170 systemd-journald[1201]: Time spent on flushing to /var/log/journal/5fcd85364acf4acd8206116d623bd5af is 34.055ms for 979 entries.
Mar 13 00:49:43.282170 systemd-journald[1201]: System Journal (/var/log/journal/5fcd85364acf4acd8206116d623bd5af) is 8M, max 195.6M, 187.6M free.
Mar 13 00:49:43.362224 systemd-journald[1201]: Received client request to flush runtime journal.
Mar 13 00:49:43.282595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:49:43.295972 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:49:43.302546 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:49:43.309057 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:49:43.328288 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:49:43.338327 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:49:43.351567 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:49:43.377789 kernel: loop0: detected capacity change from 0 to 110984
Mar 13 00:49:43.367411 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:49:43.408631 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:49:43.412717 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:49:43.428962 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:49:43.435784 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:49:43.443849 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:49:43.464020 kernel: loop1: detected capacity change from 0 to 128560
Mar 13 00:49:43.524619 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Mar 13 00:49:43.524700 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Mar 13 00:49:43.534795 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:49:43.536952 kernel: loop2: detected capacity change from 0 to 228704
Mar 13 00:49:43.862000 kernel: loop3: detected capacity change from 0 to 110984
Mar 13 00:49:43.894986 kernel: loop4: detected capacity change from 0 to 128560
Mar 13 00:49:43.922979 kernel: loop5: detected capacity change from 0 to 228704
Mar 13 00:49:43.979041 (sd-merge)[1260]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 13 00:49:43.980347 (sd-merge)[1260]: Merged extensions into '/usr'.
Mar 13 00:49:43.988606 systemd[1]: Reload requested from client PID 1235 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:49:43.988622 systemd[1]: Reloading...
Mar 13 00:49:44.392016 zram_generator::config[1282]: No configuration found.
Mar 13 00:49:44.781050 ldconfig[1230]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:49:44.900384 systemd[1]: Reloading finished in 911 ms.
Mar 13 00:49:44.937744 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:49:44.943344 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:49:44.949131 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:49:45.027171 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:49:45.032069 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:49:45.045765 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:49:45.069103 systemd[1]: Reload requested from client PID 1326 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:49:45.069265 systemd[1]: Reloading...
Mar 13 00:49:45.075167 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:49:45.075661 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:49:45.076493 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:49:45.077422 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:49:45.079988 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:49:45.080827 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Mar 13 00:49:45.081116 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Mar 13 00:49:45.091843 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:49:45.091862 systemd-tmpfiles[1327]: Skipping /boot
Mar 13 00:49:45.113673 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:49:45.113784 systemd-tmpfiles[1327]: Skipping /boot
Mar 13 00:49:45.135351 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Mar 13 00:49:45.152031 zram_generator::config[1350]: No configuration found.
Mar 13 00:49:45.403090 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:49:45.416979 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 13 00:49:45.432002 kernel: ACPI: button: Power Button [PWRF]
Mar 13 00:49:45.439369 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:49:45.439545 systemd[1]: Reloading finished in 369 ms.
Mar 13 00:49:45.446992 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 13 00:49:45.451985 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 13 00:49:45.453189 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:49:45.473857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:49:45.525810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 00:49:45.537464 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:45.542278 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:49:45.555530 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:49:45.560337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:49:45.566453 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:49:45.577336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:49:45.585124 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:49:45.589336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:49:45.591126 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 00:49:45.596099 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:49:45.598545 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:49:45.607315 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:49:45.621465 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:49:45.627410 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:49:45.631769 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:45.634525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:49:45.635007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:49:45.640040 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:49:45.644156 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:49:45.650601 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:49:45.652020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:49:45.684404 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:49:45.718308 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:49:45.723815 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:49:45.733825 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:49:45.738581 augenrules[1478]: No rules
Mar 13 00:49:45.743823 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:49:45.745505 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:49:45.767430 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:45.767839 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:49:45.771204 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:49:45.777747 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:49:45.786202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:49:45.799275 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:49:45.804348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:49:45.804814 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:49:45.835487 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 13 00:49:45.844249 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:49:45.857054 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:49:45.862338 kernel: kvm_amd: TSC scaling supported
Mar 13 00:49:45.862374 kernel: kvm_amd: Nested Virtualization enabled
Mar 13 00:49:45.862388 kernel: kvm_amd: Nested Paging enabled
Mar 13 00:49:45.862413 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 13 00:49:45.862430 kernel: kvm_amd: PMU virtualization is disabled
Mar 13 00:49:45.920812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:49:45.925189 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:49:45.927675 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:49:45.933997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:49:45.934385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:49:45.939629 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:49:45.940125 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:49:45.945111 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:49:45.945388 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:49:45.951312 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:49:45.951584 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:49:45.958419 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:49:45.975265 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:49:45.975395 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:49:45.975439 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:49:45.986980 kernel: EDAC MC: Ver: 3.0.0
Mar 13 00:49:46.006271 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:49:46.110531 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 13 00:49:46.120976 systemd-networkd[1456]: lo: Link UP
Mar 13 00:49:46.121008 systemd-networkd[1456]: lo: Gained carrier
Mar 13 00:49:46.123465 systemd-networkd[1456]: Enumeration completed
Mar 13 00:49:46.124402 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:49:46.124435 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:49:46.125520 systemd-networkd[1456]: eth0: Link UP
Mar 13 00:49:46.125806 systemd-networkd[1456]: eth0: Gained carrier
Mar 13 00:49:46.125845 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:49:46.138809 systemd-resolved[1459]: Positive Trust Anchors:
Mar 13 00:49:46.138846 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:49:46.138875 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:49:46.141120 systemd-networkd[1456]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 13 00:49:46.142214 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection.
Mar 13 00:49:46.830738 systemd-resolved[1459]: Defaulting to hostname 'linux'.
Mar 13 00:49:46.830756 systemd-timesyncd[1490]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 13 00:49:46.830825 systemd-timesyncd[1490]: Initial clock synchronization to Fri 2026-03-13 00:49:46.830648 UTC.
Mar 13 00:49:46.974722 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:49:46.979922 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:49:46.985391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:49:46.991225 systemd[1]: Reached target network.target - Network.
Mar 13 00:49:46.995125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:49:46.999990 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:49:47.004885 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:49:47.009813 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:49:47.014795 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 00:49:47.019439 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:49:47.024541 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:49:47.024601 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:49:47.028454 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:49:47.032794 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:49:47.037519 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:49:47.042646 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:49:47.049064 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:49:47.055088 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:49:47.062321 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:49:47.067725 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:49:47.073573 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:49:47.080696 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:49:47.085593 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:49:47.092667 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:49:47.099386 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:49:47.100446 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:49:47.109861 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:49:47.114234 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:49:47.114392 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:49:47.114421 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:49:47.124008 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:49:47.129792 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 00:49:47.134642 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 00:49:47.143407 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 00:49:47.166232 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 00:49:47.171044 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 00:49:47.172409 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 00:49:47.173789 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 00:49:47.175471 jq[1523]: false
Mar 13 00:49:47.208320 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 00:49:47.214256 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing passwd entry cache
Mar 13 00:49:47.214564 oslogin_cache_refresh[1525]: Refreshing passwd entry cache
Mar 13 00:49:47.217807 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 00:49:47.223049 extend-filesystems[1524]: Found /dev/vda6
Mar 13 00:49:47.228309 extend-filesystems[1524]: Found /dev/vda9
Mar 13 00:49:47.231067 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 00:49:47.232438 extend-filesystems[1524]: Checking size of /dev/vda9
Mar 13 00:49:47.237327 oslogin_cache_refresh[1525]: Failure getting users, quitting
Mar 13 00:49:47.237998 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting users, quitting
Mar 13 00:49:47.237998 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:49:47.237998 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing group entry cache
Mar 13 00:49:47.237352 oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:49:47.237416 oslogin_cache_refresh[1525]: Refreshing group entry cache
Mar 13 00:49:47.241374 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 00:49:47.246300 extend-filesystems[1524]: Resized partition /dev/vda9
Mar 13 00:49:47.250876 extend-filesystems[1547]: resize2fs 1.47.3 (8-Jul-2025)
Mar 13 00:49:47.249715 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 00:49:47.255032 oslogin_cache_refresh[1525]: Failure getting groups, quitting
Mar 13 00:49:47.255134 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting groups, quitting
Mar 13 00:49:47.255134 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:49:47.250326 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 00:49:47.255052 oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:49:47.257325 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 00:49:47.261367 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 13 00:49:47.271742 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 00:49:47.280553 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 00:49:47.286765 jq[1550]: true
Mar 13 00:49:47.289651 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 00:49:47.295711 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 00:49:47.346025 update_engine[1548]: I20260313 00:49:47.331739 1548 main.cc:92] Flatcar Update Engine starting
Mar 13 00:49:47.296026 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 00:49:47.296935 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 13 00:49:47.301750 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 13 00:49:47.306719 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 00:49:47.307354 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 00:49:47.313024 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 00:49:47.313371 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 00:49:47.344282 (ntainerd)[1556]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 00:49:47.347255 systemd-logind[1542]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 13 00:49:47.347294 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 00:49:47.358911 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 13 00:49:47.365386 systemd-logind[1542]: New seat seat0.
Mar 13 00:49:47.367527 tar[1554]: linux-amd64/LICENSE
Mar 13 00:49:47.370639 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 00:49:47.382326 tar[1554]: linux-amd64/helm
Mar 13 00:49:47.385454 extend-filesystems[1547]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 13 00:49:47.385454 extend-filesystems[1547]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 13 00:49:47.385454 extend-filesystems[1547]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 13 00:49:47.402548 extend-filesystems[1524]: Resized filesystem in /dev/vda9
Mar 13 00:49:47.406723 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 13 00:49:47.406876 jq[1555]: true
Mar 13 00:49:47.388603 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 13 00:49:47.406714 dbus-daemon[1521]: [system] SELinux support is enabled
Mar 13 00:49:47.388904 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 13 00:49:47.409522 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 00:49:47.417899 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 00:49:47.418041 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 00:49:47.423086 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 00:49:47.423110 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 00:49:47.430674 update_engine[1548]: I20260313 00:49:47.430579 1548 update_check_scheduler.cc:74] Next update check in 2m5s
Mar 13 00:49:47.433507 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 00:49:47.437771 dbus-daemon[1521]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 13 00:49:47.443073 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 00:49:47.465051 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 13 00:49:47.472105 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 13 00:49:47.489075 bash[1594]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:49:47.491486 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 00:49:47.499854 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 13 00:49:47.511110 locksmithd[1579]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 13 00:49:47.515747 systemd[1]: issuegen.service: Deactivated successfully.
Mar 13 00:49:47.516067 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 13 00:49:47.523126 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 13 00:49:47.546195 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 13 00:49:47.553703 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 13 00:49:47.560237 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 13 00:49:47.564818 systemd[1]: Reached target getty.target - Login Prompts.
Mar 13 00:49:47.602350 containerd[1556]: time="2026-03-13T00:49:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 13 00:49:47.603448 containerd[1556]: time="2026-03-13T00:49:47.603375726Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 13 00:49:47.612822 containerd[1556]: time="2026-03-13T00:49:47.612745155Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.913µs"
Mar 13 00:49:47.612865 containerd[1556]: time="2026-03-13T00:49:47.612819654Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 13 00:49:47.612865 containerd[1556]: time="2026-03-13T00:49:47.612847596Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 13 00:49:47.613191 containerd[1556]: time="2026-03-13T00:49:47.613068608Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 13 00:49:47.613191 containerd[1556]: time="2026-03-13T00:49:47.613119774Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 13 00:49:47.613248 containerd[1556]: time="2026-03-13T00:49:47.613224850Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:49:47.613351 containerd[1556]: time="2026-03-13T00:49:47.613296233Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:49:47.613351 containerd[1556]: time="2026-03-13T00:49:47.613340947Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:49:47.613646 containerd[1556]: time="2026-03-13T00:49:47.613590853Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:49:47.613646 containerd[1556]: time="2026-03-13T00:49:47.613644022Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:49:47.613689 containerd[1556]: time="2026-03-13T00:49:47.613655914Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:49:47.613689 containerd[1556]: time="2026-03-13T00:49:47.613663669Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 13 00:49:47.613814 containerd[1556]: time="2026-03-13T00:49:47.613754398Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 13 00:49:47.614259 containerd[1556]: time="2026-03-13T00:49:47.614121142Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:49:47.614350 containerd[1556]: time="2026-03-13T00:49:47.614299485Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:49:47.614350 containerd[1556]: time="2026-03-13T00:49:47.614338718Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 13 00:49:47.614430 containerd[1556]: time="2026-03-13T00:49:47.614386739Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 13 00:49:47.615034 containerd[1556]: time="2026-03-13T00:49:47.614986798Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 13 00:49:47.615192 containerd[1556]: time="2026-03-13T00:49:47.615088588Z" level=info msg="metadata content store policy set" policy=shared
Mar 13 00:49:47.623258 containerd[1556]: time="2026-03-13T00:49:47.623134476Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 13 00:49:47.623315 containerd[1556]: time="2026-03-13T00:49:47.623277183Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 13 00:49:47.623315 containerd[1556]: time="2026-03-13T00:49:47.623297060Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 13 00:49:47.623315 containerd[1556]: time="2026-03-13T00:49:47.623311597Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 13 00:49:47.623370 containerd[1556]: time="2026-03-13T00:49:47.623325703Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 13 00:49:47.623370 containerd[1556]: time="2026-03-13T00:49:47.623339148Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 13 00:49:47.623370 containerd[1556]: time="2026-03-13T00:49:47.623363744Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 13 00:49:47.623417 containerd[1556]: time="2026-03-13T00:49:47.623386106Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 13 00:49:47.623417 containerd[1556]: time="2026-03-13T00:49:47.623401415Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 13 00:49:47.623456 containerd[1556]: time="2026-03-13T00:49:47.623414549Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 13 00:49:47.623456 containerd[1556]: time="2026-03-13T00:49:47.623426702Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 13 00:49:47.623456 containerd[1556]: time="2026-03-13T00:49:47.623441480Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 13 00:49:47.623657 containerd[1556]: time="2026-03-13T00:49:47.623577643Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 13 00:49:47.623657 containerd[1556]: time="2026-03-13T00:49:47.623649016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 13 00:49:47.623704 containerd[1556]: time="2026-03-13T00:49:47.623670066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 13 00:49:47.623704 containerd[1556]: time="2026-03-13T00:49:47.623686086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 13 00:49:47.623737 containerd[1556]: time="2026-03-13T00:49:47.623715531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 13 00:49:47.623737 containerd[1556]: time="2026-03-13T00:49:47.623729727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 13 00:49:47.623777 containerd[1556]: time="2026-03-13T00:49:47.623743403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 13 00:49:47.623802 containerd[1556]: time="2026-03-13T00:49:47.623776595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 13 00:49:47.623824 containerd[1556]: time="2026-03-13T00:49:47.623799808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 13 00:49:47.623824 containerd[1556]: time="2026-03-13T00:49:47.623817080Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 13 00:49:47.623857 containerd[1556]: time="2026-03-13T00:49:47.623832158Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 13 00:49:47.623977 containerd[1556]: time="2026-03-13T00:49:47.623885198Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 13 00:49:47.624004 containerd[1556]: time="2026-03-13T00:49:47.623993099Z" level=info msg="Start snapshots syncer"
Mar 13 00:49:47.625208 containerd[1556]: time="2026-03-13T00:49:47.624028284Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 13 00:49:47.625208 containerd[1556]: time="2026-03-13T00:49:47.624491880Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624571448Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624615981Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624766292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624797581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624815093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624831724Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624850309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624878001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624894742Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624922874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.624938944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.625011009Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.625054390Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:49:47.625373 containerd[1556]: time="2026-03-13T00:49:47.625075639Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:49:47.625599 containerd[1556]: time="2026-03-13T00:49:47.625087942Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:49:47.625599 containerd[1556]: time="2026-03-13T00:49:47.625104674Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:49:47.625599 containerd[1556]: time="2026-03-13T00:49:47.625117758Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 13 00:49:47.625599 containerd[1556]: time="2026-03-13T00:49:47.625131684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 13 00:49:47.625599 containerd[1556]: time="2026-03-13T00:49:47.625245968Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 13 00:49:47.625599 containerd[1556]: time="2026-03-13T00:49:47.625273469Z" level=info msg="runtime interface created"
Mar 13 00:49:47.625599 containerd[1556]: time="2026-03-13T00:49:47.625284650Z" level=info msg="created NRI interface"
Mar 13 00:49:47.625599 containerd[1556]: time="2026-03-13T00:49:47.625296291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 13 00:49:47.625599 containerd[1556]: time="2026-03-13T00:49:47.625311740Z" level=info msg="Connect containerd service"
Mar 13 00:49:47.625599 containerd[1556]: time="2026-03-13T00:49:47.625345062Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 13 00:49:47.629085
containerd[1556]: time="2026-03-13T00:49:47.629016185Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:49:47.902616 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:49:47.910203 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:53696.service - OpenSSH per-connection server daemon (10.0.0.1:53696). Mar 13 00:49:47.931878 containerd[1556]: time="2026-03-13T00:49:47.931839554Z" level=info msg="Start subscribing containerd event" Mar 13 00:49:47.932333 containerd[1556]: time="2026-03-13T00:49:47.932086675Z" level=info msg="Start recovering state" Mar 13 00:49:47.932333 containerd[1556]: time="2026-03-13T00:49:47.932251042Z" level=info msg="Start event monitor" Mar 13 00:49:47.932333 containerd[1556]: time="2026-03-13T00:49:47.932268494Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:49:47.932556 containerd[1556]: time="2026-03-13T00:49:47.932431589Z" level=info msg="Start streaming server" Mar 13 00:49:47.932556 containerd[1556]: time="2026-03-13T00:49:47.932454331Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:49:47.932556 containerd[1556]: time="2026-03-13T00:49:47.932464309Z" level=info msg="runtime interface starting up..." Mar 13 00:49:47.932556 containerd[1556]: time="2026-03-13T00:49:47.932470721Z" level=info msg="starting plugins..." Mar 13 00:49:47.932556 containerd[1556]: time="2026-03-13T00:49:47.932488495Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:49:47.933036 containerd[1556]: time="2026-03-13T00:49:47.933017513Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:49:47.933310 containerd[1556]: time="2026-03-13T00:49:47.933238515Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 13 00:49:47.935917 containerd[1556]: time="2026-03-13T00:49:47.935708024Z" level=info msg="containerd successfully booted in 0.334084s" Mar 13 00:49:47.935804 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:49:47.983321 tar[1554]: linux-amd64/README.md Mar 13 00:49:48.022770 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 53696 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:49:48.027070 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:48.031868 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:49:48.045814 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:49:48.094077 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:49:48.158646 systemd-logind[1542]: New session 1 of user core. Mar 13 00:49:48.178598 systemd-networkd[1456]: eth0: Gained IPv6LL Mar 13 00:49:48.182771 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:49:48.187608 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:49:48.193468 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 13 00:49:48.198814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:49:48.215318 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:49:48.222907 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:49:48.244857 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:49:48.265471 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 13 00:49:48.266788 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Mar 13 00:49:48.272360 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 00:49:48.280919 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:49:48.288256 systemd-logind[1542]: New session c1 of user core. Mar 13 00:49:48.294525 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:49:48.903741 systemd[1648]: Queued start job for default target default.target. Mar 13 00:49:48.916916 systemd[1648]: Created slice app.slice - User Application Slice. Mar 13 00:49:48.917036 systemd[1648]: Reached target paths.target - Paths. Mar 13 00:49:48.917115 systemd[1648]: Reached target timers.target - Timers. Mar 13 00:49:48.919565 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:49:48.968837 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:49:48.969083 systemd[1648]: Reached target sockets.target - Sockets. Mar 13 00:49:48.969130 systemd[1648]: Reached target basic.target - Basic System. Mar 13 00:49:48.969298 systemd[1648]: Reached target default.target - Main User Target. Mar 13 00:49:48.969341 systemd[1648]: Startup finished in 654ms. Mar 13 00:49:48.970230 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:49:48.986514 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:49:49.099790 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:60874.service - OpenSSH per-connection server daemon (10.0.0.1:60874). Mar 13 00:49:49.327403 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 60874 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:49:49.330292 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:49.339831 systemd-logind[1542]: New session 2 of user core. 
Mar 13 00:49:49.361630 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:49:49.613107 sshd[1672]: Connection closed by 10.0.0.1 port 60874 Mar 13 00:49:49.615735 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:49.627720 systemd[1]: sshd@1-10.0.0.136:22-10.0.0.1:60874.service: Deactivated successfully. Mar 13 00:49:49.631466 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:49:49.632938 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:49:49.637804 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:60882.service - OpenSSH per-connection server daemon (10.0.0.1:60882). Mar 13 00:49:49.650663 systemd-logind[1542]: Removed session 2. Mar 13 00:49:49.732785 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 60882 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:49:49.735041 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:49:49.745864 systemd-logind[1542]: New session 3 of user core. Mar 13 00:49:49.761419 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:49:50.093029 sshd[1681]: Connection closed by 10.0.0.1 port 60882 Mar 13 00:49:50.095025 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Mar 13 00:49:50.105655 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:60882.service: Deactivated successfully. Mar 13 00:49:50.110846 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:49:50.112306 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:49:50.114621 systemd-logind[1542]: Removed session 3. Mar 13 00:49:53.088297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:49:53.090848 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 13 00:49:53.091433 systemd[1]: Startup finished in 4.552s (kernel) + 10.336s (initrd) + 11.399s (userspace) = 26.288s. Mar 13 00:49:53.110988 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:49:55.525420 kubelet[1695]: E0313 00:49:55.524797 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:49:55.531382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:49:55.531693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:49:55.532451 systemd[1]: kubelet.service: Consumed 6.147s CPU time, 268.6M memory peak. Mar 13 00:50:00.104871 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:60500.service - OpenSSH per-connection server daemon (10.0.0.1:60500). Mar 13 00:50:00.170865 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 60500 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:50:00.172824 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:00.181429 systemd-logind[1542]: New session 4 of user core. Mar 13 00:50:00.194481 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:50:00.218657 sshd[1707]: Connection closed by 10.0.0.1 port 60500 Mar 13 00:50:00.220976 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:00.230225 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:60500.service: Deactivated successfully. Mar 13 00:50:00.232423 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:50:00.233939 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. 
Mar 13 00:50:00.236839 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:60504.service - OpenSSH per-connection server daemon (10.0.0.1:60504). Mar 13 00:50:00.239345 systemd-logind[1542]: Removed session 4. Mar 13 00:50:00.313341 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 60504 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:50:00.314996 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:00.322420 systemd-logind[1542]: New session 5 of user core. Mar 13 00:50:00.329481 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:50:00.342637 sshd[1716]: Connection closed by 10.0.0.1 port 60504 Mar 13 00:50:00.343519 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:00.354030 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:60504.service: Deactivated successfully. Mar 13 00:50:00.356053 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:50:00.357853 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:50:00.360829 systemd[1]: Started sshd@5-10.0.0.136:22-10.0.0.1:60512.service - OpenSSH per-connection server daemon (10.0.0.1:60512). Mar 13 00:50:00.362659 systemd-logind[1542]: Removed session 5. Mar 13 00:50:00.417590 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 60512 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:50:00.419253 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:00.426403 systemd-logind[1542]: New session 6 of user core. Mar 13 00:50:00.440519 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 13 00:50:00.461820 sshd[1725]: Connection closed by 10.0.0.1 port 60512 Mar 13 00:50:00.461994 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:00.481408 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:60512.service: Deactivated successfully. Mar 13 00:50:00.483664 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:50:00.484962 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:50:00.487759 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:60524.service - OpenSSH per-connection server daemon (10.0.0.1:60524). Mar 13 00:50:00.489489 systemd-logind[1542]: Removed session 6. Mar 13 00:50:00.557940 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 60524 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:50:00.559535 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:00.566443 systemd-logind[1542]: New session 7 of user core. Mar 13 00:50:00.576381 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:50:00.600722 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:50:00.601379 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:50:00.625836 sudo[1735]: pam_unix(sudo:session): session closed for user root Mar 13 00:50:00.628487 sshd[1734]: Connection closed by 10.0.0.1 port 60524 Mar 13 00:50:00.628670 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:00.638798 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:60524.service: Deactivated successfully. Mar 13 00:50:00.641840 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:50:00.645477 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:50:00.647574 systemd[1]: Started sshd@7-10.0.0.136:22-10.0.0.1:60532.service - OpenSSH per-connection server daemon (10.0.0.1:60532). 
Mar 13 00:50:00.649432 systemd-logind[1542]: Removed session 7. Mar 13 00:50:00.730261 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 60532 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:50:00.732227 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:00.741274 systemd-logind[1542]: New session 8 of user core. Mar 13 00:50:00.751449 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:50:00.773841 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:50:00.774558 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:50:00.784982 sudo[1746]: pam_unix(sudo:session): session closed for user root Mar 13 00:50:00.793559 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:50:00.794135 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:50:00.809267 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:50:00.889455 augenrules[1768]: No rules Mar 13 00:50:00.891292 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:50:00.891869 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:50:00.893427 sudo[1745]: pam_unix(sudo:session): session closed for user root Mar 13 00:50:00.895123 sshd[1744]: Connection closed by 10.0.0.1 port 60532 Mar 13 00:50:00.896465 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Mar 13 00:50:00.906331 systemd[1]: sshd@7-10.0.0.136:22-10.0.0.1:60532.service: Deactivated successfully. Mar 13 00:50:00.908374 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:50:00.909605 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. 
Mar 13 00:50:00.912464 systemd[1]: Started sshd@8-10.0.0.136:22-10.0.0.1:60540.service - OpenSSH per-connection server daemon (10.0.0.1:60540). Mar 13 00:50:00.913906 systemd-logind[1542]: Removed session 8. Mar 13 00:50:00.995626 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 60540 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:50:00.998798 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:50:01.006489 systemd-logind[1542]: New session 9 of user core. Mar 13 00:50:01.013381 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:50:01.031867 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:50:01.032364 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:50:01.421759 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:50:01.439722 (dockerd)[1801]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:50:03.909344 dockerd[1801]: time="2026-03-13T00:50:03.909017073Z" level=info msg="Starting up" Mar 13 00:50:03.912769 dockerd[1801]: time="2026-03-13T00:50:03.912620835Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:50:04.161923 dockerd[1801]: time="2026-03-13T00:50:04.161540198Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:50:04.206641 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport43011998-merged.mount: Deactivated successfully. Mar 13 00:50:04.247924 dockerd[1801]: time="2026-03-13T00:50:04.247785755Z" level=info msg="Loading containers: start." 
Mar 13 00:50:04.266208 kernel: Initializing XFRM netlink socket Mar 13 00:50:05.026529 systemd-networkd[1456]: docker0: Link UP Mar 13 00:50:05.033992 dockerd[1801]: time="2026-03-13T00:50:05.033869041Z" level=info msg="Loading containers: done." Mar 13 00:50:05.067490 dockerd[1801]: time="2026-03-13T00:50:05.067403109Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:50:05.067657 dockerd[1801]: time="2026-03-13T00:50:05.067619944Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:50:05.068240 dockerd[1801]: time="2026-03-13T00:50:05.068039436Z" level=info msg="Initializing buildkit" Mar 13 00:50:05.121397 dockerd[1801]: time="2026-03-13T00:50:05.121328629Z" level=info msg="Completed buildkit initialization" Mar 13 00:50:05.141092 dockerd[1801]: time="2026-03-13T00:50:05.140944258Z" level=info msg="Daemon has completed initialization" Mar 13 00:50:05.141638 dockerd[1801]: time="2026-03-13T00:50:05.141348367Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:50:05.141768 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:50:05.614728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:50:05.628768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:07.285874 containerd[1556]: time="2026-03-13T00:50:07.285640547Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 13 00:50:07.537393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:50:07.562635 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:50:07.989721 kubelet[2030]: E0313 00:50:07.989264 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:50:07.996066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:50:07.996602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:50:07.997579 systemd[1]: kubelet.service: Consumed 1.790s CPU time, 112.5M memory peak. Mar 13 00:50:08.291746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329371367.mount: Deactivated successfully. Mar 13 00:50:10.204278 containerd[1556]: time="2026-03-13T00:50:10.203971547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:10.205014 containerd[1556]: time="2026-03-13T00:50:10.204890016Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 13 00:50:10.206890 containerd[1556]: time="2026-03-13T00:50:10.206687201Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:10.211695 containerd[1556]: time="2026-03-13T00:50:10.211604399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:10.212555 containerd[1556]: 
time="2026-03-13T00:50:10.212491227Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.926729885s" Mar 13 00:50:10.212555 containerd[1556]: time="2026-03-13T00:50:10.212528427Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 13 00:50:10.215449 containerd[1556]: time="2026-03-13T00:50:10.215344022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 13 00:50:12.303086 containerd[1556]: time="2026-03-13T00:50:12.302957709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:12.305425 containerd[1556]: time="2026-03-13T00:50:12.305162873Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 13 00:50:12.306892 containerd[1556]: time="2026-03-13T00:50:12.306810169Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:12.311867 containerd[1556]: time="2026-03-13T00:50:12.311741752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:12.313673 containerd[1556]: time="2026-03-13T00:50:12.313530190Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id 
\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.098113372s" Mar 13 00:50:12.313673 containerd[1556]: time="2026-03-13T00:50:12.313604719Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 13 00:50:12.314614 containerd[1556]: time="2026-03-13T00:50:12.314394734Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 13 00:50:14.309816 containerd[1556]: time="2026-03-13T00:50:14.309672240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:14.310869 containerd[1556]: time="2026-03-13T00:50:14.310319195Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 13 00:50:14.316632 containerd[1556]: time="2026-03-13T00:50:14.316371642Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:14.321514 containerd[1556]: time="2026-03-13T00:50:14.321296402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:14.322806 containerd[1556]: time="2026-03-13T00:50:14.322623148Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 2.008195483s" Mar 13 00:50:14.322806 containerd[1556]: time="2026-03-13T00:50:14.322706765Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 13 00:50:14.326137 containerd[1556]: time="2026-03-13T00:50:14.325856691Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 13 00:50:17.620462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount542922166.mount: Deactivated successfully. Mar 13 00:50:18.108378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 00:50:18.112315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:19.475331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:50:19.488342 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:50:20.588547 kubelet[2121]: E0313 00:50:20.588325 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:50:20.597835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:50:20.598839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:50:20.600590 systemd[1]: kubelet.service: Consumed 1.663s CPU time, 112.8M memory peak. 
Mar 13 00:50:21.374844 containerd[1556]: time="2026-03-13T00:50:21.374012773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:21.376635 containerd[1556]: time="2026-03-13T00:50:21.375689554Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 13 00:50:21.377938 containerd[1556]: time="2026-03-13T00:50:21.377827980Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:21.382716 containerd[1556]: time="2026-03-13T00:50:21.382497005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:21.383973 containerd[1556]: time="2026-03-13T00:50:21.383826214Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 7.057924249s" Mar 13 00:50:21.384232 containerd[1556]: time="2026-03-13T00:50:21.383932039Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 13 00:50:21.388388 containerd[1556]: time="2026-03-13T00:50:21.387929335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 13 00:50:22.376122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655934082.mount: Deactivated successfully. 
Mar 13 00:50:25.398425 containerd[1556]: time="2026-03-13T00:50:25.398073736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:25.399743 containerd[1556]: time="2026-03-13T00:50:25.399656498Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 13 00:50:25.401337 containerd[1556]: time="2026-03-13T00:50:25.401269862Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:25.405462 containerd[1556]: time="2026-03-13T00:50:25.405372121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:25.409885 containerd[1556]: time="2026-03-13T00:50:25.409780836Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.021825112s" Mar 13 00:50:25.409885 containerd[1556]: time="2026-03-13T00:50:25.409854102Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 13 00:50:25.412481 containerd[1556]: time="2026-03-13T00:50:25.412401578Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 13 00:50:26.274716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2996360148.mount: Deactivated successfully. 
Mar 13 00:50:26.283952 containerd[1556]: time="2026-03-13T00:50:26.283838482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:50:26.284810 containerd[1556]: time="2026-03-13T00:50:26.284771944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 13 00:50:26.286516 containerd[1556]: time="2026-03-13T00:50:26.286290896Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:50:26.288742 containerd[1556]: time="2026-03-13T00:50:26.288562943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:50:26.289951 containerd[1556]: time="2026-03-13T00:50:26.289798921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 877.314902ms" Mar 13 00:50:26.289951 containerd[1556]: time="2026-03-13T00:50:26.289839927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 13 00:50:26.292343 containerd[1556]: time="2026-03-13T00:50:26.292273872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 13 00:50:26.796051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596862100.mount: 
Deactivated successfully. Mar 13 00:50:28.816446 containerd[1556]: time="2026-03-13T00:50:28.816072775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:28.817839 containerd[1556]: time="2026-03-13T00:50:28.817737912Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 13 00:50:28.820049 containerd[1556]: time="2026-03-13T00:50:28.819878564Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:28.826082 containerd[1556]: time="2026-03-13T00:50:28.825812538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:50:28.827765 containerd[1556]: time="2026-03-13T00:50:28.827477096Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.535171717s" Mar 13 00:50:28.827765 containerd[1556]: time="2026-03-13T00:50:28.827559048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 13 00:50:30.604664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 13 00:50:30.607614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:30.916579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:50:30.931876 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:50:30.998007 kubelet[2283]: E0313 00:50:30.997858 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:50:31.002945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:50:31.003379 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:50:31.004067 systemd[1]: kubelet.service: Consumed 315ms CPU time, 110.3M memory peak. Mar 13 00:50:32.176694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:50:32.176950 systemd[1]: kubelet.service: Consumed 315ms CPU time, 110.3M memory peak. Mar 13 00:50:32.180922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:32.217130 systemd[1]: Reload requested from client PID 2299 ('systemctl') (unit session-9.scope)... Mar 13 00:50:32.217252 systemd[1]: Reloading... Mar 13 00:50:32.349434 zram_generator::config[2344]: No configuration found. Mar 13 00:50:32.783922 systemd[1]: Reloading finished in 566 ms. Mar 13 00:50:32.831921 update_engine[1548]: I20260313 00:50:32.831816 1548 update_attempter.cc:509] Updating boot flags... Mar 13 00:50:32.871852 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:50:32.871999 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:50:32.872518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:50:32.872663 systemd[1]: kubelet.service: Consumed 210ms CPU time, 98.4M memory peak. 
Mar 13 00:50:32.877977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:50:33.155289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:50:33.180794 (kubelet)[2405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:50:33.278218 kubelet[2405]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:50:33.278218 kubelet[2405]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:50:33.278218 kubelet[2405]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:50:33.278858 kubelet[2405]: I0313 00:50:33.278470 2405 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:50:34.215617 kubelet[2405]: I0313 00:50:34.215516 2405 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 00:50:34.215617 kubelet[2405]: I0313 00:50:34.215595 2405 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:50:34.216016 kubelet[2405]: I0313 00:50:34.215936 2405 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:50:34.258496 kubelet[2405]: E0313 00:50:34.258383 2405 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:50:34.259401 kubelet[2405]: I0313 00:50:34.259314 2405 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:50:34.275778 kubelet[2405]: I0313 00:50:34.275658 2405 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:50:34.288178 kubelet[2405]: I0313 00:50:34.288029 2405 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 13 00:50:34.289024 kubelet[2405]: I0313 00:50:34.288913 2405 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:50:34.289530 kubelet[2405]: I0313 00:50:34.288989 2405 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:50:34.289795 kubelet[2405]: I0313 00:50:34.289611 2405 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:50:34.289795 
kubelet[2405]: I0313 00:50:34.289626 2405 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 00:50:34.290304 kubelet[2405]: I0313 00:50:34.290120 2405 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:50:34.298753 kubelet[2405]: I0313 00:50:34.298589 2405 kubelet.go:480] "Attempting to sync node with API server" Mar 13 00:50:34.298753 kubelet[2405]: I0313 00:50:34.298684 2405 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:50:34.298976 kubelet[2405]: I0313 00:50:34.298779 2405 kubelet.go:386] "Adding apiserver pod source" Mar 13 00:50:34.301547 kubelet[2405]: I0313 00:50:34.301498 2405 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:50:34.309244 kubelet[2405]: E0313 00:50:34.307296 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:50:34.309244 kubelet[2405]: E0313 00:50:34.307583 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:50:34.309244 kubelet[2405]: I0313 00:50:34.308850 2405 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:50:34.311545 kubelet[2405]: I0313 00:50:34.311442 2405 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:50:34.312553 kubelet[2405]: W0313 
00:50:34.312439 2405 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 13 00:50:34.320395 kubelet[2405]: I0313 00:50:34.320130 2405 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:50:34.320531 kubelet[2405]: I0313 00:50:34.320463 2405 server.go:1289] "Started kubelet" Mar 13 00:50:34.322309 kubelet[2405]: I0313 00:50:34.321863 2405 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:50:34.322685 kubelet[2405]: I0313 00:50:34.322586 2405 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:50:34.322933 kubelet[2405]: I0313 00:50:34.322836 2405 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:50:34.323342 kubelet[2405]: I0313 00:50:34.323282 2405 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:50:34.324805 kubelet[2405]: I0313 00:50:34.324725 2405 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:50:34.329239 kubelet[2405]: I0313 00:50:34.327848 2405 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:50:34.329239 kubelet[2405]: E0313 00:50:34.326889 2405 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c4048b679b560 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-13 00:50:34.320385376 +0000 UTC m=+1.130748228,LastTimestamp:2026-03-13 
00:50:34.320385376 +0000 UTC m=+1.130748228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 13 00:50:34.329239 kubelet[2405]: E0313 00:50:34.328969 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:50:34.329239 kubelet[2405]: I0313 00:50:34.329129 2405 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:50:34.329739 kubelet[2405]: I0313 00:50:34.329624 2405 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:50:34.329947 kubelet[2405]: I0313 00:50:34.329829 2405 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:50:34.330912 kubelet[2405]: E0313 00:50:34.330638 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:50:34.331864 kubelet[2405]: E0313 00:50:34.331747 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="200ms" Mar 13 00:50:34.332508 kubelet[2405]: I0313 00:50:34.332420 2405 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:50:34.332717 kubelet[2405]: I0313 00:50:34.332608 2405 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:50:34.333625 kubelet[2405]: E0313 00:50:34.333439 2405 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:50:34.335733 kubelet[2405]: I0313 00:50:34.335603 2405 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:50:34.368349 kubelet[2405]: I0313 00:50:34.368110 2405 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:50:34.368349 kubelet[2405]: I0313 00:50:34.368237 2405 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:50:34.368349 kubelet[2405]: I0313 00:50:34.368312 2405 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:50:34.371950 kubelet[2405]: I0313 00:50:34.371898 2405 policy_none.go:49] "None policy: Start" Mar 13 00:50:34.372348 kubelet[2405]: I0313 00:50:34.372114 2405 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:50:34.372587 kubelet[2405]: I0313 00:50:34.372421 2405 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:50:34.379315 kubelet[2405]: I0313 00:50:34.379120 2405 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 13 00:50:34.382816 kubelet[2405]: I0313 00:50:34.382496 2405 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 13 00:50:34.382816 kubelet[2405]: I0313 00:50:34.382729 2405 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:50:34.383018 kubelet[2405]: I0313 00:50:34.382998 2405 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 00:50:34.383398 kubelet[2405]: I0313 00:50:34.383378 2405 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:50:34.383616 kubelet[2405]: E0313 00:50:34.383523 2405 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:50:34.384730 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:50:34.386367 kubelet[2405]: E0313 00:50:34.386049 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:50:34.399638 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:50:34.405979 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 13 00:50:34.418323 kubelet[2405]: E0313 00:50:34.418227 2405 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:50:34.418746 kubelet[2405]: I0313 00:50:34.418594 2405 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:50:34.418746 kubelet[2405]: I0313 00:50:34.418676 2405 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:50:34.419621 kubelet[2405]: I0313 00:50:34.419326 2405 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:50:34.422368 kubelet[2405]: E0313 00:50:34.422021 2405 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 13 00:50:34.422368 kubelet[2405]: E0313 00:50:34.422261 2405 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 13 00:50:34.504629 systemd[1]: Created slice kubepods-burstable-pod771bd69946d705c971f849ddf48782b0.slice - libcontainer container kubepods-burstable-pod771bd69946d705c971f849ddf48782b0.slice. Mar 13 00:50:34.522611 kubelet[2405]: I0313 00:50:34.522071 2405 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:50:34.523502 kubelet[2405]: E0313 00:50:34.523422 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Mar 13 00:50:34.532936 kubelet[2405]: E0313 00:50:34.532693 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="400ms" Mar 13 00:50:34.537174 kubelet[2405]: E0313 00:50:34.537009 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:50:34.543449 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 13 00:50:34.546789 kubelet[2405]: E0313 00:50:34.546678 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:50:34.551214 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. 
Mar 13 00:50:34.553749 kubelet[2405]: E0313 00:50:34.553702 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:50:34.631670 kubelet[2405]: I0313 00:50:34.631523 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/771bd69946d705c971f849ddf48782b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"771bd69946d705c971f849ddf48782b0\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:50:34.631670 kubelet[2405]: I0313 00:50:34.631628 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:50:34.631670 kubelet[2405]: I0313 00:50:34.631655 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:50:34.631670 kubelet[2405]: I0313 00:50:34.631680 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:50:34.631956 kubelet[2405]: I0313 00:50:34.631704 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/771bd69946d705c971f849ddf48782b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"771bd69946d705c971f849ddf48782b0\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:50:34.631956 kubelet[2405]: I0313 00:50:34.631727 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:50:34.631956 kubelet[2405]: I0313 00:50:34.631814 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:50:34.631956 kubelet[2405]: I0313 00:50:34.631844 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 13 00:50:34.631956 kubelet[2405]: I0313 00:50:34.631865 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/771bd69946d705c971f849ddf48782b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"771bd69946d705c971f849ddf48782b0\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:50:34.726541 kubelet[2405]: I0313 00:50:34.726370 2405 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:50:34.726813 kubelet[2405]: E0313 
00:50:34.726746 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" Mar 13 00:50:34.838576 kubelet[2405]: E0313 00:50:34.838322 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:34.840511 containerd[1556]: time="2026-03-13T00:50:34.840333076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:771bd69946d705c971f849ddf48782b0,Namespace:kube-system,Attempt:0,}" Mar 13 00:50:34.847636 kubelet[2405]: E0313 00:50:34.847401 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:34.848609 containerd[1556]: time="2026-03-13T00:50:34.848477073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 13 00:50:34.855412 kubelet[2405]: E0313 00:50:34.855319 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:34.855979 containerd[1556]: time="2026-03-13T00:50:34.855908664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 13 00:50:34.888048 containerd[1556]: time="2026-03-13T00:50:34.887762700Z" level=info msg="connecting to shim e1b7c4e73cd36cab96b489fadc6143c719ee019a6f01cae2ff0b953809ac2f21" address="unix:///run/containerd/s/dcd40019962f88fe49c307b247ba28c84ad65c878730bcc1f178e2011697fae9" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:34.907725 
containerd[1556]: time="2026-03-13T00:50:34.907547375Z" level=info msg="connecting to shim 10854c91debb6db3a003175947493303dc0392c47e269c6dda05a0de145d86f1" address="unix:///run/containerd/s/8a0f30ec8bf57d37ce6f6ea397adf7c4cec7cde50f9d13672d9d0160a1b5edc9" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:50:34.924383 containerd[1556]: time="2026-03-13T00:50:34.924249441Z" level=info msg="connecting to shim debce639540ba24cbb63645183d5f676c5f0a3d1e5b412b9c195ec9ca5c3227d" address="unix:///run/containerd/s/ecc70c7e194dd5aed3f2fa2c1d5a1b963784d34de13282d0a2fc765bdfe2becf" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:50:34.935739 kubelet[2405]: E0313 00:50:34.935651 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="800ms"
Mar 13 00:50:34.943536 systemd[1]: Started cri-containerd-e1b7c4e73cd36cab96b489fadc6143c719ee019a6f01cae2ff0b953809ac2f21.scope - libcontainer container e1b7c4e73cd36cab96b489fadc6143c719ee019a6f01cae2ff0b953809ac2f21.
Mar 13 00:50:34.965410 systemd[1]: Started cri-containerd-10854c91debb6db3a003175947493303dc0392c47e269c6dda05a0de145d86f1.scope - libcontainer container 10854c91debb6db3a003175947493303dc0392c47e269c6dda05a0de145d86f1.
Mar 13 00:50:34.974606 systemd[1]: Started cri-containerd-debce639540ba24cbb63645183d5f676c5f0a3d1e5b412b9c195ec9ca5c3227d.scope - libcontainer container debce639540ba24cbb63645183d5f676c5f0a3d1e5b412b9c195ec9ca5c3227d.
Mar 13 00:50:35.055692 containerd[1556]: time="2026-03-13T00:50:35.055591947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:771bd69946d705c971f849ddf48782b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1b7c4e73cd36cab96b489fadc6143c719ee019a6f01cae2ff0b953809ac2f21\""
Mar 13 00:50:35.058281 kubelet[2405]: E0313 00:50:35.058009 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:35.073307 containerd[1556]: time="2026-03-13T00:50:35.073049452Z" level=info msg="CreateContainer within sandbox \"e1b7c4e73cd36cab96b489fadc6143c719ee019a6f01cae2ff0b953809ac2f21\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 13 00:50:35.075309 containerd[1556]: time="2026-03-13T00:50:35.075271637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"10854c91debb6db3a003175947493303dc0392c47e269c6dda05a0de145d86f1\""
Mar 13 00:50:35.077221 kubelet[2405]: E0313 00:50:35.076934 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:35.084120 containerd[1556]: time="2026-03-13T00:50:35.083991114Z" level=info msg="CreateContainer within sandbox \"10854c91debb6db3a003175947493303dc0392c47e269c6dda05a0de145d86f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 13 00:50:35.090838 containerd[1556]: time="2026-03-13T00:50:35.090046518Z" level=info msg="Container 8f7569d07fc1d6c8c4989cf079fbcf610f773532ac8607895a2234fae41f13f7: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:50:35.103214 containerd[1556]: time="2026-03-13T00:50:35.102544583Z" level=info msg="CreateContainer within sandbox \"e1b7c4e73cd36cab96b489fadc6143c719ee019a6f01cae2ff0b953809ac2f21\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f7569d07fc1d6c8c4989cf079fbcf610f773532ac8607895a2234fae41f13f7\""
Mar 13 00:50:35.105844 containerd[1556]: time="2026-03-13T00:50:35.105821066Z" level=info msg="StartContainer for \"8f7569d07fc1d6c8c4989cf079fbcf610f773532ac8607895a2234fae41f13f7\""
Mar 13 00:50:35.107009 containerd[1556]: time="2026-03-13T00:50:35.106928422Z" level=info msg="connecting to shim 8f7569d07fc1d6c8c4989cf079fbcf610f773532ac8607895a2234fae41f13f7" address="unix:///run/containerd/s/dcd40019962f88fe49c307b247ba28c84ad65c878730bcc1f178e2011697fae9" protocol=ttrpc version=3
Mar 13 00:50:35.109823 containerd[1556]: time="2026-03-13T00:50:35.109687024Z" level=info msg="Container e5c7e0a6963f671f04f571d5fa856d64e1f7f993d67705f39749df3565d18092: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:50:35.118392 containerd[1556]: time="2026-03-13T00:50:35.118304723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"debce639540ba24cbb63645183d5f676c5f0a3d1e5b412b9c195ec9ca5c3227d\""
Mar 13 00:50:35.119616 kubelet[2405]: E0313 00:50:35.119595 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:35.123222 containerd[1556]: time="2026-03-13T00:50:35.123055764Z" level=info msg="CreateContainer within sandbox \"10854c91debb6db3a003175947493303dc0392c47e269c6dda05a0de145d86f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e5c7e0a6963f671f04f571d5fa856d64e1f7f993d67705f39749df3565d18092\""
Mar 13 00:50:35.124655 containerd[1556]: time="2026-03-13T00:50:35.124517118Z" level=info msg="StartContainer for \"e5c7e0a6963f671f04f571d5fa856d64e1f7f993d67705f39749df3565d18092\""
Mar 13 00:50:35.124894 containerd[1556]: time="2026-03-13T00:50:35.124733910Z" level=info msg="CreateContainer within sandbox \"debce639540ba24cbb63645183d5f676c5f0a3d1e5b412b9c195ec9ca5c3227d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 13 00:50:35.125958 containerd[1556]: time="2026-03-13T00:50:35.125935712Z" level=info msg="connecting to shim e5c7e0a6963f671f04f571d5fa856d64e1f7f993d67705f39749df3565d18092" address="unix:///run/containerd/s/8a0f30ec8bf57d37ce6f6ea397adf7c4cec7cde50f9d13672d9d0160a1b5edc9" protocol=ttrpc version=3
Mar 13 00:50:35.128745 kubelet[2405]: I0313 00:50:35.128726 2405 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 13 00:50:35.133256 kubelet[2405]: E0313 00:50:35.132016 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost"
Mar 13 00:50:35.134630 systemd[1]: Started cri-containerd-8f7569d07fc1d6c8c4989cf079fbcf610f773532ac8607895a2234fae41f13f7.scope - libcontainer container 8f7569d07fc1d6c8c4989cf079fbcf610f773532ac8607895a2234fae41f13f7.
Mar 13 00:50:35.147704 containerd[1556]: time="2026-03-13T00:50:35.147678888Z" level=info msg="Container 92404bce64ea376aa3a13b9ca8a95ccf5b4cd491cdb127591dbeab77eb679557: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:50:35.159398 containerd[1556]: time="2026-03-13T00:50:35.159372168Z" level=info msg="CreateContainer within sandbox \"debce639540ba24cbb63645183d5f676c5f0a3d1e5b412b9c195ec9ca5c3227d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"92404bce64ea376aa3a13b9ca8a95ccf5b4cd491cdb127591dbeab77eb679557\""
Mar 13 00:50:35.160514 containerd[1556]: time="2026-03-13T00:50:35.160496044Z" level=info msg="StartContainer for \"92404bce64ea376aa3a13b9ca8a95ccf5b4cd491cdb127591dbeab77eb679557\""
Mar 13 00:50:35.161767 containerd[1556]: time="2026-03-13T00:50:35.161743751Z" level=info msg="connecting to shim 92404bce64ea376aa3a13b9ca8a95ccf5b4cd491cdb127591dbeab77eb679557" address="unix:///run/containerd/s/ecc70c7e194dd5aed3f2fa2c1d5a1b963784d34de13282d0a2fc765bdfe2becf" protocol=ttrpc version=3
Mar 13 00:50:35.165360 systemd[1]: Started cri-containerd-e5c7e0a6963f671f04f571d5fa856d64e1f7f993d67705f39749df3565d18092.scope - libcontainer container e5c7e0a6963f671f04f571d5fa856d64e1f7f993d67705f39749df3565d18092.
Mar 13 00:50:35.171424 kubelet[2405]: E0313 00:50:35.171243 2405 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 00:50:35.203221 systemd[1]: Started cri-containerd-92404bce64ea376aa3a13b9ca8a95ccf5b4cd491cdb127591dbeab77eb679557.scope - libcontainer container 92404bce64ea376aa3a13b9ca8a95ccf5b4cd491cdb127591dbeab77eb679557.
Mar 13 00:50:35.237565 containerd[1556]: time="2026-03-13T00:50:35.237413791Z" level=info msg="StartContainer for \"8f7569d07fc1d6c8c4989cf079fbcf610f773532ac8607895a2234fae41f13f7\" returns successfully"
Mar 13 00:50:35.258291 containerd[1556]: time="2026-03-13T00:50:35.258120726Z" level=info msg="StartContainer for \"e5c7e0a6963f671f04f571d5fa856d64e1f7f993d67705f39749df3565d18092\" returns successfully"
Mar 13 00:50:35.318041 containerd[1556]: time="2026-03-13T00:50:35.317892672Z" level=info msg="StartContainer for \"92404bce64ea376aa3a13b9ca8a95ccf5b4cd491cdb127591dbeab77eb679557\" returns successfully"
Mar 13 00:50:35.408319 kubelet[2405]: E0313 00:50:35.407983 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:50:35.410509 kubelet[2405]: E0313 00:50:35.410430 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:35.416266 kubelet[2405]: E0313 00:50:35.415489 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:50:35.416266 kubelet[2405]: E0313 00:50:35.415617 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:35.422592 kubelet[2405]: E0313 00:50:35.422504 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:50:35.423038 kubelet[2405]: E0313 00:50:35.422943 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:35.935258 kubelet[2405]: I0313 00:50:35.935031 2405 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 13 00:50:36.430300 kubelet[2405]: E0313 00:50:36.430065 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:50:36.430752 kubelet[2405]: E0313 00:50:36.430518 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:36.432820 kubelet[2405]: E0313 00:50:36.432747 2405 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 13 00:50:36.433577 kubelet[2405]: E0313 00:50:36.433457 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:36.753624 kubelet[2405]: E0313 00:50:36.753427 2405 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 13 00:50:36.836490 kubelet[2405]: I0313 00:50:36.836389 2405 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 13 00:50:36.836490 kubelet[2405]: E0313 00:50:36.836436 2405 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 13 00:50:36.931551 kubelet[2405]: I0313 00:50:36.931254 2405 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:50:36.944586 kubelet[2405]: E0313 00:50:36.944466 2405 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:50:36.944586 kubelet[2405]: I0313 00:50:36.944530 2405 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 13 00:50:36.947020 kubelet[2405]: E0313 00:50:36.946940 2405 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 13 00:50:36.947020 kubelet[2405]: I0313 00:50:36.946998 2405 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:50:36.949422 kubelet[2405]: E0313 00:50:36.949324 2405 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:50:37.303806 kubelet[2405]: I0313 00:50:37.303746 2405 apiserver.go:52] "Watching apiserver"
Mar 13 00:50:37.330917 kubelet[2405]: I0313 00:50:37.330695 2405 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 13 00:50:37.927422 kubelet[2405]: I0313 00:50:37.927381 2405 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:50:37.936950 kubelet[2405]: E0313 00:50:37.936765 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:38.432565 kubelet[2405]: E0313 00:50:38.432372 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:38.464465 kubelet[2405]: I0313 00:50:38.464351 2405 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 13 00:50:38.473277 kubelet[2405]: E0313 00:50:38.472988 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:39.134723 systemd[1]: Reload requested from client PID 2693 ('systemctl') (unit session-9.scope)...
Mar 13 00:50:39.134747 systemd[1]: Reloading...
Mar 13 00:50:39.282006 zram_generator::config[2735]: No configuration found.
Mar 13 00:50:39.435385 kubelet[2405]: E0313 00:50:39.435210 2405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:39.579703 systemd[1]: Reloading finished in 444 ms.
Mar 13 00:50:39.629340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:50:39.648060 systemd[1]: kubelet.service: Deactivated successfully.
Mar 13 00:50:39.648558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:50:39.648644 systemd[1]: kubelet.service: Consumed 1.851s CPU time, 130.6M memory peak.
Mar 13 00:50:39.651962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:50:39.921302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:50:39.935727 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 00:50:40.026673 kubelet[2781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:50:40.026673 kubelet[2781]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 13 00:50:40.026673 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:50:40.028354 kubelet[2781]: I0313 00:50:40.026467 2781 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 00:50:40.043028 kubelet[2781]: I0313 00:50:40.042933 2781 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 13 00:50:40.043028 kubelet[2781]: I0313 00:50:40.042995 2781 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:50:40.043424 kubelet[2781]: I0313 00:50:40.043359 2781 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 00:50:40.044847 kubelet[2781]: I0313 00:50:40.044736 2781 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 13 00:50:40.047458 kubelet[2781]: I0313 00:50:40.047387 2781 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:50:40.055711 kubelet[2781]: I0313 00:50:40.055596 2781 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:50:40.072039 kubelet[2781]: I0313 00:50:40.072007 2781 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 13 00:50:40.072779 kubelet[2781]: I0313 00:50:40.072631 2781 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:50:40.072935 kubelet[2781]: I0313 00:50:40.072712 2781 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:50:40.072935 kubelet[2781]: I0313 00:50:40.072905 2781 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 00:50:40.072935 kubelet[2781]: I0313 00:50:40.072918 2781 container_manager_linux.go:303] "Creating device plugin manager"
Mar 13 00:50:40.073416 kubelet[2781]: I0313 00:50:40.072984 2781 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:50:40.073557 kubelet[2781]: I0313 00:50:40.073498 2781 kubelet.go:480] "Attempting to sync node with API server"
Mar 13 00:50:40.073557 kubelet[2781]: I0313 00:50:40.073517 2781 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:50:40.073557 kubelet[2781]: I0313 00:50:40.073552 2781 kubelet.go:386] "Adding apiserver pod source"
Mar 13 00:50:40.073681 kubelet[2781]: I0313 00:50:40.073576 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:50:40.075584 kubelet[2781]: I0313 00:50:40.075351 2781 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:50:40.076012 kubelet[2781]: I0313 00:50:40.075959 2781 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:50:40.086336 kubelet[2781]: I0313 00:50:40.085407 2781 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 13 00:50:40.086336 kubelet[2781]: I0313 00:50:40.085937 2781 server.go:1289] "Started kubelet"
Mar 13 00:50:40.089686 kubelet[2781]: I0313 00:50:40.089666 2781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 00:50:40.096402 kubelet[2781]: I0313 00:50:40.096272 2781 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:50:40.097483 kubelet[2781]: I0313 00:50:40.097410 2781 server.go:317] "Adding debug handlers to kubelet server"
Mar 13 00:50:40.102127 kubelet[2781]: I0313 00:50:40.101792 2781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:50:40.102127 kubelet[2781]: I0313 00:50:40.102052 2781 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:50:40.102996 kubelet[2781]: I0313 00:50:40.102779 2781 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:50:40.102996 kubelet[2781]: I0313 00:50:40.102911 2781 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:50:40.107245 kubelet[2781]: I0313 00:50:40.105537 2781 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 13 00:50:40.108995 kubelet[2781]: I0313 00:50:40.108716 2781 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 13 00:50:40.108995 kubelet[2781]: I0313 00:50:40.108875 2781 reconciler.go:26] "Reconciler: start to sync state"
Mar 13 00:50:40.109649 kubelet[2781]: I0313 00:50:40.109600 2781 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:50:40.115258 kubelet[2781]: E0313 00:50:40.114673 2781 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 00:50:40.124585 kubelet[2781]: I0313 00:50:40.124519 2781 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:50:40.149838 sudo[2809]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 13 00:50:40.150441 sudo[2809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 13 00:50:40.168055 kubelet[2781]: I0313 00:50:40.167705 2781 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 13 00:50:40.176020 kubelet[2781]: I0313 00:50:40.175474 2781 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 13 00:50:40.176020 kubelet[2781]: I0313 00:50:40.175497 2781 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 13 00:50:40.176020 kubelet[2781]: I0313 00:50:40.175517 2781 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 00:50:40.176020 kubelet[2781]: I0313 00:50:40.175525 2781 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 13 00:50:40.176020 kubelet[2781]: E0313 00:50:40.175570 2781 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 00:50:40.221318 kubelet[2781]: I0313 00:50:40.221277 2781 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 00:50:40.223000 kubelet[2781]: I0313 00:50:40.221554 2781 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 00:50:40.223000 kubelet[2781]: I0313 00:50:40.221662 2781 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:50:40.223000 kubelet[2781]: I0313 00:50:40.221858 2781 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 13 00:50:40.223000 kubelet[2781]: I0313 00:50:40.221871 2781 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 13 00:50:40.223000 kubelet[2781]: I0313 00:50:40.221893 2781 policy_none.go:49] "None policy: Start"
Mar 13 00:50:40.223000 kubelet[2781]: I0313 00:50:40.221904 2781 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 13 00:50:40.223000 kubelet[2781]: I0313 00:50:40.221917 2781 state_mem.go:35] "Initializing new in-memory state store"
Mar 13 00:50:40.223000 kubelet[2781]: I0313 00:50:40.222003 2781 state_mem.go:75] "Updated machine memory state"
Mar 13 00:50:40.230033 kubelet[2781]: E0313 00:50:40.230013 2781 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 00:50:40.230924 kubelet[2781]: I0313 00:50:40.230908 2781 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 13 00:50:40.231018 kubelet[2781]: I0313 00:50:40.230989 2781 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 00:50:40.231385 kubelet[2781]: I0313 00:50:40.231370 2781 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 13 00:50:40.233857 kubelet[2781]: E0313 00:50:40.233837 2781 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 00:50:40.277241 kubelet[2781]: I0313 00:50:40.276949 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:50:40.277825 kubelet[2781]: I0313 00:50:40.277691 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:50:40.278835 kubelet[2781]: I0313 00:50:40.278819 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 13 00:50:40.289639 kubelet[2781]: E0313 00:50:40.289284 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 13 00:50:40.291120 kubelet[2781]: E0313 00:50:40.291012 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 13 00:50:40.346739 kubelet[2781]: I0313 00:50:40.346568 2781 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 13 00:50:40.359193 kubelet[2781]: I0313 00:50:40.358995 2781 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 13 00:50:40.359322 kubelet[2781]: I0313 00:50:40.359240 2781 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 13 00:50:40.410762 kubelet[2781]: I0313 00:50:40.410486 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:50:40.410762 kubelet[2781]: I0313 00:50:40.410555 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/771bd69946d705c971f849ddf48782b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"771bd69946d705c971f849ddf48782b0\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:50:40.410762 kubelet[2781]: I0313 00:50:40.410587 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/771bd69946d705c971f849ddf48782b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"771bd69946d705c971f849ddf48782b0\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:50:40.410762 kubelet[2781]: I0313 00:50:40.410611 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:50:40.410762 kubelet[2781]: I0313 00:50:40.410638 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 13 00:50:40.411232 kubelet[2781]: I0313 00:50:40.410658 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/771bd69946d705c971f849ddf48782b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"771bd69946d705c971f849ddf48782b0\") " pod="kube-system/kube-apiserver-localhost"
Mar 13 00:50:40.411521 kubelet[2781]: I0313 00:50:40.411416 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:50:40.411521 kubelet[2781]: I0313 00:50:40.411451 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:50:40.411521 kubelet[2781]: I0313 00:50:40.411473 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 13 00:50:40.605602 kubelet[2781]: E0313 00:50:40.599923 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:40.629501 kubelet[2781]: E0313 00:50:40.627233 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:40.651916 kubelet[2781]: E0313 00:50:40.651546 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:41.075562 kubelet[2781]: I0313 00:50:41.074804 2781 apiserver.go:52] "Watching apiserver"
Mar 13 00:50:41.110964 kubelet[2781]: I0313 00:50:41.110520 2781 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 13 00:50:41.313250 kubelet[2781]: E0313 00:50:41.303768 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:41.313250 kubelet[2781]: E0313 00:50:41.310818 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:41.315011 kubelet[2781]: E0313 00:50:41.313701 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:41.981606 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1120023657 wd_nsec: 1120023137
Mar 13 00:50:42.048547 kubelet[2781]: I0313 00:50:42.048420 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.048368892 podStartE2EDuration="5.048368892s" podCreationTimestamp="2026-03-13 00:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:50:42.045648982 +0000 UTC m=+2.102734375" watchObservedRunningTime="2026-03-13 00:50:42.048368892 +0000 UTC m=+2.105454285"
Mar 13 00:50:42.102468 kubelet[2781]: I0313 00:50:42.101019 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.100998955 podStartE2EDuration="2.100998955s" podCreationTimestamp="2026-03-13 00:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:50:42.098610962 +0000 UTC m=+2.155696346" watchObservedRunningTime="2026-03-13 00:50:42.100998955 +0000 UTC m=+2.158084328"
Mar 13 00:50:42.104694 sudo[2809]: pam_unix(sudo:session): session closed for user root
Mar 13 00:50:42.146842 kubelet[2781]: I0313 00:50:42.146691 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.146666897 podStartE2EDuration="4.146666897s" podCreationTimestamp="2026-03-13 00:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:50:42.125558474 +0000 UTC m=+2.182643877" watchObservedRunningTime="2026-03-13 00:50:42.146666897 +0000 UTC m=+2.203752260"
Mar 13 00:50:42.246359 kubelet[2781]: E0313 00:50:42.245775 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:42.247759 kubelet[2781]: E0313 00:50:42.245061 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:43.248240 kubelet[2781]: E0313 00:50:43.247974 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:43.449401 kubelet[2781]: E0313 00:50:43.449353 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:50:43.887267 kubelet[2781]: I0313 00:50:43.887068 2781 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 13 00:50:43.887720 containerd[1556]: time="2026-03-13T00:50:43.887663783Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 13 00:50:43.888492 kubelet[2781]: I0313 00:50:43.887929 2781 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 13 00:50:44.204670 sudo[1781]: pam_unix(sudo:session): session closed for user root
Mar 13 00:50:44.207309 sshd[1780]: Connection closed by 10.0.0.1 port 60540
Mar 13 00:50:44.208304 sshd-session[1777]: pam_unix(sshd:session): session closed for user core
Mar 13 00:50:44.214635 systemd[1]: sshd@8-10.0.0.136:22-10.0.0.1:60540.service: Deactivated successfully.
Mar 13 00:50:44.217747 systemd[1]: session-9.scope: Deactivated successfully.
Mar 13 00:50:44.218340 systemd[1]: session-9.scope: Consumed 9.273s CPU time, 272.2M memory peak.
Mar 13 00:50:44.220789 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit.
Mar 13 00:50:44.223870 systemd-logind[1542]: Removed session 9.
Mar 13 00:50:44.954897 systemd[1]: Created slice kubepods-besteffort-pod9f0970e0_9477_48ee_858a_783d6cb1f6d1.slice - libcontainer container kubepods-besteffort-pod9f0970e0_9477_48ee_858a_783d6cb1f6d1.slice.
Mar 13 00:50:44.971893 systemd[1]: Created slice kubepods-burstable-podb9b1ee93_12e6_4ac4_bc2c_e59e89a37fcc.slice - libcontainer container kubepods-burstable-podb9b1ee93_12e6_4ac4_bc2c_e59e89a37fcc.slice.
Mar 13 00:50:45.064655 kubelet[2781]: I0313 00:50:45.064533 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-host-proc-sys-net\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9"
Mar 13 00:50:45.065488 kubelet[2781]: I0313 00:50:45.064661 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cni-path\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9"
Mar 13 00:50:45.065488 kubelet[2781]: I0313 00:50:45.064700 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-config-path\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9"
Mar 13 00:50:45.065488 kubelet[2781]: I0313 00:50:45.064726 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-host-proc-sys-kernel\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9"
Mar 13 00:50:45.065488 kubelet[2781]: I0313 00:50:45.064754 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgl5n\" (UniqueName: \"kubernetes.io/projected/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-kube-api-access-kgl5n\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9"
Mar 13 00:50:45.065488 kubelet[2781]: I0313 00:50:45.064844 2781 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-cgroup\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9" Mar 13 00:50:45.065618 kubelet[2781]: I0313 00:50:45.064891 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-xtables-lock\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9" Mar 13 00:50:45.065618 kubelet[2781]: I0313 00:50:45.064907 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-hubble-tls\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9" Mar 13 00:50:45.065618 kubelet[2781]: I0313 00:50:45.064930 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f0970e0-9477-48ee-858a-783d6cb1f6d1-lib-modules\") pod \"kube-proxy-sgkxg\" (UID: \"9f0970e0-9477-48ee-858a-783d6cb1f6d1\") " pod="kube-system/kube-proxy-sgkxg" Mar 13 00:50:45.065618 kubelet[2781]: I0313 00:50:45.064946 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt6pl\" (UniqueName: \"kubernetes.io/projected/9f0970e0-9477-48ee-858a-783d6cb1f6d1-kube-api-access-nt6pl\") pod \"kube-proxy-sgkxg\" (UID: \"9f0970e0-9477-48ee-858a-783d6cb1f6d1\") " pod="kube-system/kube-proxy-sgkxg" Mar 13 00:50:45.065618 kubelet[2781]: I0313 00:50:45.064961 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-bpf-maps\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9" Mar 13 00:50:45.065618 kubelet[2781]: I0313 00:50:45.064975 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-hostproc\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9" Mar 13 00:50:45.065748 kubelet[2781]: I0313 00:50:45.064990 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-etc-cni-netd\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9" Mar 13 00:50:45.065748 kubelet[2781]: I0313 00:50:45.065011 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f0970e0-9477-48ee-858a-783d6cb1f6d1-kube-proxy\") pod \"kube-proxy-sgkxg\" (UID: \"9f0970e0-9477-48ee-858a-783d6cb1f6d1\") " pod="kube-system/kube-proxy-sgkxg" Mar 13 00:50:45.065748 kubelet[2781]: I0313 00:50:45.065025 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f0970e0-9477-48ee-858a-783d6cb1f6d1-xtables-lock\") pod \"kube-proxy-sgkxg\" (UID: \"9f0970e0-9477-48ee-858a-783d6cb1f6d1\") " pod="kube-system/kube-proxy-sgkxg" Mar 13 00:50:45.065748 kubelet[2781]: I0313 00:50:45.065052 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-run\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") 
" pod="kube-system/cilium-kzqn9" Mar 13 00:50:45.065748 kubelet[2781]: I0313 00:50:45.065070 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-lib-modules\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9" Mar 13 00:50:45.065748 kubelet[2781]: I0313 00:50:45.065326 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-clustermesh-secrets\") pod \"cilium-kzqn9\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " pod="kube-system/cilium-kzqn9" Mar 13 00:50:45.107420 systemd[1]: Created slice kubepods-besteffort-podc4415ec1_cbda_47b6_8d7a_81dd462e6a84.slice - libcontainer container kubepods-besteffort-podc4415ec1_cbda_47b6_8d7a_81dd462e6a84.slice. Mar 13 00:50:45.167026 kubelet[2781]: I0313 00:50:45.166413 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4415ec1-cbda-47b6-8d7a-81dd462e6a84-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zfbmh\" (UID: \"c4415ec1-cbda-47b6-8d7a-81dd462e6a84\") " pod="kube-system/cilium-operator-6c4d7847fc-zfbmh" Mar 13 00:50:45.167026 kubelet[2781]: I0313 00:50:45.166793 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xk7\" (UniqueName: \"kubernetes.io/projected/c4415ec1-cbda-47b6-8d7a-81dd462e6a84-kube-api-access-45xk7\") pod \"cilium-operator-6c4d7847fc-zfbmh\" (UID: \"c4415ec1-cbda-47b6-8d7a-81dd462e6a84\") " pod="kube-system/cilium-operator-6c4d7847fc-zfbmh" Mar 13 00:50:45.268310 kubelet[2781]: E0313 00:50:45.266928 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:45.268420 containerd[1556]: time="2026-03-13T00:50:45.267919159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgkxg,Uid:9f0970e0-9477-48ee-858a-783d6cb1f6d1,Namespace:kube-system,Attempt:0,}" Mar 13 00:50:45.281341 kubelet[2781]: E0313 00:50:45.280935 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:45.283906 containerd[1556]: time="2026-03-13T00:50:45.283721041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzqn9,Uid:b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc,Namespace:kube-system,Attempt:0,}" Mar 13 00:50:45.322232 containerd[1556]: time="2026-03-13T00:50:45.321774442Z" level=info msg="connecting to shim bca9930ec5ad16e3ba6832e152cce0dbba28e45b7323a85053df3e81a4cc1c6f" address="unix:///run/containerd/s/d5fd6a5a3778333d05cf93d433c4921d2272a079ff66b566ff279791510c2810" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:45.328599 containerd[1556]: time="2026-03-13T00:50:45.328475232Z" level=info msg="connecting to shim e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8" address="unix:///run/containerd/s/099c9025fecff52e4ebbdb24d34c047c9c88f7c7782118ada7ce0888994cf010" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:45.372582 systemd[1]: Started cri-containerd-bca9930ec5ad16e3ba6832e152cce0dbba28e45b7323a85053df3e81a4cc1c6f.scope - libcontainer container bca9930ec5ad16e3ba6832e152cce0dbba28e45b7323a85053df3e81a4cc1c6f. Mar 13 00:50:45.405505 systemd[1]: Started cri-containerd-e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8.scope - libcontainer container e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8. 
Mar 13 00:50:45.415854 kubelet[2781]: E0313 00:50:45.415818 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:45.416739 containerd[1556]: time="2026-03-13T00:50:45.416627843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zfbmh,Uid:c4415ec1-cbda-47b6-8d7a-81dd462e6a84,Namespace:kube-system,Attempt:0,}" Mar 13 00:50:45.444248 containerd[1556]: time="2026-03-13T00:50:45.444213654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgkxg,Uid:9f0970e0-9477-48ee-858a-783d6cb1f6d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"bca9930ec5ad16e3ba6832e152cce0dbba28e45b7323a85053df3e81a4cc1c6f\"" Mar 13 00:50:45.447732 kubelet[2781]: E0313 00:50:45.447709 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:45.461136 containerd[1556]: time="2026-03-13T00:50:45.461056244Z" level=info msg="CreateContainer within sandbox \"bca9930ec5ad16e3ba6832e152cce0dbba28e45b7323a85053df3e81a4cc1c6f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:50:45.474570 containerd[1556]: time="2026-03-13T00:50:45.474352698Z" level=info msg="connecting to shim 48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35" address="unix:///run/containerd/s/5996d65d092f9d884d0d83f7f526c39e32fa517077509d81e6a6e544d19c53a6" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:50:45.507303 containerd[1556]: time="2026-03-13T00:50:45.507067504Z" level=info msg="Container 377208fd63f245a59c694622bd679cf58b5018f34bd8e0c7a6f1785995802d30: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:50:45.519844 containerd[1556]: time="2026-03-13T00:50:45.519615178Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-kzqn9,Uid:b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\"" Mar 13 00:50:45.522475 containerd[1556]: time="2026-03-13T00:50:45.522383221Z" level=info msg="CreateContainer within sandbox \"bca9930ec5ad16e3ba6832e152cce0dbba28e45b7323a85053df3e81a4cc1c6f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"377208fd63f245a59c694622bd679cf58b5018f34bd8e0c7a6f1785995802d30\"" Mar 13 00:50:45.523838 kubelet[2781]: E0313 00:50:45.523810 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:45.523990 containerd[1556]: time="2026-03-13T00:50:45.523823832Z" level=info msg="StartContainer for \"377208fd63f245a59c694622bd679cf58b5018f34bd8e0c7a6f1785995802d30\"" Mar 13 00:50:45.525573 containerd[1556]: time="2026-03-13T00:50:45.525382603Z" level=info msg="connecting to shim 377208fd63f245a59c694622bd679cf58b5018f34bd8e0c7a6f1785995802d30" address="unix:///run/containerd/s/d5fd6a5a3778333d05cf93d433c4921d2272a079ff66b566ff279791510c2810" protocol=ttrpc version=3 Mar 13 00:50:45.530572 containerd[1556]: time="2026-03-13T00:50:45.529378840Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 13 00:50:45.537357 systemd[1]: Started cri-containerd-48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35.scope - libcontainer container 48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35. Mar 13 00:50:45.566551 systemd[1]: Started cri-containerd-377208fd63f245a59c694622bd679cf58b5018f34bd8e0c7a6f1785995802d30.scope - libcontainer container 377208fd63f245a59c694622bd679cf58b5018f34bd8e0c7a6f1785995802d30. 
Mar 13 00:50:45.633518 containerd[1556]: time="2026-03-13T00:50:45.633407211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zfbmh,Uid:c4415ec1-cbda-47b6-8d7a-81dd462e6a84,Namespace:kube-system,Attempt:0,} returns sandbox id \"48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35\"" Mar 13 00:50:45.635343 kubelet[2781]: E0313 00:50:45.635233 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:45.693970 containerd[1556]: time="2026-03-13T00:50:45.693793951Z" level=info msg="StartContainer for \"377208fd63f245a59c694622bd679cf58b5018f34bd8e0c7a6f1785995802d30\" returns successfully" Mar 13 00:50:46.264244 kubelet[2781]: E0313 00:50:46.264008 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:46.302589 kubelet[2781]: I0313 00:50:46.302472 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sgkxg" podStartSLOduration=2.302446346 podStartE2EDuration="2.302446346s" podCreationTimestamp="2026-03-13 00:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:50:46.301643474 +0000 UTC m=+6.358728837" watchObservedRunningTime="2026-03-13 00:50:46.302446346 +0000 UTC m=+6.359531710" Mar 13 00:50:47.189065 kubelet[2781]: E0313 00:50:47.189024 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:47.266461 kubelet[2781]: E0313 00:50:47.266259 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:48.682315 kubelet[2781]: E0313 00:50:48.678633 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:52.815816 kubelet[2781]: E0313 00:50:52.811002 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:53.049894 kubelet[2781]: E0313 00:50:53.049688 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:53.460684 kubelet[2781]: E0313 00:50:53.460579 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:50:59.068697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2957166447.mount: Deactivated successfully. 
Mar 13 00:51:02.250056 containerd[1556]: time="2026-03-13T00:51:02.249804258Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:02.251612 containerd[1556]: time="2026-03-13T00:51:02.251414621Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 13 00:51:02.253502 containerd[1556]: time="2026-03-13T00:51:02.253446882Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:02.255945 containerd[1556]: time="2026-03-13T00:51:02.255895679Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.726410441s" Mar 13 00:51:02.257239 containerd[1556]: time="2026-03-13T00:51:02.256321252Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 13 00:51:02.262702 containerd[1556]: time="2026-03-13T00:51:02.262660511Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 13 00:51:02.273794 containerd[1556]: time="2026-03-13T00:51:02.273763017Z" level=info msg="CreateContainer within sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 00:51:02.292062 containerd[1556]: time="2026-03-13T00:51:02.291877186Z" level=info msg="Container f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:02.301887 containerd[1556]: time="2026-03-13T00:51:02.301769945Z" level=info msg="CreateContainer within sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\"" Mar 13 00:51:02.303022 containerd[1556]: time="2026-03-13T00:51:02.302982043Z" level=info msg="StartContainer for \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\"" Mar 13 00:51:02.305015 containerd[1556]: time="2026-03-13T00:51:02.304754879Z" level=info msg="connecting to shim f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4" address="unix:///run/containerd/s/099c9025fecff52e4ebbdb24d34c047c9c88f7c7782118ada7ce0888994cf010" protocol=ttrpc version=3 Mar 13 00:51:02.440978 systemd[1]: Started cri-containerd-f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4.scope - libcontainer container f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4. Mar 13 00:51:02.548836 containerd[1556]: time="2026-03-13T00:51:02.547463657Z" level=info msg="StartContainer for \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\" returns successfully" Mar 13 00:51:02.580042 systemd[1]: cri-containerd-f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4.scope: Deactivated successfully. 
Mar 13 00:51:02.587654 containerd[1556]: time="2026-03-13T00:51:02.587562002Z" level=info msg="received container exit event container_id:\"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\" id:\"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\" pid:3212 exited_at:{seconds:1773363062 nanos:586536316}" Mar 13 00:51:02.624739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4-rootfs.mount: Deactivated successfully. Mar 13 00:51:03.086497 kubelet[2781]: E0313 00:51:03.085495 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:51:03.097309 containerd[1556]: time="2026-03-13T00:51:03.096860337Z" level=info msg="CreateContainer within sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 00:51:03.129932 containerd[1556]: time="2026-03-13T00:51:03.129781947Z" level=info msg="Container 773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:03.139733 containerd[1556]: time="2026-03-13T00:51:03.139702927Z" level=info msg="CreateContainer within sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\"" Mar 13 00:51:03.151603 containerd[1556]: time="2026-03-13T00:51:03.151445590Z" level=info msg="StartContainer for \"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\"" Mar 13 00:51:03.155988 containerd[1556]: time="2026-03-13T00:51:03.155804659Z" level=info msg="connecting to shim 773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1" 
address="unix:///run/containerd/s/099c9025fecff52e4ebbdb24d34c047c9c88f7c7782118ada7ce0888994cf010" protocol=ttrpc version=3 Mar 13 00:51:03.220657 systemd[1]: Started cri-containerd-773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1.scope - libcontainer container 773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1. Mar 13 00:51:03.341520 containerd[1556]: time="2026-03-13T00:51:03.341038625Z" level=info msg="StartContainer for \"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\" returns successfully" Mar 13 00:51:03.393908 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:51:03.394513 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:51:03.398836 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:51:03.402791 containerd[1556]: time="2026-03-13T00:51:03.402543275Z" level=info msg="received container exit event container_id:\"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\" id:\"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\" pid:3269 exited_at:{seconds:1773363063 nanos:401733580}" Mar 13 00:51:03.404662 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:51:03.407954 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:51:03.409047 systemd[1]: cri-containerd-773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1.scope: Deactivated successfully. Mar 13 00:51:03.482929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1-rootfs.mount: Deactivated successfully. Mar 13 00:51:03.517010 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 13 00:51:04.052389 containerd[1556]: time="2026-03-13T00:51:04.051998734Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:04.053361 containerd[1556]: time="2026-03-13T00:51:04.053109143Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 13 00:51:04.054969 containerd[1556]: time="2026-03-13T00:51:04.054837634Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:51:04.056579 containerd[1556]: time="2026-03-13T00:51:04.056495699Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.793800623s" Mar 13 00:51:04.056579 containerd[1556]: time="2026-03-13T00:51:04.056531665Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 13 00:51:04.065739 containerd[1556]: time="2026-03-13T00:51:04.065577737Z" level=info msg="CreateContainer within sandbox \"48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 13 00:51:04.082471 containerd[1556]: time="2026-03-13T00:51:04.082394169Z" level=info msg="Container 
e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:04.092368 kubelet[2781]: E0313 00:51:04.092061 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:51:04.102819 containerd[1556]: time="2026-03-13T00:51:04.102745708Z" level=info msg="CreateContainer within sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 13 00:51:04.102819 containerd[1556]: time="2026-03-13T00:51:04.102776328Z" level=info msg="CreateContainer within sandbox \"48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\"" Mar 13 00:51:04.106003 containerd[1556]: time="2026-03-13T00:51:04.105385564Z" level=info msg="StartContainer for \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\"" Mar 13 00:51:04.106748 containerd[1556]: time="2026-03-13T00:51:04.106673136Z" level=info msg="connecting to shim e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f" address="unix:///run/containerd/s/5996d65d092f9d884d0d83f7f526c39e32fa517077509d81e6a6e544d19c53a6" protocol=ttrpc version=3 Mar 13 00:51:04.125049 containerd[1556]: time="2026-03-13T00:51:04.124891902Z" level=info msg="Container 81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:04.143903 containerd[1556]: time="2026-03-13T00:51:04.143798930Z" level=info msg="CreateContainer within sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\"" Mar 13 
00:51:04.145297 containerd[1556]: time="2026-03-13T00:51:04.145266225Z" level=info msg="StartContainer for \"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\"" Mar 13 00:51:04.148073 containerd[1556]: time="2026-03-13T00:51:04.147966901Z" level=info msg="connecting to shim 81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678" address="unix:///run/containerd/s/099c9025fecff52e4ebbdb24d34c047c9c88f7c7782118ada7ce0888994cf010" protocol=ttrpc version=3 Mar 13 00:51:04.160463 systemd[1]: Started cri-containerd-e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f.scope - libcontainer container e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f. Mar 13 00:51:04.194646 systemd[1]: Started cri-containerd-81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678.scope - libcontainer container 81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678. Mar 13 00:51:04.231108 containerd[1556]: time="2026-03-13T00:51:04.230905874Z" level=info msg="StartContainer for \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\" returns successfully" Mar 13 00:51:04.295509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882092379.mount: Deactivated successfully. Mar 13 00:51:04.319081 containerd[1556]: time="2026-03-13T00:51:04.318882756Z" level=info msg="StartContainer for \"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\" returns successfully" Mar 13 00:51:04.319829 systemd[1]: cri-containerd-81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678.scope: Deactivated successfully. 
Mar 13 00:51:04.323354 containerd[1556]: time="2026-03-13T00:51:04.323111664Z" level=info msg="received container exit event container_id:\"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\" id:\"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\" pid:3341 exited_at:{seconds:1773363064 nanos:322728493}"
Mar 13 00:51:04.375770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678-rootfs.mount: Deactivated successfully.
Mar 13 00:51:05.104263 kubelet[2781]: E0313 00:51:05.104071 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:05.119589 kubelet[2781]: E0313 00:51:05.119532 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:05.126322 containerd[1556]: time="2026-03-13T00:51:05.125928645Z" level=info msg="CreateContainer within sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 13 00:51:05.161495 containerd[1556]: time="2026-03-13T00:51:05.160331688Z" level=info msg="Container 95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:51:05.188842 containerd[1556]: time="2026-03-13T00:51:05.188722853Z" level=info msg="CreateContainer within sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\""
Mar 13 00:51:05.190976 containerd[1556]: time="2026-03-13T00:51:05.190883956Z" level=info msg="StartContainer for \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\""
Mar 13 00:51:05.209780 containerd[1556]: time="2026-03-13T00:51:05.209669553Z" level=info msg="connecting to shim 95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464" address="unix:///run/containerd/s/099c9025fecff52e4ebbdb24d34c047c9c88f7c7782118ada7ce0888994cf010" protocol=ttrpc version=3
Mar 13 00:51:05.220365 kubelet[2781]: I0313 00:51:05.214483 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zfbmh" podStartSLOduration=1.789482167 podStartE2EDuration="20.211010159s" podCreationTimestamp="2026-03-13 00:50:45 +0000 UTC" firstStartedPulling="2026-03-13 00:50:45.636495509 +0000 UTC m=+5.693580872" lastFinishedPulling="2026-03-13 00:51:04.058023501 +0000 UTC m=+24.115108864" observedRunningTime="2026-03-13 00:51:05.12856935 +0000 UTC m=+25.185654713" watchObservedRunningTime="2026-03-13 00:51:05.211010159 +0000 UTC m=+25.268095542"
Mar 13 00:51:05.258464 systemd[1]: Started cri-containerd-95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464.scope - libcontainer container 95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464.scope.
Mar 13 00:51:05.327569 systemd[1]: cri-containerd-95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464.scope: Deactivated successfully.
Mar 13 00:51:05.330102 containerd[1556]: time="2026-03-13T00:51:05.329753007Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9b1ee93_12e6_4ac4_bc2c_e59e89a37fcc.slice/cri-containerd-95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464.scope/memory.events\": no such file or directory" Mar 13 00:51:05.338886 containerd[1556]: time="2026-03-13T00:51:05.338653703Z" level=info msg="StartContainer for \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\" returns successfully" Mar 13 00:51:05.342585 containerd[1556]: time="2026-03-13T00:51:05.342526736Z" level=info msg="received container exit event container_id:\"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\" id:\"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\" pid:3399 exited_at:{seconds:1773363065 nanos:328925122}" Mar 13 00:51:05.380717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464-rootfs.mount: Deactivated successfully. 
Mar 13 00:51:06.132052 kubelet[2781]: E0313 00:51:06.131808 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:51:06.132052 kubelet[2781]: E0313 00:51:06.132549 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:51:06.147833 containerd[1556]: time="2026-03-13T00:51:06.147713007Z" level=info msg="CreateContainer within sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 13 00:51:06.171754 containerd[1556]: time="2026-03-13T00:51:06.170129921Z" level=info msg="Container 2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:51:06.183284 containerd[1556]: time="2026-03-13T00:51:06.183037350Z" level=info msg="CreateContainer within sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\"" Mar 13 00:51:06.184351 containerd[1556]: time="2026-03-13T00:51:06.184320717Z" level=info msg="StartContainer for \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\"" Mar 13 00:51:06.186514 containerd[1556]: time="2026-03-13T00:51:06.185879453Z" level=info msg="connecting to shim 2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38" address="unix:///run/containerd/s/099c9025fecff52e4ebbdb24d34c047c9c88f7c7782118ada7ce0888994cf010" protocol=ttrpc version=3 Mar 13 00:51:06.230448 systemd[1]: Started cri-containerd-2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38.scope - libcontainer container 2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38. 
Mar 13 00:51:06.339940 containerd[1556]: time="2026-03-13T00:51:06.339854651Z" level=info msg="StartContainer for \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\" returns successfully"
Mar 13 00:51:06.682971 kubelet[2781]: I0313 00:51:06.682724 2781 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 13 00:51:06.769598 systemd[1]: Created slice kubepods-burstable-pode639cffd_08cb_4277_a489_c17579072eb0.slice - libcontainer container kubepods-burstable-pode639cffd_08cb_4277_a489_c17579072eb0.slice.
Mar 13 00:51:06.779939 systemd[1]: Created slice kubepods-burstable-podb14dcf3c_e8d9_43f9_a17a_cc5f6f5dd9b3.slice - libcontainer container kubepods-burstable-podb14dcf3c_e8d9_43f9_a17a_cc5f6f5dd9b3.slice.
Mar 13 00:51:06.851479 kubelet[2781]: I0313 00:51:06.851390 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e639cffd-08cb-4277-a489-c17579072eb0-config-volume\") pod \"coredns-674b8bbfcf-77slj\" (UID: \"e639cffd-08cb-4277-a489-c17579072eb0\") " pod="kube-system/coredns-674b8bbfcf-77slj"
Mar 13 00:51:06.851618 kubelet[2781]: I0313 00:51:06.851513 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m7w4\" (UniqueName: \"kubernetes.io/projected/b14dcf3c-e8d9-43f9-a17a-cc5f6f5dd9b3-kube-api-access-5m7w4\") pod \"coredns-674b8bbfcf-wbn8b\" (UID: \"b14dcf3c-e8d9-43f9-a17a-cc5f6f5dd9b3\") " pod="kube-system/coredns-674b8bbfcf-wbn8b"
Mar 13 00:51:06.852327 kubelet[2781]: I0313 00:51:06.852005 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b14dcf3c-e8d9-43f9-a17a-cc5f6f5dd9b3-config-volume\") pod \"coredns-674b8bbfcf-wbn8b\" (UID: \"b14dcf3c-e8d9-43f9-a17a-cc5f6f5dd9b3\") " pod="kube-system/coredns-674b8bbfcf-wbn8b"
Mar 13 00:51:06.852327 kubelet[2781]: I0313 00:51:06.852296 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq684\" (UniqueName: \"kubernetes.io/projected/e639cffd-08cb-4277-a489-c17579072eb0-kube-api-access-hq684\") pod \"coredns-674b8bbfcf-77slj\" (UID: \"e639cffd-08cb-4277-a489-c17579072eb0\") " pod="kube-system/coredns-674b8bbfcf-77slj"
Mar 13 00:51:07.078720 kubelet[2781]: E0313 00:51:07.077931 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:07.079987 containerd[1556]: time="2026-03-13T00:51:07.079865094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-77slj,Uid:e639cffd-08cb-4277-a489-c17579072eb0,Namespace:kube-system,Attempt:0,}"
Mar 13 00:51:07.086679 kubelet[2781]: E0313 00:51:07.086446 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:07.089001 containerd[1556]: time="2026-03-13T00:51:07.088932485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wbn8b,Uid:b14dcf3c-e8d9-43f9-a17a-cc5f6f5dd9b3,Namespace:kube-system,Attempt:0,}"
Mar 13 00:51:07.160799 kubelet[2781]: E0313 00:51:07.160744 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:07.209019 kubelet[2781]: I0313 00:51:07.208875 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kzqn9" podStartSLOduration=6.4744516579999996 podStartE2EDuration="23.20886127s" podCreationTimestamp="2026-03-13 00:50:44 +0000 UTC" firstStartedPulling="2026-03-13 00:50:45.527942166 +0000 UTC m=+5.585027529" lastFinishedPulling="2026-03-13 00:51:02.262351778 +0000 UTC m=+22.319437141" observedRunningTime="2026-03-13 00:51:07.208589293 +0000 UTC m=+27.265674656" watchObservedRunningTime="2026-03-13 00:51:07.20886127 +0000 UTC m=+27.265946633"
Mar 13 00:51:08.163921 kubelet[2781]: E0313 00:51:08.163748 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:08.973636 systemd-networkd[1456]: cilium_host: Link UP
Mar 13 00:51:08.973933 systemd-networkd[1456]: cilium_net: Link UP
Mar 13 00:51:08.975919 systemd-networkd[1456]: cilium_net: Gained carrier
Mar 13 00:51:08.976535 systemd-networkd[1456]: cilium_host: Gained carrier
Mar 13 00:51:09.168500 kubelet[2781]: E0313 00:51:09.168404 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:09.191545 systemd-networkd[1456]: cilium_vxlan: Link UP
Mar 13 00:51:09.191600 systemd-networkd[1456]: cilium_vxlan: Gained carrier
Mar 13 00:51:09.519387 kernel: NET: Registered PF_ALG protocol family
Mar 13 00:51:09.844082 systemd-networkd[1456]: cilium_net: Gained IPv6LL
Mar 13 00:51:09.907445 systemd-networkd[1456]: cilium_host: Gained IPv6LL
Mar 13 00:51:10.290509 systemd-networkd[1456]: cilium_vxlan: Gained IPv6LL
Mar 13 00:51:10.583575 systemd-networkd[1456]: lxc_health: Link UP
Mar 13 00:51:10.590484 systemd-networkd[1456]: lxc_health: Gained carrier
Mar 13 00:51:10.735308 kernel: eth0: renamed from tmpcc194
Mar 13 00:51:10.737126 systemd-networkd[1456]: lxc4cb0de4d6081: Link UP
Mar 13 00:51:10.738651 systemd-networkd[1456]: lxc4cb0de4d6081: Gained carrier
Mar 13 00:51:11.185125 systemd-networkd[1456]: lxc7621220f2026: Link UP
Mar 13 00:51:11.204318 kernel: eth0: renamed from tmp3f58d
Mar 13 00:51:11.211794 systemd-networkd[1456]: lxc7621220f2026: Gained carrier
Mar 13 00:51:11.288907 kubelet[2781]: E0313 00:51:11.285368 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:12.177557 kubelet[2781]: E0313 00:51:12.177443 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:12.210631 systemd-networkd[1456]: lxc4cb0de4d6081: Gained IPv6LL
Mar 13 00:51:12.530674 systemd-networkd[1456]: lxc_health: Gained IPv6LL
Mar 13 00:51:12.978860 systemd-networkd[1456]: lxc7621220f2026: Gained IPv6LL
Mar 13 00:51:13.179758 kubelet[2781]: E0313 00:51:13.179542 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:15.530662 containerd[1556]: time="2026-03-13T00:51:15.529996504Z" level=info msg="connecting to shim cc194b12e696bec8c6491fe8e777c910616d24e4c9a0426cc145c2c3bdfeddb2" address="unix:///run/containerd/s/e195d93b07028ab1ea9b9c3eadc33be2c4afbd53324b28a4dd50a37ee68804a7" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:51:15.568721 containerd[1556]: time="2026-03-13T00:51:15.568538336Z" level=info msg="connecting to shim 3f58d64a7e3a21c60fc9f451793f6167846712ff2cb2a9b538c9c4bfbbfebf02" address="unix:///run/containerd/s/32438a4f33cdb0ca1890fe7dd3c9eaea9ab17ed91ed347deaf88dd8f01b45139" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:51:15.616620 systemd[1]: Started cri-containerd-cc194b12e696bec8c6491fe8e777c910616d24e4c9a0426cc145c2c3bdfeddb2.scope - libcontainer container cc194b12e696bec8c6491fe8e777c910616d24e4c9a0426cc145c2c3bdfeddb2.
Mar 13 00:51:15.651534 systemd[1]: Started cri-containerd-3f58d64a7e3a21c60fc9f451793f6167846712ff2cb2a9b538c9c4bfbbfebf02.scope - libcontainer container 3f58d64a7e3a21c60fc9f451793f6167846712ff2cb2a9b538c9c4bfbbfebf02.
Mar 13 00:51:15.665758 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 13 00:51:15.686772 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 13 00:51:15.762803 containerd[1556]: time="2026-03-13T00:51:15.762630861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wbn8b,Uid:b14dcf3c-e8d9-43f9-a17a-cc5f6f5dd9b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc194b12e696bec8c6491fe8e777c910616d24e4c9a0426cc145c2c3bdfeddb2\""
Mar 13 00:51:15.770655 kubelet[2781]: E0313 00:51:15.770572 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:15.782393 containerd[1556]: time="2026-03-13T00:51:15.781899382Z" level=info msg="CreateContainer within sandbox \"cc194b12e696bec8c6491fe8e777c910616d24e4c9a0426cc145c2c3bdfeddb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 13 00:51:15.785265 containerd[1556]: time="2026-03-13T00:51:15.784934975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-77slj,Uid:e639cffd-08cb-4277-a489-c17579072eb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f58d64a7e3a21c60fc9f451793f6167846712ff2cb2a9b538c9c4bfbbfebf02\""
Mar 13 00:51:15.804444 kubelet[2781]: E0313 00:51:15.804091 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:15.806244 containerd[1556]: time="2026-03-13T00:51:15.805904059Z" level=info msg="Container df774a7dbb694876ea61aee01ed85347a5f7dd5995e6c7cf3fd0fc3c6349b3b2: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:51:15.817642 containerd[1556]: time="2026-03-13T00:51:15.817581121Z" level=info msg="CreateContainer within sandbox \"3f58d64a7e3a21c60fc9f451793f6167846712ff2cb2a9b538c9c4bfbbfebf02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 13 00:51:15.822214 containerd[1556]: time="2026-03-13T00:51:15.822039780Z" level=info msg="CreateContainer within sandbox \"cc194b12e696bec8c6491fe8e777c910616d24e4c9a0426cc145c2c3bdfeddb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df774a7dbb694876ea61aee01ed85347a5f7dd5995e6c7cf3fd0fc3c6349b3b2\""
Mar 13 00:51:15.823896 containerd[1556]: time="2026-03-13T00:51:15.823137959Z" level=info msg="StartContainer for \"df774a7dbb694876ea61aee01ed85347a5f7dd5995e6c7cf3fd0fc3c6349b3b2\""
Mar 13 00:51:15.824741 containerd[1556]: time="2026-03-13T00:51:15.824699432Z" level=info msg="connecting to shim df774a7dbb694876ea61aee01ed85347a5f7dd5995e6c7cf3fd0fc3c6349b3b2" address="unix:///run/containerd/s/e195d93b07028ab1ea9b9c3eadc33be2c4afbd53324b28a4dd50a37ee68804a7" protocol=ttrpc version=3
Mar 13 00:51:15.830951 containerd[1556]: time="2026-03-13T00:51:15.830094543Z" level=info msg="Container f380e19e5b23ff85cf473613e227190368348f1b6d617815742c17555e942e7b: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:51:15.840852 containerd[1556]: time="2026-03-13T00:51:15.840812501Z" level=info msg="CreateContainer within sandbox \"3f58d64a7e3a21c60fc9f451793f6167846712ff2cb2a9b538c9c4bfbbfebf02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f380e19e5b23ff85cf473613e227190368348f1b6d617815742c17555e942e7b\""
Mar 13 00:51:15.842386 containerd[1556]: time="2026-03-13T00:51:15.842269238Z" level=info msg="StartContainer for \"f380e19e5b23ff85cf473613e227190368348f1b6d617815742c17555e942e7b\""
Mar 13 00:51:15.843413 containerd[1556]: time="2026-03-13T00:51:15.843337732Z" level=info msg="connecting to shim f380e19e5b23ff85cf473613e227190368348f1b6d617815742c17555e942e7b" address="unix:///run/containerd/s/32438a4f33cdb0ca1890fe7dd3c9eaea9ab17ed91ed347deaf88dd8f01b45139" protocol=ttrpc version=3
Mar 13 00:51:15.874370 systemd[1]: Started cri-containerd-df774a7dbb694876ea61aee01ed85347a5f7dd5995e6c7cf3fd0fc3c6349b3b2.scope - libcontainer container df774a7dbb694876ea61aee01ed85347a5f7dd5995e6c7cf3fd0fc3c6349b3b2.
Mar 13 00:51:15.888703 systemd[1]: Started cri-containerd-f380e19e5b23ff85cf473613e227190368348f1b6d617815742c17555e942e7b.scope - libcontainer container f380e19e5b23ff85cf473613e227190368348f1b6d617815742c17555e942e7b.
Mar 13 00:51:15.930457 containerd[1556]: time="2026-03-13T00:51:15.929658397Z" level=info msg="StartContainer for \"df774a7dbb694876ea61aee01ed85347a5f7dd5995e6c7cf3fd0fc3c6349b3b2\" returns successfully"
Mar 13 00:51:15.945995 containerd[1556]: time="2026-03-13T00:51:15.945705246Z" level=info msg="StartContainer for \"f380e19e5b23ff85cf473613e227190368348f1b6d617815742c17555e942e7b\" returns successfully"
Mar 13 00:51:16.192602 kubelet[2781]: E0313 00:51:16.192556 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:16.199093 kubelet[2781]: E0313 00:51:16.198916 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:16.211286 kubelet[2781]: I0313 00:51:16.211114 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-77slj" podStartSLOduration=31.211096088 podStartE2EDuration="31.211096088s" podCreationTimestamp="2026-03-13 00:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:51:16.210542189 +0000 UTC m=+36.267627552" watchObservedRunningTime="2026-03-13 00:51:16.211096088 +0000 UTC m=+36.268181451"
Mar 13 00:51:17.201239 kubelet[2781]: E0313 00:51:17.201070 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:17.201850 kubelet[2781]: E0313 00:51:17.201418 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:18.202680 kubelet[2781]: E0313 00:51:18.202622 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:18.203090 kubelet[2781]: E0313 00:51:18.202751 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:51:30.539040 systemd[1]: Started sshd@9-10.0.0.136:22-10.0.0.1:46128.service - OpenSSH per-connection server daemon (10.0.0.1:46128).
Mar 13 00:51:30.624345 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 46128 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:51:30.626660 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:51:30.635272 systemd-logind[1542]: New session 10 of user core.
Mar 13 00:51:30.642809 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 13 00:51:30.782587 sshd[4120]: Connection closed by 10.0.0.1 port 46128
Mar 13 00:51:30.783111 sshd-session[4117]: pam_unix(sshd:session): session closed for user core
Mar 13 00:51:30.788057 systemd[1]: sshd@9-10.0.0.136:22-10.0.0.1:46128.service: Deactivated successfully.
Mar 13 00:51:30.791028 systemd[1]: session-10.scope: Deactivated successfully.
Mar 13 00:51:30.792608 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit.
Mar 13 00:51:30.794629 systemd-logind[1542]: Removed session 10.
Mar 13 00:51:35.800342 systemd[1]: Started sshd@10-10.0.0.136:22-10.0.0.1:46130.service - OpenSSH per-connection server daemon (10.0.0.1:46130). Mar 13 00:51:35.860868 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 46130 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:35.862808 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:35.869869 systemd-logind[1542]: New session 11 of user core. Mar 13 00:51:35.879411 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:51:35.973995 sshd[4140]: Connection closed by 10.0.0.1 port 46130 Mar 13 00:51:35.974506 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:35.979777 systemd[1]: sshd@10-10.0.0.136:22-10.0.0.1:46130.service: Deactivated successfully. Mar 13 00:51:35.982391 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:51:35.983932 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:51:35.985686 systemd-logind[1542]: Removed session 11. Mar 13 00:51:40.988100 systemd[1]: Started sshd@11-10.0.0.136:22-10.0.0.1:60544.service - OpenSSH per-connection server daemon (10.0.0.1:60544). Mar 13 00:51:41.047996 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 60544 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:41.049832 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:41.056341 systemd-logind[1542]: New session 12 of user core. Mar 13 00:51:41.065365 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:51:41.153684 sshd[4160]: Connection closed by 10.0.0.1 port 60544 Mar 13 00:51:41.154122 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:41.158846 systemd[1]: sshd@11-10.0.0.136:22-10.0.0.1:60544.service: Deactivated successfully. 
Mar 13 00:51:41.160974 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:51:41.162084 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:51:41.164122 systemd-logind[1542]: Removed session 12. Mar 13 00:51:46.176679 systemd[1]: Started sshd@12-10.0.0.136:22-10.0.0.1:60560.service - OpenSSH per-connection server daemon (10.0.0.1:60560). Mar 13 00:51:46.238355 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 60560 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:46.239926 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:46.245795 systemd-logind[1542]: New session 13 of user core. Mar 13 00:51:46.257339 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:51:46.343485 sshd[4180]: Connection closed by 10.0.0.1 port 60560 Mar 13 00:51:46.344042 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:46.354106 systemd[1]: sshd@12-10.0.0.136:22-10.0.0.1:60560.service: Deactivated successfully. Mar 13 00:51:46.356911 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:51:46.358554 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:51:46.363127 systemd[1]: Started sshd@13-10.0.0.136:22-10.0.0.1:60566.service - OpenSSH per-connection server daemon (10.0.0.1:60566). Mar 13 00:51:46.364017 systemd-logind[1542]: Removed session 13. Mar 13 00:51:46.418601 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 60566 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:46.420437 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:46.426510 systemd-logind[1542]: New session 14 of user core. Mar 13 00:51:46.440407 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 13 00:51:46.580805 sshd[4197]: Connection closed by 10.0.0.1 port 60566 Mar 13 00:51:46.582679 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:46.598479 systemd[1]: sshd@13-10.0.0.136:22-10.0.0.1:60566.service: Deactivated successfully. Mar 13 00:51:46.602789 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:51:46.606336 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:51:46.611777 systemd[1]: Started sshd@14-10.0.0.136:22-10.0.0.1:60576.service - OpenSSH per-connection server daemon (10.0.0.1:60576). Mar 13 00:51:46.614586 systemd-logind[1542]: Removed session 14. Mar 13 00:51:46.662081 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 60576 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:46.663553 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:46.669428 systemd-logind[1542]: New session 15 of user core. Mar 13 00:51:46.687415 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:51:46.760009 sshd[4211]: Connection closed by 10.0.0.1 port 60576 Mar 13 00:51:46.760355 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:46.764740 systemd[1]: sshd@14-10.0.0.136:22-10.0.0.1:60576.service: Deactivated successfully. Mar 13 00:51:46.766969 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:51:46.768193 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:51:46.770231 systemd-logind[1542]: Removed session 15. Mar 13 00:51:51.772107 systemd[1]: Started sshd@15-10.0.0.136:22-10.0.0.1:45892.service - OpenSSH per-connection server daemon (10.0.0.1:45892). 
Mar 13 00:51:51.826477 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 45892 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:51.827878 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:51.834530 systemd-logind[1542]: New session 16 of user core. Mar 13 00:51:51.845399 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 13 00:51:51.924863 sshd[4227]: Connection closed by 10.0.0.1 port 45892 Mar 13 00:51:51.926786 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:51.931660 systemd[1]: sshd@15-10.0.0.136:22-10.0.0.1:45892.service: Deactivated successfully. Mar 13 00:51:51.934002 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:51:51.935307 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:51:51.937364 systemd-logind[1542]: Removed session 16. Mar 13 00:51:52.805053 update_engine[1548]: I20260313 00:51:52.804781 1548 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 13 00:51:52.805053 update_engine[1548]: I20260313 00:51:52.804991 1548 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 13 00:51:52.805845 update_engine[1548]: I20260313 00:51:52.805646 1548 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 13 00:51:52.806625 update_engine[1548]: I20260313 00:51:52.806550 1548 omaha_request_params.cc:62] Current group set to stable Mar 13 00:51:52.808615 update_engine[1548]: I20260313 00:51:52.808547 1548 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 13 00:51:52.808615 update_engine[1548]: I20260313 00:51:52.808600 1548 update_attempter.cc:643] Scheduling an action processor start. 
Mar 13 00:51:52.808687 update_engine[1548]: I20260313 00:51:52.808629 1548 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 13 00:51:52.808768 update_engine[1548]: I20260313 00:51:52.808705 1548 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 13 00:51:52.808922 update_engine[1548]: I20260313 00:51:52.808851 1548 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 13 00:51:52.808922 update_engine[1548]: I20260313 00:51:52.808899 1548 omaha_request_action.cc:272] Request:
Mar 13 00:51:52.808922 update_engine[1548]: I20260313 00:51:52.808915 1548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 13 00:51:52.814539 locksmithd[1579]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 13 00:51:52.818280 update_engine[1548]: I20260313 00:51:52.818046 1548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 13 00:51:52.819296 update_engine[1548]: I20260313 00:51:52.819085 1548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 13 00:51:52.838957 update_engine[1548]: E20260313 00:51:52.838852 1548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 13 00:51:52.839230 update_engine[1548]: I20260313 00:51:52.839059 1548 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 13 00:51:56.943819 systemd[1]: Started sshd@16-10.0.0.136:22-10.0.0.1:45904.service - OpenSSH per-connection server daemon (10.0.0.1:45904).
Mar 13 00:51:57.006824 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 45904 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:57.008655 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:57.014763 systemd-logind[1542]: New session 17 of user core. Mar 13 00:51:57.028442 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:51:57.118606 sshd[4243]: Connection closed by 10.0.0.1 port 45904 Mar 13 00:51:57.118990 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:57.127977 systemd[1]: sshd@16-10.0.0.136:22-10.0.0.1:45904.service: Deactivated successfully. Mar 13 00:51:57.130001 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:51:57.131310 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:51:57.134393 systemd[1]: Started sshd@17-10.0.0.136:22-10.0.0.1:45910.service - OpenSSH per-connection server daemon (10.0.0.1:45910). Mar 13 00:51:57.135988 systemd-logind[1542]: Removed session 17. Mar 13 00:51:57.189643 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 45910 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:57.190989 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:57.196559 systemd-logind[1542]: New session 18 of user core. Mar 13 00:51:57.216377 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 13 00:51:57.456777 sshd[4260]: Connection closed by 10.0.0.1 port 45910 Mar 13 00:51:57.457065 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:57.467241 systemd[1]: sshd@17-10.0.0.136:22-10.0.0.1:45910.service: Deactivated successfully. Mar 13 00:51:57.469655 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:51:57.471089 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit. 
Mar 13 00:51:57.474644 systemd[1]: Started sshd@18-10.0.0.136:22-10.0.0.1:45926.service - OpenSSH per-connection server daemon (10.0.0.1:45926). Mar 13 00:51:57.475931 systemd-logind[1542]: Removed session 18. Mar 13 00:51:57.542708 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 45926 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:57.544067 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:57.549901 systemd-logind[1542]: New session 19 of user core. Mar 13 00:51:57.557367 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:51:57.991908 sshd[4274]: Connection closed by 10.0.0.1 port 45926 Mar 13 00:51:57.992731 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:58.007359 systemd[1]: sshd@18-10.0.0.136:22-10.0.0.1:45926.service: Deactivated successfully. Mar 13 00:51:58.013396 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:51:58.014760 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit. Mar 13 00:51:58.018366 systemd[1]: Started sshd@19-10.0.0.136:22-10.0.0.1:45932.service - OpenSSH per-connection server daemon (10.0.0.1:45932). Mar 13 00:51:58.019972 systemd-logind[1542]: Removed session 19. Mar 13 00:51:58.071658 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 45932 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:58.073315 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:58.078865 systemd-logind[1542]: New session 20 of user core. Mar 13 00:51:58.087367 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 13 00:51:58.314397 sshd[4296]: Connection closed by 10.0.0.1 port 45932 Mar 13 00:51:58.316516 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:58.326553 systemd[1]: sshd@19-10.0.0.136:22-10.0.0.1:45932.service: Deactivated successfully. Mar 13 00:51:58.329924 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:51:58.331978 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit. Mar 13 00:51:58.337772 systemd-logind[1542]: Removed session 20. Mar 13 00:51:58.340512 systemd[1]: Started sshd@20-10.0.0.136:22-10.0.0.1:45942.service - OpenSSH per-connection server daemon (10.0.0.1:45942). Mar 13 00:51:58.414823 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 45942 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:51:58.416612 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:51:58.422846 systemd-logind[1542]: New session 21 of user core. Mar 13 00:51:58.436418 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 13 00:51:58.516733 sshd[4310]: Connection closed by 10.0.0.1 port 45942 Mar 13 00:51:58.518468 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Mar 13 00:51:58.523372 systemd[1]: sshd@20-10.0.0.136:22-10.0.0.1:45942.service: Deactivated successfully. Mar 13 00:51:58.525838 systemd[1]: session-21.scope: Deactivated successfully. Mar 13 00:51:58.527422 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit. Mar 13 00:51:58.529072 systemd-logind[1542]: Removed session 21. 
Mar 13 00:52:01.177024 kubelet[2781]: E0313 00:52:01.176926 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:52:02.804990 update_engine[1548]: I20260313 00:52:02.804836 1548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:52:02.804990 update_engine[1548]: I20260313 00:52:02.804976 1548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:52:02.805591 update_engine[1548]: I20260313 00:52:02.805569 1548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 13 00:52:02.837887 update_engine[1548]: E20260313 00:52:02.837785 1548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:52:02.837982 update_engine[1548]: I20260313 00:52:02.837909 1548 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 13 00:52:03.528976 systemd[1]: Started sshd@21-10.0.0.136:22-10.0.0.1:57090.service - OpenSSH per-connection server daemon (10.0.0.1:57090). Mar 13 00:52:03.580736 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 57090 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:52:03.582120 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:03.587562 systemd-logind[1542]: New session 22 of user core. Mar 13 00:52:03.600375 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 13 00:52:03.681032 sshd[4329]: Connection closed by 10.0.0.1 port 57090 Mar 13 00:52:03.681458 sshd-session[4326]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:03.685700 systemd[1]: sshd@21-10.0.0.136:22-10.0.0.1:57090.service: Deactivated successfully. Mar 13 00:52:03.688547 systemd[1]: session-22.scope: Deactivated successfully. Mar 13 00:52:03.689707 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit. 
Mar 13 00:52:03.692385 systemd-logind[1542]: Removed session 22. Mar 13 00:52:08.698496 systemd[1]: Started sshd@22-10.0.0.136:22-10.0.0.1:57092.service - OpenSSH per-connection server daemon (10.0.0.1:57092). Mar 13 00:52:08.755012 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 57092 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:52:08.756539 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:08.762073 systemd-logind[1542]: New session 23 of user core. Mar 13 00:52:08.776370 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 13 00:52:08.849921 sshd[4346]: Connection closed by 10.0.0.1 port 57092 Mar 13 00:52:08.850352 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:08.854553 systemd[1]: sshd@22-10.0.0.136:22-10.0.0.1:57092.service: Deactivated successfully. Mar 13 00:52:08.856790 systemd[1]: session-23.scope: Deactivated successfully. Mar 13 00:52:08.858736 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit. Mar 13 00:52:08.860310 systemd-logind[1542]: Removed session 23. Mar 13 00:52:12.807412 update_engine[1548]: I20260313 00:52:12.807224 1548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:52:12.807412 update_engine[1548]: I20260313 00:52:12.807389 1548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:52:12.807900 update_engine[1548]: I20260313 00:52:12.807819 1548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 13 00:52:12.832459 update_engine[1548]: E20260313 00:52:12.832262 1548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:52:12.832459 update_engine[1548]: I20260313 00:52:12.832437 1548 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 13 00:52:13.867251 systemd[1]: Started sshd@23-10.0.0.136:22-10.0.0.1:51192.service - OpenSSH per-connection server daemon (10.0.0.1:51192). Mar 13 00:52:13.924494 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 51192 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:52:13.926199 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:13.932224 systemd-logind[1542]: New session 24 of user core. Mar 13 00:52:13.945416 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 13 00:52:14.020786 sshd[4362]: Connection closed by 10.0.0.1 port 51192 Mar 13 00:52:14.023105 sshd-session[4359]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:14.030061 systemd[1]: sshd@23-10.0.0.136:22-10.0.0.1:51192.service: Deactivated successfully. Mar 13 00:52:14.032691 systemd[1]: session-24.scope: Deactivated successfully. Mar 13 00:52:14.033992 systemd-logind[1542]: Session 24 logged out. Waiting for processes to exit. Mar 13 00:52:14.037650 systemd[1]: Started sshd@24-10.0.0.136:22-10.0.0.1:51200.service - OpenSSH per-connection server daemon (10.0.0.1:51200). Mar 13 00:52:14.038965 systemd-logind[1542]: Removed session 24. Mar 13 00:52:14.098426 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 51200 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:52:14.100472 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:14.106636 systemd-logind[1542]: New session 25 of user core. Mar 13 00:52:14.118379 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 13 00:52:15.418757 kubelet[2781]: I0313 00:52:15.418645 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wbn8b" podStartSLOduration=90.41862926 podStartE2EDuration="1m30.41862926s" podCreationTimestamp="2026-03-13 00:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:51:16.251022798 +0000 UTC m=+36.308108160" watchObservedRunningTime="2026-03-13 00:52:15.41862926 +0000 UTC m=+95.475714624" Mar 13 00:52:15.436090 containerd[1556]: time="2026-03-13T00:52:15.434233248Z" level=info msg="StopContainer for \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\" with timeout 30 (s)" Mar 13 00:52:15.456897 containerd[1556]: time="2026-03-13T00:52:15.456825111Z" level=info msg="Stop container \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\" with signal terminated" Mar 13 00:52:15.475191 containerd[1556]: time="2026-03-13T00:52:15.475067909Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:52:15.485650 containerd[1556]: time="2026-03-13T00:52:15.485495700Z" level=info msg="StopContainer for \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\" with timeout 2 (s)" Mar 13 00:52:15.485939 containerd[1556]: time="2026-03-13T00:52:15.485884875Z" level=info msg="Stop container \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\" with signal terminated" Mar 13 00:52:15.494665 systemd[1]: cri-containerd-e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f.scope: Deactivated successfully. 
Mar 13 00:52:15.498019 containerd[1556]: time="2026-03-13T00:52:15.497950463Z" level=info msg="received container exit event container_id:\"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\" id:\"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\" pid:3328 exited_at:{seconds:1773363135 nanos:497546095}" Mar 13 00:52:15.504575 systemd-networkd[1456]: lxc_health: Link DOWN Mar 13 00:52:15.504589 systemd-networkd[1456]: lxc_health: Lost carrier Mar 13 00:52:15.521778 systemd[1]: cri-containerd-2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38.scope: Deactivated successfully. Mar 13 00:52:15.522893 systemd[1]: cri-containerd-2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38.scope: Consumed 9.366s CPU time, 129.5M memory peak, 304K read from disk, 13.3M written to disk. Mar 13 00:52:15.523225 containerd[1556]: time="2026-03-13T00:52:15.523134948Z" level=info msg="received container exit event container_id:\"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\" id:\"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\" pid:3436 exited_at:{seconds:1773363135 nanos:522751209}" Mar 13 00:52:15.547628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f-rootfs.mount: Deactivated successfully. Mar 13 00:52:15.568574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38-rootfs.mount: Deactivated successfully. 
Mar 13 00:52:15.574048 containerd[1556]: time="2026-03-13T00:52:15.573924894Z" level=info msg="StopContainer for \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\" returns successfully" Mar 13 00:52:15.579090 containerd[1556]: time="2026-03-13T00:52:15.578487030Z" level=info msg="StopPodSandbox for \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\"" Mar 13 00:52:15.581606 containerd[1556]: time="2026-03-13T00:52:15.581252695Z" level=info msg="Container to stop \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:52:15.581606 containerd[1556]: time="2026-03-13T00:52:15.581285215Z" level=info msg="Container to stop \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:52:15.581606 containerd[1556]: time="2026-03-13T00:52:15.581352601Z" level=info msg="Container to stop \"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:52:15.581606 containerd[1556]: time="2026-03-13T00:52:15.581365995Z" level=info msg="Container to stop \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:52:15.581606 containerd[1556]: time="2026-03-13T00:52:15.581379220Z" level=info msg="Container to stop \"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:52:15.587237 containerd[1556]: time="2026-03-13T00:52:15.587132150Z" level=info msg="StopContainer for \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\" returns successfully" Mar 13 00:52:15.589279 containerd[1556]: time="2026-03-13T00:52:15.589102971Z" level=info msg="StopPodSandbox for 
\"48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35\"" Mar 13 00:52:15.589390 containerd[1556]: time="2026-03-13T00:52:15.589333380Z" level=info msg="Container to stop \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:52:15.590739 systemd[1]: cri-containerd-e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8.scope: Deactivated successfully. Mar 13 00:52:15.594228 containerd[1556]: time="2026-03-13T00:52:15.594128703Z" level=info msg="received sandbox exit event container_id:\"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" id:\"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" exit_status:137 exited_at:{seconds:1773363135 nanos:593870274}" monitor_name=podsandbox Mar 13 00:52:15.601604 systemd[1]: cri-containerd-48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35.scope: Deactivated successfully. Mar 13 00:52:15.603933 containerd[1556]: time="2026-03-13T00:52:15.603884913Z" level=info msg="received sandbox exit event container_id:\"48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35\" id:\"48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35\" exit_status:137 exited_at:{seconds:1773363135 nanos:603692819}" monitor_name=podsandbox Mar 13 00:52:15.629753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8-rootfs.mount: Deactivated successfully. 
Mar 13 00:52:15.636616 containerd[1556]: time="2026-03-13T00:52:15.636455230Z" level=info msg="shim disconnected" id=e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8 namespace=k8s.io Mar 13 00:52:15.636616 containerd[1556]: time="2026-03-13T00:52:15.636517487Z" level=warning msg="cleaning up after shim disconnected" id=e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8 namespace=k8s.io Mar 13 00:52:15.639534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35-rootfs.mount: Deactivated successfully. Mar 13 00:52:15.643695 containerd[1556]: time="2026-03-13T00:52:15.636531032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:52:15.644271 containerd[1556]: time="2026-03-13T00:52:15.644219213Z" level=info msg="shim disconnected" id=48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35 namespace=k8s.io Mar 13 00:52:15.644271 containerd[1556]: time="2026-03-13T00:52:15.644259939Z" level=warning msg="cleaning up after shim disconnected" id=48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35 namespace=k8s.io Mar 13 00:52:15.644424 containerd[1556]: time="2026-03-13T00:52:15.644268936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:52:15.664961 containerd[1556]: time="2026-03-13T00:52:15.664872128Z" level=info msg="TearDown network for sandbox \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" successfully" Mar 13 00:52:15.664961 containerd[1556]: time="2026-03-13T00:52:15.664921469Z" level=info msg="StopPodSandbox for \"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" returns successfully" Mar 13 00:52:15.667847 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8-shm.mount: Deactivated successfully. 
Mar 13 00:52:15.668632 containerd[1556]: time="2026-03-13T00:52:15.668556435Z" level=info msg="TearDown network for sandbox \"48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35\" successfully" Mar 13 00:52:15.668731 containerd[1556]: time="2026-03-13T00:52:15.668714970Z" level=info msg="StopPodSandbox for \"48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35\" returns successfully" Mar 13 00:52:15.682076 containerd[1556]: time="2026-03-13T00:52:15.681920862Z" level=info msg="received sandbox container exit event sandbox_id:\"e32056036477cd9331912df09a8aa9a06dfea833e1ce9f9f5b4600896b338bc8\" exit_status:137 exited_at:{seconds:1773363135 nanos:593870274}" monitor_name=criService Mar 13 00:52:15.682076 containerd[1556]: time="2026-03-13T00:52:15.682049334Z" level=info msg="received sandbox container exit event sandbox_id:\"48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35\" exit_status:137 exited_at:{seconds:1773363135 nanos:603692819}" monitor_name=criService Mar 13 00:52:15.711581 kubelet[2781]: I0313 00:52:15.711507 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-hubble-tls\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.711581 kubelet[2781]: I0313 00:52:15.711541 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-lib-modules\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.711581 kubelet[2781]: I0313 00:52:15.711560 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-host-proc-sys-net\") pod 
\"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.711581 kubelet[2781]: I0313 00:52:15.711574 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-xtables-lock\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.711581 kubelet[2781]: I0313 00:52:15.711593 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4415ec1-cbda-47b6-8d7a-81dd462e6a84-cilium-config-path\") pod \"c4415ec1-cbda-47b6-8d7a-81dd462e6a84\" (UID: \"c4415ec1-cbda-47b6-8d7a-81dd462e6a84\") " Mar 13 00:52:15.711899 kubelet[2781]: I0313 00:52:15.711608 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgl5n\" (UniqueName: \"kubernetes.io/projected/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-kube-api-access-kgl5n\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.711899 kubelet[2781]: I0313 00:52:15.711620 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-bpf-maps\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.711899 kubelet[2781]: I0313 00:52:15.711634 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-host-proc-sys-kernel\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.711899 kubelet[2781]: I0313 00:52:15.711647 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-hostproc\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.711899 kubelet[2781]: I0313 00:52:15.711660 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45xk7\" (UniqueName: \"kubernetes.io/projected/c4415ec1-cbda-47b6-8d7a-81dd462e6a84-kube-api-access-45xk7\") pod \"c4415ec1-cbda-47b6-8d7a-81dd462e6a84\" (UID: \"c4415ec1-cbda-47b6-8d7a-81dd462e6a84\") " Mar 13 00:52:15.711899 kubelet[2781]: I0313 00:52:15.711672 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-etc-cni-netd\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.712476 kubelet[2781]: I0313 00:52:15.711688 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-clustermesh-secrets\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.712476 kubelet[2781]: I0313 00:52:15.711753 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cni-path\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.712476 kubelet[2781]: I0313 00:52:15.711769 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-config-path\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.712476 
kubelet[2781]: I0313 00:52:15.711784 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-run\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.712476 kubelet[2781]: I0313 00:52:15.711800 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-cgroup\") pod \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\" (UID: \"b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc\") " Mar 13 00:52:15.712476 kubelet[2781]: I0313 00:52:15.711890 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:52:15.712615 kubelet[2781]: I0313 00:52:15.711959 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:52:15.712615 kubelet[2781]: I0313 00:52:15.711975 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:52:15.712615 kubelet[2781]: I0313 00:52:15.711988 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:52:15.712615 kubelet[2781]: I0313 00:52:15.712108 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cni-path" (OuterVolumeSpecName: "cni-path") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:52:15.712615 kubelet[2781]: I0313 00:52:15.712203 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:52:15.713458 kubelet[2781]: I0313 00:52:15.713388 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:52:15.713458 kubelet[2781]: I0313 00:52:15.713448 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:52:15.716212 kubelet[2781]: I0313 00:52:15.714268 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:52:15.716212 kubelet[2781]: I0313 00:52:15.714343 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-hostproc" (OuterVolumeSpecName: "hostproc") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:52:15.717815 kubelet[2781]: I0313 00:52:15.717775 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-kube-api-access-kgl5n" (OuterVolumeSpecName: "kube-api-access-kgl5n") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "kube-api-access-kgl5n". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:52:15.719098 kubelet[2781]: I0313 00:52:15.719043 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:52:15.719647 kubelet[2781]: I0313 00:52:15.719576 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:52:15.720385 kubelet[2781]: I0313 00:52:15.720346 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4415ec1-cbda-47b6-8d7a-81dd462e6a84-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c4415ec1-cbda-47b6-8d7a-81dd462e6a84" (UID: "c4415ec1-cbda-47b6-8d7a-81dd462e6a84"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:52:15.722234 kubelet[2781]: I0313 00:52:15.722106 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" (UID: "b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:52:15.722615 kubelet[2781]: I0313 00:52:15.722576 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4415ec1-cbda-47b6-8d7a-81dd462e6a84-kube-api-access-45xk7" (OuterVolumeSpecName: "kube-api-access-45xk7") pod "c4415ec1-cbda-47b6-8d7a-81dd462e6a84" (UID: "c4415ec1-cbda-47b6-8d7a-81dd462e6a84"). InnerVolumeSpecName "kube-api-access-45xk7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:52:15.812107 kubelet[2781]: I0313 00:52:15.811969 2781 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812107 kubelet[2781]: I0313 00:52:15.812017 2781 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812107 kubelet[2781]: I0313 00:52:15.812027 2781 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4415ec1-cbda-47b6-8d7a-81dd462e6a84-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812107 kubelet[2781]: I0313 00:52:15.812035 2781 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kgl5n\" (UniqueName: \"kubernetes.io/projected/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-kube-api-access-kgl5n\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812107 kubelet[2781]: I0313 00:52:15.812045 2781 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812107 kubelet[2781]: I0313 00:52:15.812053 2781 reconciler_common.go:299] "Volume detached for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812107 kubelet[2781]: I0313 00:52:15.812061 2781 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812107 kubelet[2781]: I0313 00:52:15.812071 2781 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-45xk7\" (UniqueName: \"kubernetes.io/projected/c4415ec1-cbda-47b6-8d7a-81dd462e6a84-kube-api-access-45xk7\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812489 kubelet[2781]: I0313 00:52:15.812081 2781 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812489 kubelet[2781]: I0313 00:52:15.812088 2781 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812489 kubelet[2781]: I0313 00:52:15.812128 2781 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812489 kubelet[2781]: I0313 00:52:15.812137 2781 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812489 kubelet[2781]: I0313 00:52:15.812199 2781 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812489 kubelet[2781]: I0313 00:52:15.812207 2781 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812489 kubelet[2781]: I0313 00:52:15.812215 2781 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:15.812489 kubelet[2781]: I0313 00:52:15.812222 2781 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 13 00:52:16.176865 kubelet[2781]: E0313 00:52:16.176748 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:52:16.189452 systemd[1]: Removed slice kubepods-besteffort-podc4415ec1_cbda_47b6_8d7a_81dd462e6a84.slice - libcontainer container kubepods-besteffort-podc4415ec1_cbda_47b6_8d7a_81dd462e6a84.slice. Mar 13 00:52:16.191676 systemd[1]: Removed slice kubepods-burstable-podb9b1ee93_12e6_4ac4_bc2c_e59e89a37fcc.slice - libcontainer container kubepods-burstable-podb9b1ee93_12e6_4ac4_bc2c_e59e89a37fcc.slice. Mar 13 00:52:16.191761 systemd[1]: kubepods-burstable-podb9b1ee93_12e6_4ac4_bc2c_e59e89a37fcc.slice: Consumed 9.664s CPU time, 129.8M memory peak, 312K read from disk, 13.3M written to disk. 
Mar 13 00:52:16.352709 kubelet[2781]: I0313 00:52:16.352486 2781 scope.go:117] "RemoveContainer" containerID="e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f" Mar 13 00:52:16.357375 containerd[1556]: time="2026-03-13T00:52:16.356753647Z" level=info msg="RemoveContainer for \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\"" Mar 13 00:52:16.372462 containerd[1556]: time="2026-03-13T00:52:16.372427565Z" level=info msg="RemoveContainer for \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\" returns successfully" Mar 13 00:52:16.373005 kubelet[2781]: I0313 00:52:16.372920 2781 scope.go:117] "RemoveContainer" containerID="e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f" Mar 13 00:52:16.373277 containerd[1556]: time="2026-03-13T00:52:16.373244736Z" level=error msg="ContainerStatus for \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\": not found" Mar 13 00:52:16.373720 kubelet[2781]: E0313 00:52:16.373669 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\": not found" containerID="e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f" Mar 13 00:52:16.373832 kubelet[2781]: I0313 00:52:16.373712 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f"} err="failed to get container status \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7d2b7e965a3e4822005d76ae4adbcf42f3c49818bc3dff60d77d2ce9db28b2f\": not found" Mar 13 00:52:16.373832 
kubelet[2781]: I0313 00:52:16.373799 2781 scope.go:117] "RemoveContainer" containerID="2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38" Mar 13 00:52:16.376357 containerd[1556]: time="2026-03-13T00:52:16.376098122Z" level=info msg="RemoveContainer for \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\"" Mar 13 00:52:16.381862 containerd[1556]: time="2026-03-13T00:52:16.381718720Z" level=info msg="RemoveContainer for \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\" returns successfully" Mar 13 00:52:16.382017 kubelet[2781]: I0313 00:52:16.381958 2781 scope.go:117] "RemoveContainer" containerID="95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464" Mar 13 00:52:16.384281 containerd[1556]: time="2026-03-13T00:52:16.384098615Z" level=info msg="RemoveContainer for \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\"" Mar 13 00:52:16.390643 containerd[1556]: time="2026-03-13T00:52:16.390537442Z" level=info msg="RemoveContainer for \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\" returns successfully" Mar 13 00:52:16.390890 kubelet[2781]: I0313 00:52:16.390821 2781 scope.go:117] "RemoveContainer" containerID="81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678" Mar 13 00:52:16.394503 containerd[1556]: time="2026-03-13T00:52:16.394453868Z" level=info msg="RemoveContainer for \"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\"" Mar 13 00:52:16.400948 containerd[1556]: time="2026-03-13T00:52:16.400803813Z" level=info msg="RemoveContainer for \"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\" returns successfully" Mar 13 00:52:16.401425 kubelet[2781]: I0313 00:52:16.401398 2781 scope.go:117] "RemoveContainer" containerID="773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1" Mar 13 00:52:16.403327 containerd[1556]: time="2026-03-13T00:52:16.403256663Z" level=info msg="RemoveContainer for 
\"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\"" Mar 13 00:52:16.408396 containerd[1556]: time="2026-03-13T00:52:16.408336167Z" level=info msg="RemoveContainer for \"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\" returns successfully" Mar 13 00:52:16.408679 kubelet[2781]: I0313 00:52:16.408628 2781 scope.go:117] "RemoveContainer" containerID="f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4" Mar 13 00:52:16.410384 containerd[1556]: time="2026-03-13T00:52:16.410351849Z" level=info msg="RemoveContainer for \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\"" Mar 13 00:52:16.414953 containerd[1556]: time="2026-03-13T00:52:16.414862092Z" level=info msg="RemoveContainer for \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\" returns successfully" Mar 13 00:52:16.415236 kubelet[2781]: I0313 00:52:16.415117 2781 scope.go:117] "RemoveContainer" containerID="2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38" Mar 13 00:52:16.415696 containerd[1556]: time="2026-03-13T00:52:16.415604455Z" level=error msg="ContainerStatus for \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\": not found" Mar 13 00:52:16.415804 kubelet[2781]: E0313 00:52:16.415750 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\": not found" containerID="2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38" Mar 13 00:52:16.415872 kubelet[2781]: I0313 00:52:16.415798 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38"} err="failed to get 
container status \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a1eedf4ec68a3fba5ed176e802369cf7c30dd60e462944ec046871e9515da38\": not found" Mar 13 00:52:16.415872 kubelet[2781]: I0313 00:52:16.415815 2781 scope.go:117] "RemoveContainer" containerID="95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464" Mar 13 00:52:16.416213 containerd[1556]: time="2026-03-13T00:52:16.416055870Z" level=error msg="ContainerStatus for \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\": not found" Mar 13 00:52:16.416279 kubelet[2781]: E0313 00:52:16.416250 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\": not found" containerID="95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464" Mar 13 00:52:16.416279 kubelet[2781]: I0313 00:52:16.416271 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464"} err="failed to get container status \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\": rpc error: code = NotFound desc = an error occurred when try to find container \"95d6028a2390acb1678a473928f4a83e8ecd3641751ec59b00644f69135ad464\": not found" Mar 13 00:52:16.416490 kubelet[2781]: I0313 00:52:16.416285 2781 scope.go:117] "RemoveContainer" containerID="81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678" Mar 13 00:52:16.416541 containerd[1556]: time="2026-03-13T00:52:16.416473998Z" level=error msg="ContainerStatus for 
\"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\": not found" Mar 13 00:52:16.416657 kubelet[2781]: E0313 00:52:16.416601 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\": not found" containerID="81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678" Mar 13 00:52:16.416657 kubelet[2781]: I0313 00:52:16.416638 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678"} err="failed to get container status \"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\": rpc error: code = NotFound desc = an error occurred when try to find container \"81662e8761b9ceba1c5b984376e7d0c1d65ff41a517eda2b98a5396d02919678\": not found" Mar 13 00:52:16.416657 kubelet[2781]: I0313 00:52:16.416652 2781 scope.go:117] "RemoveContainer" containerID="773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1" Mar 13 00:52:16.416953 containerd[1556]: time="2026-03-13T00:52:16.416890999Z" level=error msg="ContainerStatus for \"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\": not found" Mar 13 00:52:16.417464 kubelet[2781]: E0313 00:52:16.417275 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\": not found" 
containerID="773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1" Mar 13 00:52:16.417464 kubelet[2781]: I0313 00:52:16.417379 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1"} err="failed to get container status \"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"773008a7aa510c337aeb6e058c50aa3712c2c88e607393cf2ab036ee9fd729b1\": not found" Mar 13 00:52:16.417464 kubelet[2781]: I0313 00:52:16.417402 2781 scope.go:117] "RemoveContainer" containerID="f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4" Mar 13 00:52:16.417746 containerd[1556]: time="2026-03-13T00:52:16.417617775Z" level=error msg="ContainerStatus for \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\": not found" Mar 13 00:52:16.417979 kubelet[2781]: E0313 00:52:16.417869 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\": not found" containerID="f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4" Mar 13 00:52:16.417979 kubelet[2781]: I0313 00:52:16.417932 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4"} err="failed to get container status \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3ccfce812d37554d9126c26edf662a395fd032b9c73f0d9124786b038e989a4\": not found" Mar 13 
00:52:16.546781 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48cfd8d4661d4d527bf576644dc87bd0ceae2d76a18d6a036a61fc52bd43fd35-shm.mount: Deactivated successfully. Mar 13 00:52:16.546898 systemd[1]: var-lib-kubelet-pods-c4415ec1\x2dcbda\x2d47b6\x2d8d7a\x2d81dd462e6a84-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d45xk7.mount: Deactivated successfully. Mar 13 00:52:16.546985 systemd[1]: var-lib-kubelet-pods-b9b1ee93\x2d12e6\x2d4ac4\x2dbc2c\x2de59e89a37fcc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 13 00:52:16.547105 systemd[1]: var-lib-kubelet-pods-b9b1ee93\x2d12e6\x2d4ac4\x2dbc2c\x2de59e89a37fcc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkgl5n.mount: Deactivated successfully. Mar 13 00:52:16.547878 systemd[1]: var-lib-kubelet-pods-b9b1ee93\x2d12e6\x2d4ac4\x2dbc2c\x2de59e89a37fcc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 13 00:52:17.054570 kubelet[2781]: E0313 00:52:17.054456 2781 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 13 00:52:17.384501 sshd[4379]: Connection closed by 10.0.0.1 port 51200 Mar 13 00:52:17.385452 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:17.396529 systemd[1]: sshd@24-10.0.0.136:22-10.0.0.1:51200.service: Deactivated successfully. Mar 13 00:52:17.398798 systemd[1]: session-25.scope: Deactivated successfully. Mar 13 00:52:17.400200 systemd-logind[1542]: Session 25 logged out. Waiting for processes to exit. Mar 13 00:52:17.404741 systemd[1]: Started sshd@25-10.0.0.136:22-10.0.0.1:51208.service - OpenSSH per-connection server daemon (10.0.0.1:51208). Mar 13 00:52:17.405790 systemd-logind[1542]: Removed session 25. 
Mar 13 00:52:17.474493 sshd[4528]: Accepted publickey for core from 10.0.0.1 port 51208 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:52:17.476272 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:17.483841 systemd-logind[1542]: New session 26 of user core. Mar 13 00:52:17.497387 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 13 00:52:17.937253 sshd[4531]: Connection closed by 10.0.0.1 port 51208 Mar 13 00:52:17.938472 sshd-session[4528]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:17.954763 systemd[1]: sshd@25-10.0.0.136:22-10.0.0.1:51208.service: Deactivated successfully. Mar 13 00:52:17.960736 systemd[1]: session-26.scope: Deactivated successfully. Mar 13 00:52:17.963546 systemd-logind[1542]: Session 26 logged out. Waiting for processes to exit. Mar 13 00:52:17.970026 systemd[1]: Started sshd@26-10.0.0.136:22-10.0.0.1:51220.service - OpenSSH per-connection server daemon (10.0.0.1:51220). Mar 13 00:52:17.973763 systemd-logind[1542]: Removed session 26. Mar 13 00:52:17.990038 systemd[1]: Created slice kubepods-burstable-podb0b70b63_c330_44e1_9d2e_0cf676be99d4.slice - libcontainer container kubepods-burstable-podb0b70b63_c330_44e1_9d2e_0cf676be99d4.slice. 
Mar 13 00:52:18.039349 sshd[4543]: Accepted publickey for core from 10.0.0.1 port 51220 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:52:18.040912 kubelet[2781]: I0313 00:52:18.040878 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0b70b63-c330-44e1-9d2e-0cf676be99d4-hostproc\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041032 kubelet[2781]: I0313 00:52:18.040915 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0b70b63-c330-44e1-9d2e-0cf676be99d4-etc-cni-netd\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041032 kubelet[2781]: I0313 00:52:18.040937 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0b70b63-c330-44e1-9d2e-0cf676be99d4-clustermesh-secrets\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041032 kubelet[2781]: I0313 00:52:18.040951 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0b70b63-c330-44e1-9d2e-0cf676be99d4-host-proc-sys-kernel\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041032 kubelet[2781]: I0313 00:52:18.040974 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0b70b63-c330-44e1-9d2e-0cf676be99d4-xtables-lock\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " 
pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041032 kubelet[2781]: I0313 00:52:18.040998 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0b70b63-c330-44e1-9d2e-0cf676be99d4-cni-path\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041032 kubelet[2781]: I0313 00:52:18.041021 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0b70b63-c330-44e1-9d2e-0cf676be99d4-host-proc-sys-net\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041251 kubelet[2781]: I0313 00:52:18.041045 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0b70b63-c330-44e1-9d2e-0cf676be99d4-lib-modules\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041251 kubelet[2781]: I0313 00:52:18.041072 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0b70b63-c330-44e1-9d2e-0cf676be99d4-cilium-cgroup\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041251 kubelet[2781]: I0313 00:52:18.041102 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrmhr\" (UniqueName: \"kubernetes.io/projected/b0b70b63-c330-44e1-9d2e-0cf676be99d4-kube-api-access-zrmhr\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041251 kubelet[2781]: I0313 00:52:18.041130 2781 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0b70b63-c330-44e1-9d2e-0cf676be99d4-bpf-maps\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041251 kubelet[2781]: I0313 00:52:18.041234 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0b70b63-c330-44e1-9d2e-0cf676be99d4-cilium-config-path\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041079 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:18.041595 kubelet[2781]: I0313 00:52:18.041265 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0b70b63-c330-44e1-9d2e-0cf676be99d4-hubble-tls\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041595 kubelet[2781]: I0313 00:52:18.041291 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0b70b63-c330-44e1-9d2e-0cf676be99d4-cilium-run\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.041595 kubelet[2781]: I0313 00:52:18.041369 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0b70b63-c330-44e1-9d2e-0cf676be99d4-cilium-ipsec-secrets\") pod \"cilium-j6v97\" (UID: \"b0b70b63-c330-44e1-9d2e-0cf676be99d4\") " pod="kube-system/cilium-j6v97" Mar 13 00:52:18.048016 systemd-logind[1542]: New session 27 of user core. 
Mar 13 00:52:18.058441 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 13 00:52:18.073734 sshd[4546]: Connection closed by 10.0.0.1 port 51220 Mar 13 00:52:18.074103 sshd-session[4543]: pam_unix(sshd:session): session closed for user core Mar 13 00:52:18.087090 systemd[1]: sshd@26-10.0.0.136:22-10.0.0.1:51220.service: Deactivated successfully. Mar 13 00:52:18.090063 systemd[1]: session-27.scope: Deactivated successfully. Mar 13 00:52:18.092047 systemd-logind[1542]: Session 27 logged out. Waiting for processes to exit. Mar 13 00:52:18.096801 systemd[1]: Started sshd@27-10.0.0.136:22-10.0.0.1:51222.service - OpenSSH per-connection server daemon (10.0.0.1:51222). Mar 13 00:52:18.099105 systemd-logind[1542]: Removed session 27. Mar 13 00:52:18.176016 sshd[4553]: Accepted publickey for core from 10.0.0.1 port 51222 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:52:18.178002 sshd-session[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:52:18.181214 kubelet[2781]: I0313 00:52:18.179845 2781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc" path="/var/lib/kubelet/pods/b9b1ee93-12e6-4ac4-bc2c-e59e89a37fcc/volumes" Mar 13 00:52:18.181214 kubelet[2781]: I0313 00:52:18.180806 2781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4415ec1-cbda-47b6-8d7a-81dd462e6a84" path="/var/lib/kubelet/pods/c4415ec1-cbda-47b6-8d7a-81dd462e6a84/volumes" Mar 13 00:52:18.183952 systemd-logind[1542]: New session 28 of user core. Mar 13 00:52:18.197575 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 13 00:52:18.295393 kubelet[2781]: E0313 00:52:18.295266 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:52:18.296242 containerd[1556]: time="2026-03-13T00:52:18.295965397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j6v97,Uid:b0b70b63-c330-44e1-9d2e-0cf676be99d4,Namespace:kube-system,Attempt:0,}" Mar 13 00:52:18.320792 containerd[1556]: time="2026-03-13T00:52:18.320710299Z" level=info msg="connecting to shim 75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6" address="unix:///run/containerd/s/1c6da431a3af480cb0428c3f851ca4be22faac25f1d8e527968719c9aa7e945e" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:52:18.357409 systemd[1]: Started cri-containerd-75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6.scope - libcontainer container 75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6. 
Mar 13 00:52:18.408063 containerd[1556]: time="2026-03-13T00:52:18.407976825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j6v97,Uid:b0b70b63-c330-44e1-9d2e-0cf676be99d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\"" Mar 13 00:52:18.410476 kubelet[2781]: E0313 00:52:18.410282 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:52:18.418927 containerd[1556]: time="2026-03-13T00:52:18.418879072Z" level=info msg="CreateContainer within sandbox \"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 00:52:18.428248 containerd[1556]: time="2026-03-13T00:52:18.428198658Z" level=info msg="Container b7e60f8e58d2495a22a1cac614d136e9d9885360edfdd846dca52afb4d1628dd: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:52:18.436728 containerd[1556]: time="2026-03-13T00:52:18.436650172Z" level=info msg="CreateContainer within sandbox \"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b7e60f8e58d2495a22a1cac614d136e9d9885360edfdd846dca52afb4d1628dd\"" Mar 13 00:52:18.437421 containerd[1556]: time="2026-03-13T00:52:18.437392941Z" level=info msg="StartContainer for \"b7e60f8e58d2495a22a1cac614d136e9d9885360edfdd846dca52afb4d1628dd\"" Mar 13 00:52:18.438675 containerd[1556]: time="2026-03-13T00:52:18.438603154Z" level=info msg="connecting to shim b7e60f8e58d2495a22a1cac614d136e9d9885360edfdd846dca52afb4d1628dd" address="unix:///run/containerd/s/1c6da431a3af480cb0428c3f851ca4be22faac25f1d8e527968719c9aa7e945e" protocol=ttrpc version=3 Mar 13 00:52:18.472527 systemd[1]: Started cri-containerd-b7e60f8e58d2495a22a1cac614d136e9d9885360edfdd846dca52afb4d1628dd.scope - libcontainer 
container b7e60f8e58d2495a22a1cac614d136e9d9885360edfdd846dca52afb4d1628dd. Mar 13 00:52:18.526046 containerd[1556]: time="2026-03-13T00:52:18.525965757Z" level=info msg="StartContainer for \"b7e60f8e58d2495a22a1cac614d136e9d9885360edfdd846dca52afb4d1628dd\" returns successfully" Mar 13 00:52:18.539599 systemd[1]: cri-containerd-b7e60f8e58d2495a22a1cac614d136e9d9885360edfdd846dca52afb4d1628dd.scope: Deactivated successfully. Mar 13 00:52:18.543651 containerd[1556]: time="2026-03-13T00:52:18.543574080Z" level=info msg="received container exit event container_id:\"b7e60f8e58d2495a22a1cac614d136e9d9885360edfdd846dca52afb4d1628dd\" id:\"b7e60f8e58d2495a22a1cac614d136e9d9885360edfdd846dca52afb4d1628dd\" pid:4628 exited_at:{seconds:1773363138 nanos:543079619}" Mar 13 00:52:19.176945 kubelet[2781]: E0313 00:52:19.176852 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wbn8b" podUID="b14dcf3c-e8d9-43f9-a17a-cc5f6f5dd9b3" Mar 13 00:52:19.376717 kubelet[2781]: E0313 00:52:19.376648 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:52:19.382470 containerd[1556]: time="2026-03-13T00:52:19.382416664Z" level=info msg="CreateContainer within sandbox \"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 00:52:19.394992 containerd[1556]: time="2026-03-13T00:52:19.394731921Z" level=info msg="Container 8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:52:19.405803 containerd[1556]: time="2026-03-13T00:52:19.405697279Z" level=info msg="CreateContainer within sandbox 
\"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a\"" Mar 13 00:52:19.406780 containerd[1556]: time="2026-03-13T00:52:19.406646896Z" level=info msg="StartContainer for \"8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a\"" Mar 13 00:52:19.408775 containerd[1556]: time="2026-03-13T00:52:19.408710539Z" level=info msg="connecting to shim 8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a" address="unix:///run/containerd/s/1c6da431a3af480cb0428c3f851ca4be22faac25f1d8e527968719c9aa7e945e" protocol=ttrpc version=3 Mar 13 00:52:19.437397 systemd[1]: Started cri-containerd-8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a.scope - libcontainer container 8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a. Mar 13 00:52:19.497702 containerd[1556]: time="2026-03-13T00:52:19.497628570Z" level=info msg="StartContainer for \"8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a\" returns successfully" Mar 13 00:52:19.509957 systemd[1]: cri-containerd-8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a.scope: Deactivated successfully. Mar 13 00:52:19.511799 containerd[1556]: time="2026-03-13T00:52:19.511753098Z" level=info msg="received container exit event container_id:\"8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a\" id:\"8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a\" pid:4672 exited_at:{seconds:1773363139 nanos:511047775}" Mar 13 00:52:19.547220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d6fbae8ff06ec35ef5dbd2310c04096bd48c7ba40c17877d06bba622bdfe35a-rootfs.mount: Deactivated successfully. 
Mar 13 00:52:20.177115 kubelet[2781]: E0313 00:52:20.176988 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:52:20.381588 kubelet[2781]: E0313 00:52:20.381477 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:52:20.387203 containerd[1556]: time="2026-03-13T00:52:20.386999802Z" level=info msg="CreateContainer within sandbox \"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 13 00:52:20.403653 containerd[1556]: time="2026-03-13T00:52:20.402431535Z" level=info msg="Container b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:52:20.414630 containerd[1556]: time="2026-03-13T00:52:20.414555809Z" level=info msg="CreateContainer within sandbox \"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933\"" Mar 13 00:52:20.415488 containerd[1556]: time="2026-03-13T00:52:20.415373962Z" level=info msg="StartContainer for \"b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933\"" Mar 13 00:52:20.417721 containerd[1556]: time="2026-03-13T00:52:20.417580321Z" level=info msg="connecting to shim b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933" address="unix:///run/containerd/s/1c6da431a3af480cb0428c3f851ca4be22faac25f1d8e527968719c9aa7e945e" protocol=ttrpc version=3 Mar 13 00:52:20.446458 systemd[1]: Started cri-containerd-b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933.scope - libcontainer container b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933. 
Mar 13 00:52:20.563081 containerd[1556]: time="2026-03-13T00:52:20.562925831Z" level=info msg="StartContainer for \"b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933\" returns successfully"
Mar 13 00:52:20.567285 systemd[1]: cri-containerd-b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933.scope: Deactivated successfully.
Mar 13 00:52:20.570115 containerd[1556]: time="2026-03-13T00:52:20.569977451Z" level=info msg="received container exit event container_id:\"b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933\" id:\"b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933\" pid:4717 exited_at:{seconds:1773363140 nanos:569740559}"
Mar 13 00:52:21.176518 kubelet[2781]: E0313 00:52:21.176421 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wbn8b" podUID="b14dcf3c-e8d9-43f9-a17a-cc5f6f5dd9b3"
Mar 13 00:52:21.388577 kubelet[2781]: E0313 00:52:21.388543 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:52:21.396182 containerd[1556]: time="2026-03-13T00:52:21.395567138Z" level=info msg="CreateContainer within sandbox \"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 13 00:52:21.400702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0fe7623c95a6539637f8298c8220dcd908503b6473794d1b00ec3e54653e933-rootfs.mount: Deactivated successfully.
Mar 13 00:52:21.409115 containerd[1556]: time="2026-03-13T00:52:21.408128788Z" level=info msg="Container ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:52:21.419620 containerd[1556]: time="2026-03-13T00:52:21.419501055Z" level=info msg="CreateContainer within sandbox \"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c\""
Mar 13 00:52:21.420307 containerd[1556]: time="2026-03-13T00:52:21.420233457Z" level=info msg="StartContainer for \"ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c\""
Mar 13 00:52:21.421734 containerd[1556]: time="2026-03-13T00:52:21.421660634Z" level=info msg="connecting to shim ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c" address="unix:///run/containerd/s/1c6da431a3af480cb0428c3f851ca4be22faac25f1d8e527968719c9aa7e945e" protocol=ttrpc version=3
Mar 13 00:52:21.455415 systemd[1]: Started cri-containerd-ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c.scope - libcontainer container ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c.
Mar 13 00:52:21.495842 systemd[1]: cri-containerd-ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c.scope: Deactivated successfully.
Mar 13 00:52:21.499307 containerd[1556]: time="2026-03-13T00:52:21.498936351Z" level=info msg="received container exit event container_id:\"ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c\" id:\"ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c\" pid:4757 exited_at:{seconds:1773363141 nanos:498210259}"
Mar 13 00:52:21.500065 containerd[1556]: time="2026-03-13T00:52:21.500018637Z" level=info msg="StartContainer for \"ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c\" returns successfully"
Mar 13 00:52:21.532410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee66a7bbd8737874f4fb55caf9cffbe4174958130bde3f74ba26fc8cd1a90c3c-rootfs.mount: Deactivated successfully.
Mar 13 00:52:22.055865 kubelet[2781]: E0313 00:52:22.055813 2781 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 13 00:52:22.393750 kubelet[2781]: E0313 00:52:22.393699 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:52:22.399756 containerd[1556]: time="2026-03-13T00:52:22.399681156Z" level=info msg="CreateContainer within sandbox \"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 13 00:52:22.418343 containerd[1556]: time="2026-03-13T00:52:22.418257460Z" level=info msg="Container e6806d327179590052276b030d4a24d553e96b672994df218ff1e737f6e9826f: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:52:22.420375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2613308102.mount: Deactivated successfully.
Mar 13 00:52:22.427560 containerd[1556]: time="2026-03-13T00:52:22.427490622Z" level=info msg="CreateContainer within sandbox \"75bd99e17a0cf30f761baf1564263292e0dedcc9ebb7a1e5e5dcefe694c82de6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e6806d327179590052276b030d4a24d553e96b672994df218ff1e737f6e9826f\""
Mar 13 00:52:22.428358 containerd[1556]: time="2026-03-13T00:52:22.428296023Z" level=info msg="StartContainer for \"e6806d327179590052276b030d4a24d553e96b672994df218ff1e737f6e9826f\""
Mar 13 00:52:22.429428 containerd[1556]: time="2026-03-13T00:52:22.429404206Z" level=info msg="connecting to shim e6806d327179590052276b030d4a24d553e96b672994df218ff1e737f6e9826f" address="unix:///run/containerd/s/1c6da431a3af480cb0428c3f851ca4be22faac25f1d8e527968719c9aa7e945e" protocol=ttrpc version=3
Mar 13 00:52:22.466393 systemd[1]: Started cri-containerd-e6806d327179590052276b030d4a24d553e96b672994df218ff1e737f6e9826f.scope - libcontainer container e6806d327179590052276b030d4a24d553e96b672994df218ff1e737f6e9826f.
Mar 13 00:52:22.539697 containerd[1556]: time="2026-03-13T00:52:22.539616967Z" level=info msg="StartContainer for \"e6806d327179590052276b030d4a24d553e96b672994df218ff1e737f6e9826f\" returns successfully"
Mar 13 00:52:22.803706 update_engine[1548]: I20260313 00:52:22.803267 1548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 13 00:52:22.803706 update_engine[1548]: I20260313 00:52:22.803406 1548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 13 00:52:22.804256 update_engine[1548]: I20260313 00:52:22.803754 1548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 13 00:52:22.820908 update_engine[1548]: E20260313 00:52:22.820832 1548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 13 00:52:22.820989 update_engine[1548]: I20260313 00:52:22.820943 1548 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 13 00:52:22.820989 update_engine[1548]: I20260313 00:52:22.820957 1548 omaha_request_action.cc:617] Omaha request response:
Mar 13 00:52:22.821093 update_engine[1548]: E20260313 00:52:22.821035 1548 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 13 00:52:22.821093 update_engine[1548]: I20260313 00:52:22.821059 1548 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 13 00:52:22.821093 update_engine[1548]: I20260313 00:52:22.821066 1548 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 13 00:52:22.821093 update_engine[1548]: I20260313 00:52:22.821074 1548 update_attempter.cc:306] Processing Done.
Mar 13 00:52:22.821093 update_engine[1548]: E20260313 00:52:22.821089 1548 update_attempter.cc:619] Update failed.
Mar 13 00:52:22.821269 update_engine[1548]: I20260313 00:52:22.821097 1548 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 13 00:52:22.821269 update_engine[1548]: I20260313 00:52:22.821103 1548 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 13 00:52:22.821269 update_engine[1548]: I20260313 00:52:22.821111 1548 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 13 00:52:22.821269 update_engine[1548]: I20260313 00:52:22.821251 1548 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 13 00:52:22.821386 update_engine[1548]: I20260313 00:52:22.821276 1548 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 13 00:52:22.821386 update_engine[1548]: I20260313 00:52:22.821285 1548 omaha_request_action.cc:272] Request:
Mar 13 00:52:22.821386 update_engine[1548]:
Mar 13 00:52:22.821386 update_engine[1548]:
Mar 13 00:52:22.821386 update_engine[1548]:
Mar 13 00:52:22.821386 update_engine[1548]:
Mar 13 00:52:22.821386 update_engine[1548]:
Mar 13 00:52:22.821386 update_engine[1548]:
Mar 13 00:52:22.821386 update_engine[1548]: I20260313 00:52:22.821292 1548 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 13 00:52:22.821386 update_engine[1548]: I20260313 00:52:22.821358 1548 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 13 00:52:22.821876 update_engine[1548]: I20260313 00:52:22.821674 1548 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 13 00:52:22.821922 locksmithd[1579]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 13 00:52:22.839510 update_engine[1548]: E20260313 00:52:22.839435 1548 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 13 00:52:22.839856 update_engine[1548]: I20260313 00:52:22.839524 1548 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 13 00:52:22.839856 update_engine[1548]: I20260313 00:52:22.839535 1548 omaha_request_action.cc:617] Omaha request response:
Mar 13 00:52:22.839856 update_engine[1548]: I20260313 00:52:22.839545 1548 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 13 00:52:22.839856 update_engine[1548]: I20260313 00:52:22.839551 1548 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 13 00:52:22.839856 update_engine[1548]: I20260313 00:52:22.839557 1548 update_attempter.cc:306] Processing Done.
Mar 13 00:52:22.839856 update_engine[1548]: I20260313 00:52:22.839567 1548 update_attempter.cc:310] Error event sent.
Mar 13 00:52:22.839856 update_engine[1548]: I20260313 00:52:22.839607 1548 update_check_scheduler.cc:74] Next update check in 42m29s
Mar 13 00:52:22.840274 locksmithd[1579]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 13 00:52:23.024233 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 13 00:52:23.177120 kubelet[2781]: E0313 00:52:23.176496 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wbn8b" podUID="b14dcf3c-e8d9-43f9-a17a-cc5f6f5dd9b3"
Mar 13 00:52:23.177120 kubelet[2781]: E0313 00:52:23.176995 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:52:23.401239 kubelet[2781]: E0313 00:52:23.401117 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:52:23.416978 kubelet[2781]: I0313 00:52:23.416577 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j6v97" podStartSLOduration=6.416562574 podStartE2EDuration="6.416562574s" podCreationTimestamp="2026-03-13 00:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:52:23.415686312 +0000 UTC m=+103.472771676" watchObservedRunningTime="2026-03-13 00:52:23.416562574 +0000 UTC m=+103.473647937"
Mar 13 00:52:23.841136 kubelet[2781]: I0313 00:52:23.840989 2781 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T00:52:23Z","lastTransitionTime":"2026-03-13T00:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 13 00:52:24.404738 kubelet[2781]: E0313 00:52:24.404678 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:52:25.175991 kubelet[2781]: E0313 00:52:25.175866 2781 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wbn8b" podUID="b14dcf3c-e8d9-43f9-a17a-cc5f6f5dd9b3"
Mar 13 00:52:26.452023 systemd-networkd[1456]: lxc_health: Link UP
Mar 13 00:52:26.454599 systemd-networkd[1456]: lxc_health: Gained carrier
Mar 13 00:52:27.177635 kubelet[2781]: E0313 00:52:27.177495 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:52:27.858558 systemd-networkd[1456]: lxc_health: Gained IPv6LL
Mar 13 00:52:28.297607 kubelet[2781]: E0313 00:52:28.297295 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:52:28.417253 kubelet[2781]: E0313 00:52:28.417119 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:52:29.420561 kubelet[2781]: E0313 00:52:29.420449 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:52:33.024893 sshd[4560]: Connection closed by 10.0.0.1 port 51222
Mar 13 00:52:33.025544 sshd-session[4553]: pam_unix(sshd:session): session closed for user core
Mar 13 00:52:33.030895 systemd[1]: sshd@27-10.0.0.136:22-10.0.0.1:51222.service: Deactivated successfully.
Mar 13 00:52:33.033452 systemd[1]: session-28.scope: Deactivated successfully.
Mar 13 00:52:33.034884 systemd-logind[1542]: Session 28 logged out. Waiting for processes to exit.
Mar 13 00:52:33.037005 systemd-logind[1542]: Removed session 28.