Jul 6 23:53:38.894150 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:53:38.894171 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:53:38.894183 kernel: BIOS-provided physical RAM map:
Jul 6 23:53:38.894189 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 6 23:53:38.894195 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 6 23:53:38.894202 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:53:38.894209 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 6 23:53:38.894215 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 6 23:53:38.894222 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 6 23:53:38.894230 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 6 23:53:38.894237 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:53:38.894243 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:53:38.894253 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 6 23:53:38.894260 kernel: NX (Execute Disable) protection: active
Jul 6 23:53:38.894268 kernel: APIC: Static calls initialized
Jul 6 23:53:38.894280 kernel: SMBIOS 2.8 present.
Jul 6 23:53:38.894287 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 6 23:53:38.894294 kernel: Hypervisor detected: KVM
Jul 6 23:53:38.894301 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:53:38.894308 kernel: kvm-clock: using sched offset of 3067665598 cycles
Jul 6 23:53:38.894315 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:53:38.894322 kernel: tsc: Detected 2794.748 MHz processor
Jul 6 23:53:38.894329 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:53:38.894336 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:53:38.894346 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 6 23:53:38.894353 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:53:38.894360 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:53:38.894367 kernel: Using GB pages for direct mapping
Jul 6 23:53:38.894374 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:53:38.894381 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 6 23:53:38.894388 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:53:38.894395 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:53:38.894402 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:53:38.894411 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 6 23:53:38.894418 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:53:38.894425 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:53:38.894432 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:53:38.894439 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:53:38.894445 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 6 23:53:38.894453 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 6 23:53:38.894463 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 6 23:53:38.894473 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 6 23:53:38.894480 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 6 23:53:38.894488 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 6 23:53:38.894495 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 6 23:53:38.894502 kernel: No NUMA configuration found
Jul 6 23:53:38.894509 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 6 23:53:38.894519 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 6 23:53:38.894526 kernel: Zone ranges:
Jul 6 23:53:38.894533 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:53:38.894540 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 6 23:53:38.894547 kernel: Normal empty
Jul 6 23:53:38.894554 kernel: Movable zone start for each node
Jul 6 23:53:38.894562 kernel: Early memory node ranges
Jul 6 23:53:38.894569 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:53:38.894576 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 6 23:53:38.894583 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 6 23:53:38.894593 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:53:38.894602 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:53:38.894609 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 6 23:53:38.894617 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:53:38.894624 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:53:38.894631 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:53:38.894638 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:53:38.894645 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:53:38.894652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:53:38.894662 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:53:38.894670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:53:38.894677 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:53:38.894684 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:53:38.894691 kernel: TSC deadline timer available
Jul 6 23:53:38.894699 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 6 23:53:38.894714 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:53:38.894722 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 6 23:53:38.894731 kernel: kvm-guest: setup PV sched yield
Jul 6 23:53:38.894741 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 6 23:53:38.894749 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:53:38.894756 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:53:38.894764 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 6 23:53:38.894771 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 6 23:53:38.894778 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 6 23:53:38.894785 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 6 23:53:38.894792 kernel: kvm-guest: PV spinlocks enabled
Jul 6 23:53:38.894800 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:53:38.894811 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:53:38.894818 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:53:38.894826 kernel: random: crng init done
Jul 6 23:53:38.894833 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:53:38.894840 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:53:38.894847 kernel: Fallback order for Node 0: 0
Jul 6 23:53:38.894854 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 6 23:53:38.894861 kernel: Policy zone: DMA32
Jul 6 23:53:38.894971 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:53:38.894979 kernel: Memory: 2434584K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 136908K reserved, 0K cma-reserved)
Jul 6 23:53:38.894987 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:53:38.894994 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:53:38.895001 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:53:38.895009 kernel: Dynamic Preempt: voluntary
Jul 6 23:53:38.895016 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:53:38.895024 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:53:38.895031 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:53:38.895042 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:53:38.895050 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:53:38.895057 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:53:38.895064 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:53:38.895075 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:53:38.895082 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 6 23:53:38.895089 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:53:38.895096 kernel: Console: colour VGA+ 80x25
Jul 6 23:53:38.895104 kernel: printk: console [ttyS0] enabled
Jul 6 23:53:38.895113 kernel: ACPI: Core revision 20230628
Jul 6 23:53:38.895121 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:53:38.895128 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:53:38.895135 kernel: x2apic enabled
Jul 6 23:53:38.895142 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:53:38.895149 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 6 23:53:38.895157 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 6 23:53:38.895165 kernel: kvm-guest: setup PV IPIs
Jul 6 23:53:38.895183 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:53:38.895191 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 6 23:53:38.895198 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 6 23:53:38.895206 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 6 23:53:38.895216 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 6 23:53:38.895224 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 6 23:53:38.895231 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:53:38.895239 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:53:38.895247 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:53:38.895257 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 6 23:53:38.895265 kernel: RETBleed: Mitigation: untrained return thunk
Jul 6 23:53:38.895275 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:53:38.895283 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:53:38.895290 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 6 23:53:38.895298 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 6 23:53:38.895306 kernel: x86/bugs: return thunk changed
Jul 6 23:53:38.895313 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 6 23:53:38.895323 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:53:38.895331 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:53:38.895339 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:53:38.895346 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:53:38.895354 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 6 23:53:38.895362 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:53:38.895370 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:53:38.895377 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:53:38.895385 kernel: landlock: Up and running.
Jul 6 23:53:38.895395 kernel: SELinux: Initializing.
Jul 6 23:53:38.895402 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:53:38.895410 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:53:38.895418 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 6 23:53:38.895425 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:53:38.895433 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:53:38.895440 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:53:38.895448 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 6 23:53:38.895457 kernel: ... version: 0
Jul 6 23:53:38.895468 kernel: ... bit width: 48
Jul 6 23:53:38.895475 kernel: ... generic registers: 6
Jul 6 23:53:38.895483 kernel: ... value mask: 0000ffffffffffff
Jul 6 23:53:38.895491 kernel: ... max period: 00007fffffffffff
Jul 6 23:53:38.895498 kernel: ... fixed-purpose events: 0
Jul 6 23:53:38.895506 kernel: ... event mask: 000000000000003f
Jul 6 23:53:38.895513 kernel: signal: max sigframe size: 1776
Jul 6 23:53:38.895521 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:53:38.895529 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:53:38.895539 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:53:38.895546 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:53:38.895554 kernel: .... node #0, CPUs: #1 #2 #3
Jul 6 23:53:38.895562 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:53:38.895569 kernel: smpboot: Max logical packages: 1
Jul 6 23:53:38.895577 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 6 23:53:38.895584 kernel: devtmpfs: initialized
Jul 6 23:53:38.895592 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:53:38.895600 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:53:38.895610 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:53:38.895618 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:53:38.895625 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:53:38.895633 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:53:38.895641 kernel: audit: type=2000 audit(1751846018.361:1): state=initialized audit_enabled=0 res=1
Jul 6 23:53:38.895648 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:53:38.895656 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:53:38.895663 kernel: cpuidle: using governor menu
Jul 6 23:53:38.895671 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:53:38.895682 kernel: dca service started, version 1.12.1
Jul 6 23:53:38.895689 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 6 23:53:38.895697 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 6 23:53:38.895705 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:53:38.895720 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:53:38.895728 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:53:38.895735 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:53:38.895743 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:53:38.895751 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:53:38.895761 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:53:38.895768 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:53:38.895777 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:53:38.895784 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:53:38.895791 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:53:38.895799 kernel: ACPI: Interpreter enabled
Jul 6 23:53:38.895806 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 6 23:53:38.895814 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:53:38.895821 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:53:38.895832 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:53:38.895839 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 6 23:53:38.895847 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:53:38.896073 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:53:38.896220 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 6 23:53:38.896352 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 6 23:53:38.896362 kernel: PCI host bridge to bus 0000:00
Jul 6 23:53:38.896512 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:53:38.896632 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:53:38.896760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:53:38.896893 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 6 23:53:38.897014 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 6 23:53:38.897131 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 6 23:53:38.897247 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:53:38.897411 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 6 23:53:38.897563 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 6 23:53:38.897691 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 6 23:53:38.897827 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 6 23:53:38.898004 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 6 23:53:38.898137 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:53:38.898295 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 6 23:53:38.898434 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 6 23:53:38.898566 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 6 23:53:38.898694 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 6 23:53:38.898854 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:53:38.899004 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 6 23:53:38.899135 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 6 23:53:38.899270 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 6 23:53:38.899430 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:53:38.899561 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 6 23:53:38.899689 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 6 23:53:38.899837 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 6 23:53:38.899988 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 6 23:53:38.900132 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 6 23:53:38.900268 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 6 23:53:38.900416 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 6 23:53:38.900545 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 6 23:53:38.900677 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 6 23:53:38.900831 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 6 23:53:38.900985 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 6 23:53:38.900997 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:53:38.901010 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:53:38.901018 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:53:38.901026 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:53:38.901033 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 6 23:53:38.901041 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 6 23:53:38.901051 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 6 23:53:38.901062 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 6 23:53:38.901071 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 6 23:53:38.901081 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 6 23:53:38.901095 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 6 23:53:38.901105 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 6 23:53:38.901113 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 6 23:53:38.901121 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 6 23:53:38.901128 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 6 23:53:38.901136 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 6 23:53:38.901144 kernel: iommu: Default domain type: Translated
Jul 6 23:53:38.901151 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:53:38.901159 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:53:38.901170 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:53:38.901177 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 6 23:53:38.901185 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 6 23:53:38.901318 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 6 23:53:38.901446 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 6 23:53:38.901573 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:53:38.901584 kernel: vgaarb: loaded
Jul 6 23:53:38.901591 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:53:38.901604 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:53:38.901611 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:53:38.901619 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:53:38.901628 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:53:38.901636 kernel: pnp: PnP ACPI init
Jul 6 23:53:38.901799 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 6 23:53:38.901812 kernel: pnp: PnP ACPI: found 6 devices
Jul 6 23:53:38.901820 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:53:38.901832 kernel: NET: Registered PF_INET protocol family
Jul 6 23:53:38.901839 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:53:38.901847 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:53:38.901855 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:53:38.901863 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:53:38.901885 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:53:38.901892 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:53:38.901900 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:53:38.901908 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:53:38.901919 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:53:38.901927 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:53:38.902049 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:53:38.902169 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:53:38.902285 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:53:38.902400 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 6 23:53:38.902540 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 6 23:53:38.902780 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 6 23:53:38.902798 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:53:38.902806 kernel: Initialise system trusted keyrings
Jul 6 23:53:38.902814 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:53:38.902822 kernel: Key type asymmetric registered
Jul 6 23:53:38.902830 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:53:38.902838 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:53:38.902846 kernel: io scheduler mq-deadline registered
Jul 6 23:53:38.902853 kernel: io scheduler kyber registered
Jul 6 23:53:38.902861 kernel: io scheduler bfq registered
Jul 6 23:53:38.902940 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:53:38.902953 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 6 23:53:38.902962 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 6 23:53:38.902969 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 6 23:53:38.902977 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:53:38.902985 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:53:38.902993 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:53:38.903001 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:53:38.903008 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:53:38.903150 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 6 23:53:38.903166 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:53:38.903286 kernel: rtc_cmos 00:04: registered as rtc0
Jul 6 23:53:38.903405 kernel: rtc_cmos 00:04: setting system clock to 2025-07-06T23:53:38 UTC (1751846018)
Jul 6 23:53:38.903523 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 6 23:53:38.903533 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 6 23:53:38.903541 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:53:38.903549 kernel: Segment Routing with IPv6
Jul 6 23:53:38.903561 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:53:38.903569 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:53:38.903576 kernel: Key type dns_resolver registered
Jul 6 23:53:38.903584 kernel: IPI shorthand broadcast: enabled
Jul 6 23:53:38.903592 kernel: sched_clock: Marking stable (813003557, 102027390)->(942439661, -27408714)
Jul 6 23:53:38.903600 kernel: registered taskstats version 1
Jul 6 23:53:38.903607 kernel: Loading compiled-in X.509 certificates
Jul 6 23:53:38.903615 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:53:38.903622 kernel: Key type .fscrypt registered
Jul 6 23:53:38.903630 kernel: Key type fscrypt-provisioning registered
Jul 6 23:53:38.903641 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:53:38.903649 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:53:38.903656 kernel: ima: No architecture policies found
Jul 6 23:53:38.903664 kernel: clk: Disabling unused clocks
Jul 6 23:53:38.903671 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:53:38.903679 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:53:38.903687 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:53:38.903695 kernel: Run /init as init process
Jul 6 23:53:38.903713 kernel: with arguments:
Jul 6 23:53:38.903721 kernel: /init
Jul 6 23:53:38.903729 kernel: with environment:
Jul 6 23:53:38.903736 kernel: HOME=/
Jul 6 23:53:38.903744 kernel: TERM=linux
Jul 6 23:53:38.903751 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:53:38.903761 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:53:38.903772 systemd[1]: Detected virtualization kvm.
Jul 6 23:53:38.903784 systemd[1]: Detected architecture x86-64.
Jul 6 23:53:38.903792 systemd[1]: Running in initrd.
Jul 6 23:53:38.903800 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:53:38.903809 systemd[1]: Hostname set to .
Jul 6 23:53:38.903817 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:53:38.903826 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:53:38.903834 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:53:38.903843 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:53:38.903855 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:53:38.903863 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:53:38.903898 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:53:38.903909 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:53:38.903920 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:53:38.903931 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:53:38.903940 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:53:38.903948 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:53:38.903956 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:53:38.903969 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:53:38.903984 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:53:38.904003 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:53:38.904023 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:53:38.904046 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:53:38.904066 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:53:38.904081 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:53:38.904097 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:53:38.904114 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:53:38.904129 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:53:38.904149 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:53:38.904161 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:53:38.904170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:53:38.904182 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:53:38.904190 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:53:38.904199 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:53:38.904207 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:53:38.904216 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:53:38.904224 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:53:38.904232 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:53:38.904241 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:53:38.904274 systemd-journald[193]: Collecting audit messages is disabled.
Jul 6 23:53:38.904297 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:53:38.904306 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:53:38.904315 systemd-journald[193]: Journal started
Jul 6 23:53:38.904338 systemd-journald[193]: Runtime Journal (/run/log/journal/937dd5ec89fc46ba88ef5f6edb626dfe) is 6.0M, max 48.4M, 42.3M free.
Jul 6 23:53:38.904463 systemd-modules-load[194]: Inserted module 'overlay'
Jul 6 23:53:38.933563 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:53:38.934893 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:53:38.937579 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:53:38.939834 kernel: Bridge firewalling registered
Jul 6 23:53:38.938770 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 6 23:53:38.949013 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:53:38.949856 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:53:38.953757 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:53:38.954101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:53:38.956660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:53:38.967101 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:53:38.969270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:53:38.974467 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:53:38.977236 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:53:38.985001 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:53:38.987251 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:53:38.997650 dracut-cmdline[228]: dracut-dracut-053
Jul 6 23:53:39.005787 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:53:39.040217 systemd-resolved[230]: Positive Trust Anchors:
Jul 6 23:53:39.040233 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:53:39.040264 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:53:39.042960 systemd-resolved[230]: Defaulting to hostname 'linux'.
Jul 6 23:53:39.044450 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:53:39.050437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:53:39.106922 kernel: SCSI subsystem initialized
Jul 6 23:53:39.117908 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:53:39.128898 kernel: iscsi: registered transport (tcp)
Jul 6 23:53:39.151909 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:53:39.151944 kernel: QLogic iSCSI HBA Driver
Jul 6 23:53:39.207935 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:53:39.215014 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:53:39.240488 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:53:39.240552 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:53:39.240565 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:53:39.283928 kernel: raid6: avx2x4 gen() 28645 MB/s
Jul 6 23:53:39.300919 kernel: raid6: avx2x2 gen() 29195 MB/s
Jul 6 23:53:39.317961 kernel: raid6: avx2x1 gen() 25999 MB/s
Jul 6 23:53:39.318061 kernel: raid6: using algorithm avx2x2 gen() 29195 MB/s
Jul 6 23:53:39.335938 kernel: raid6: .... xor() 19973 MB/s, rmw enabled
Jul 6 23:53:39.335989 kernel: raid6: using avx2x2 recovery algorithm
Jul 6 23:53:39.356915 kernel: xor: automatically using best checksumming function avx
Jul 6 23:53:39.512926 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:53:39.529076 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:53:39.543095 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:53:39.555368 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jul 6 23:53:39.560256 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:53:39.575041 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:53:39.592122 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Jul 6 23:53:39.627292 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:53:39.642135 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:53:39.707497 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:53:39.715070 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:53:39.732790 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:53:39.735207 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:53:39.738601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:53:39.741212 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:53:39.751929 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 6 23:53:39.752141 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:53:39.762003 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:53:39.765156 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 6 23:53:39.773724 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:53:39.780906 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:53:39.780934 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:53:39.781549 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:53:39.781774 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:53:39.784466 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:53:39.795651 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:53:39.795668 kernel: GPT:9289727 != 19775487
Jul 6 23:53:39.795678 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:53:39.795699 kernel: GPT:9289727 != 19775487
Jul 6 23:53:39.795714 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:53:39.795724 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:53:39.785798 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:53:39.797332 kernel: libata version 3.00 loaded.
Jul 6 23:53:39.786257 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:53:39.793317 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:53:39.800127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:53:39.805887 kernel: ahci 0000:00:1f.2: version 3.0
Jul 6 23:53:39.806087 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 6 23:53:39.809695 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 6 23:53:39.810942 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 6 23:53:39.817972 kernel: scsi host0: ahci
Jul 6 23:53:39.818208 kernel: scsi host1: ahci
Jul 6 23:53:39.819892 kernel: scsi host2: ahci
Jul 6 23:53:39.822908 kernel: scsi host3: ahci
Jul 6 23:53:39.823928 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (461)
Jul 6 23:53:39.826893 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (479)
Jul 6 23:53:39.830921 kernel: scsi host4: ahci
Jul 6 23:53:39.831112 kernel: scsi host5: ahci
Jul 6 23:53:39.831267 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jul 6 23:53:39.831279 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jul 6 23:53:39.831290 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jul 6 23:53:39.831300 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jul 6 23:53:39.831310 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jul 6 23:53:39.831326 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jul 6 23:53:39.831962 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:53:39.863616 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:53:39.865098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:53:39.879116 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:53:39.880344 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:53:39.888365 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:53:39.902010 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:53:39.903830 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:53:39.912892 disk-uuid[560]: Primary Header is updated.
Jul 6 23:53:39.912892 disk-uuid[560]: Secondary Entries is updated.
Jul 6 23:53:39.912892 disk-uuid[560]: Secondary Header is updated.
Jul 6 23:53:39.917905 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:53:39.921905 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:53:39.926280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:53:40.141899 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 6 23:53:40.141960 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 6 23:53:40.142894 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 6 23:53:40.142915 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 6 23:53:40.143902 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 6 23:53:40.144916 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 6 23:53:40.145000 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 6 23:53:40.145979 kernel: ata3.00: applying bridge limits
Jul 6 23:53:40.146896 kernel: ata3.00: configured for UDMA/100
Jul 6 23:53:40.146908 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 6 23:53:40.202911 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 6 23:53:40.203199 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:53:40.220896 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 6 23:53:40.923745 disk-uuid[561]: The operation has completed successfully.
Jul 6 23:53:40.925083 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:53:40.953349 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:53:40.953480 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:53:40.974040 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:53:40.979924 sh[592]: Success
Jul 6 23:53:40.993364 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 6 23:53:41.029368 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:53:41.041427 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:53:41.044532 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:53:41.059329 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:53:41.059358 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:53:41.059369 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:53:41.060297 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:53:41.061007 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:53:41.065765 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:53:41.068079 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:53:41.085016 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:53:41.085780 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:53:41.099548 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:53:41.099579 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:53:41.099590 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:53:41.102908 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:53:41.112573 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:53:41.114277 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:53:41.124094 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:53:41.132063 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:53:41.185521 ignition[688]: Ignition 2.19.0
Jul 6 23:53:41.185532 ignition[688]: Stage: fetch-offline
Jul 6 23:53:41.185570 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:53:41.185580 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:53:41.185673 ignition[688]: parsed url from cmdline: ""
Jul 6 23:53:41.185677 ignition[688]: no config URL provided
Jul 6 23:53:41.185682 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:53:41.185691 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:53:41.185726 ignition[688]: op(1): [started] loading QEMU firmware config module
Jul 6 23:53:41.185732 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 6 23:53:41.193899 ignition[688]: op(1): [finished] loading QEMU firmware config module
Jul 6 23:53:41.193929 ignition[688]: QEMU firmware config was not found. Ignoring...
Jul 6 23:53:41.221948 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:53:41.229119 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:53:41.236788 ignition[688]: parsing config with SHA512: 20471237ce2c313275f8e1315048e383b9e2f3781f57923782f72bed3d3b379da7e3cc7a72446a19b6a04b1afd454828007b734c3cc4bcc9009e43c2c042a2c2
Jul 6 23:53:41.240985 unknown[688]: fetched base config from "system"
Jul 6 23:53:41.241107 unknown[688]: fetched user config from "qemu"
Jul 6 23:53:41.241692 ignition[688]: fetch-offline: fetch-offline passed
Jul 6 23:53:41.241770 ignition[688]: Ignition finished successfully
Jul 6 23:53:41.246293 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:53:41.252555 systemd-networkd[780]: lo: Link UP
Jul 6 23:53:41.252567 systemd-networkd[780]: lo: Gained carrier
Jul 6 23:53:41.254211 systemd-networkd[780]: Enumeration completed
Jul 6 23:53:41.254372 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:53:41.254612 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:53:41.254616 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:53:41.255724 systemd-networkd[780]: eth0: Link UP
Jul 6 23:53:41.255728 systemd-networkd[780]: eth0: Gained carrier
Jul 6 23:53:41.255735 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:53:41.256610 systemd[1]: Reached target network.target - Network.
Jul 6 23:53:41.258338 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 6 23:53:41.268015 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:53:41.279793 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:53:41.284505 ignition[783]: Ignition 2.19.0
Jul 6 23:53:41.284517 ignition[783]: Stage: kargs
Jul 6 23:53:41.284743 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:53:41.284756 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:53:41.285856 ignition[783]: kargs: kargs passed
Jul 6 23:53:41.285947 ignition[783]: Ignition finished successfully
Jul 6 23:53:41.290180 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:53:41.299047 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:53:41.314789 ignition[791]: Ignition 2.19.0
Jul 6 23:53:41.314801 ignition[791]: Stage: disks
Jul 6 23:53:41.314994 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:53:41.315007 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:53:41.318698 ignition[791]: disks: disks passed
Jul 6 23:53:41.318754 ignition[791]: Ignition finished successfully
Jul 6 23:53:41.322269 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:53:41.323504 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:53:41.325286 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:53:41.326505 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:53:41.328439 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:53:41.328496 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:53:41.340096 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:53:41.352808 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:53:41.359097 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:53:41.364965 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:53:41.451894 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:53:41.452758 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:53:41.455054 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:53:41.467963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:53:41.469709 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:53:41.470046 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:53:41.470095 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:53:41.477884 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Jul 6 23:53:41.477911 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:53:41.470121 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:53:41.481469 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:53:41.481496 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:53:41.482895 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:53:41.484925 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:53:41.489842 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:53:41.491671 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:53:41.532548 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:53:41.539202 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:53:41.545470 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:53:41.552001 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:53:41.652191 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:53:41.665968 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:53:41.668267 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:53:41.675909 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:53:41.699024 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:53:41.705339 ignition[923]: INFO : Ignition 2.19.0
Jul 6 23:53:41.705339 ignition[923]: INFO : Stage: mount
Jul 6 23:53:41.706978 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:53:41.706978 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:53:41.709765 ignition[923]: INFO : mount: mount passed
Jul 6 23:53:41.710587 ignition[923]: INFO : Ignition finished successfully
Jul 6 23:53:41.713764 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:53:41.725959 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:53:42.058971 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:53:42.070214 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:53:42.077900 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (936)
Jul 6 23:53:42.077973 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:53:42.079576 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:53:42.079600 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:53:42.082906 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:53:42.084398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:53:42.163151 ignition[953]: INFO : Ignition 2.19.0
Jul 6 23:53:42.163151 ignition[953]: INFO : Stage: files
Jul 6 23:53:42.165002 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:53:42.165002 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:53:42.165002 ignition[953]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:53:42.168276 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:53:42.168276 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:53:42.171584 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:53:42.172891 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:53:42.174603 unknown[953]: wrote ssh authorized keys file for user: core
Jul 6 23:53:42.175679 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:53:42.177992 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 6 23:53:42.179712 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 6 23:53:42.179712 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:53:42.179712 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 6 23:53:42.215792 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:53:42.307901 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:53:42.309979 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:53:42.309979 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 6 23:53:42.650753 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 6 23:53:42.775948 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:53:42.777740 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 6 23:53:43.079078 systemd-networkd[780]: eth0: Gained IPv6LL
Jul 6 23:53:43.291901 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 6 23:53:43.953154 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:53:43.953154 ignition[953]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 6 23:53:43.957443 ignition[953]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:53:44.067088 ignition[953]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:53:44.073820 ignition[953]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:53:44.075435 ignition[953]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:53:44.075435 ignition[953]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:53:44.075435 ignition[953]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:53:44.075435 ignition[953]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:53:44.075435 ignition[953]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:53:44.075435 ignition[953]: INFO : files: files passed
Jul 6 23:53:44.075435 ignition[953]: INFO : Ignition finished successfully
Jul 6 23:53:44.076761 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:53:44.096037 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:53:44.098830 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:53:44.100680 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:53:44.100791 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:53:44.108726 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 6 23:53:44.111559 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:53:44.113240 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:53:44.115973 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:53:44.114330 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:53:44.116162 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:53:44.129111 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:53:44.156663 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:53:44.156811 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:53:44.159098 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:53:44.161117 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:53:44.163122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:53:44.172014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:53:44.187221 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:53:44.195114 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:53:44.204180 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:53:44.206439 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:53:44.208747 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:53:44.210524 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:53:44.211510 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:53:44.214023 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:53:44.216025 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:53:44.217799 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:53:44.219935 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:53:44.222175 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:53:44.224347 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:53:44.226366 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:53:44.228771 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:53:44.230782 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:53:44.232756 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:53:44.234346 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:53:44.235341 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:53:44.237569 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:53:44.239686 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:53:44.241953 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:53:44.242892 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:53:44.245391 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:53:44.246369 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:53:44.248549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:53:44.249613 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:53:44.251919 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:53:44.253622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:53:44.257942 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:53:44.258140 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:53:44.260564 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:53:44.262185 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:53:44.262291 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:53:44.263846 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:53:44.263958 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:53:44.264371 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:53:44.264528 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:53:44.267123 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:53:44.267233 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:53:44.280007 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:53:44.280783 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:53:44.281819 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
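[Annotation] The long run of "Stopped target ..." / "Stopped ...service" entries above is the initrd being torn down in roughly the reverse of its start-up dependency order. A toy illustration of that idea with a hypothetical dependency map (not real systemd data): start order is a topological sort, stop order is its reverse:

```python
from graphlib import TopologicalSorter

deps = {  # unit -> units it requires (illustrative only)
    "initrd.target": ["basic.target"],
    "basic.target": ["sysinit.target", "sockets.target"],
    "sysinit.target": ["local-fs.target"],
}
start_order = list(TopologicalSorter(deps).static_order())
print("start:", start_order)
print("stop: ", start_order[::-1])  # dependents stop before their dependencies' providers
```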
Jul 6 23:53:44.281952 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:53:44.282388 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:53:44.282487 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:53:44.292845 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:53:44.293832 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:53:44.307915 ignition[1008]: INFO : Ignition 2.19.0 Jul 6 23:53:44.307915 ignition[1008]: INFO : Stage: umount Jul 6 23:53:44.307915 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:53:44.307915 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:53:44.311834 ignition[1008]: INFO : umount: umount passed Jul 6 23:53:44.311834 ignition[1008]: INFO : Ignition finished successfully Jul 6 23:53:44.310716 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:53:44.310858 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:53:44.314147 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:53:44.315027 systemd[1]: Stopped target network.target - Network. Jul 6 23:53:44.316512 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:53:44.316591 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:53:44.318528 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:53:44.318590 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:53:44.320539 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:53:44.320602 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:53:44.322402 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:53:44.322462 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:53:44.324587 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:53:44.326385 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:53:44.330922 systemd-networkd[780]: eth0: DHCPv6 lease lost Jul 6 23:53:44.333110 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:53:44.333283 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:53:44.334990 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:53:44.335038 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:53:44.343997 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:53:44.346117 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:53:44.346211 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:53:44.348665 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:53:44.351264 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:53:44.351585 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:53:44.362330 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:53:44.362537 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:53:44.365239 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:53:44.365324 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jul 6 23:53:44.366834 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:53:44.366889 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:53:44.368773 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:53:44.368827 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:53:44.372032 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:53:44.372093 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:53:44.372698 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:53:44.372749 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:53:44.374321 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:53:44.378890 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:53:44.378947 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:53:44.379393 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:53:44.379444 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:53:44.379717 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:53:44.379763 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:53:44.380210 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:53:44.380261 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:53:44.380528 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:53:44.380584 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:53:44.380903 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:53:44.380948 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:53:44.381416 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:53:44.381460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:53:44.382179 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:53:44.382296 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:53:44.395008 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:53:44.395112 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:53:44.514079 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:53:44.514229 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:53:44.516504 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:53:44.517158 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:53:44.517212 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:53:44.536018 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:53:44.542749 systemd[1]: Switching root. Jul 6 23:53:44.574273 systemd-journald[193]: Journal stopped Jul 6 23:53:45.776072 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Jul 6 23:53:45.776155 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:53:45.776177 kernel: SELinux: policy capability open_perms=1 Jul 6 23:53:45.776193 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:53:45.776221 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:53:45.776233 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:53:45.776245 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:53:45.776262 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:53:45.776273 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:53:45.776286 kernel: audit: type=1403 audit(1751846025.014:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:53:45.776298 systemd[1]: Successfully loaded SELinux policy in 44.640ms. Jul 6 23:53:45.776324 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.907ms. Jul 6 23:53:45.776342 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:53:45.776360 systemd[1]: Detected virtualization kvm. Jul 6 23:53:45.776373 systemd[1]: Detected architecture x86-64. Jul 6 23:53:45.776385 systemd[1]: Detected first boot. Jul 6 23:53:45.776397 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:53:45.776409 zram_generator::config[1069]: No configuration found. Jul 6 23:53:45.776422 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:53:45.776434 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:53:45.776446 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 6 23:53:45.776465 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:53:45.776478 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:53:45.776490 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:53:45.776502 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:53:45.776515 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:53:45.776527 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:53:45.776546 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:53:45.776560 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:53:45.776573 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:53:45.776592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:53:45.776605 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:53:45.776617 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:53:45.776630 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:53:45.776642 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:53:45.776653 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
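[Annotation] "Initializing machine ID from VM UUID" above means systemd derives /etc/machine-id from the hypervisor-provided DMI UUID on this first KVM boot. A sketch of the idea; the sysfs path is the standard DMI location, but the exact normalization systemd applies is an assumption here:

```python
# Read the VM's DMI product UUID and normalize it toward the
# 32-lowercase-hex-digit /etc/machine-id format (normalization assumed).
uuid = open("/sys/class/dmi/id/product_uuid").read().strip()
machine_id = uuid.replace("-", "").lower()
print(machine_id)
```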
Jul 6 23:53:45.776665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:53:45.776677 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:53:45.776690 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:53:45.776707 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:53:45.776720 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:53:45.776732 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:53:45.776744 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:53:45.776756 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:53:45.776768 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:53:45.776808 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 6 23:53:45.776820 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:53:45.776838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:53:45.776850 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:53:45.776864 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:53:45.776892 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:53:45.776904 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:53:45.776916 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:53:45.776929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:53:45.776941 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:53:45.776953 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:53:45.776968 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:53:45.776981 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:53:45.776993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:53:45.777005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:53:45.777017 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:53:45.777029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:53:45.777041 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:53:45.777053 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:53:45.777065 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:53:45.777080 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:53:45.777093 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:53:45.777105 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 6 23:53:45.777118 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 6 23:53:45.777130 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jul 6 23:53:45.777144 kernel: fuse: init (API version 7.39) Jul 6 23:53:45.777156 kernel: loop: module loaded Jul 6 23:53:45.777168 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:53:45.777183 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:53:45.777195 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:53:45.777228 systemd-journald[1154]: Collecting audit messages is disabled. Jul 6 23:53:45.777250 kernel: ACPI: bus type drm_connector registered Jul 6 23:53:45.777262 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:53:45.777275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:53:45.777287 systemd-journald[1154]: Journal started Jul 6 23:53:45.777312 systemd-journald[1154]: Runtime Journal (/run/log/journal/937dd5ec89fc46ba88ef5f6edb626dfe) is 6.0M, max 48.4M, 42.3M free. Jul 6 23:53:45.783489 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:53:45.784901 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:53:45.786128 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:53:45.787788 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:53:45.788933 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:53:45.790249 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:53:45.791471 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:53:45.792924 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:53:45.794557 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:53:45.796132 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:53:45.796361 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:53:45.797911 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:53:45.798133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:53:45.799689 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:53:45.799922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:53:45.801500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:53:45.801753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:53:45.803351 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:53:45.803579 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:53:45.805007 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:53:45.805240 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:53:45.807228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:53:45.808939 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:53:45.810931 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:53:45.827374 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:53:45.834950 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jul 6 23:53:45.837360 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:53:45.838529 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:53:45.842038 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:53:45.845075 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:53:45.847942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:53:45.851719 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:53:45.852056 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:53:45.854160 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:53:45.861065 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:53:45.872569 systemd-journald[1154]: Time spent on flushing to /var/log/journal/937dd5ec89fc46ba88ef5f6edb626dfe is 18.317ms for 945 entries. Jul 6 23:53:45.872569 systemd-journald[1154]: System Journal (/var/log/journal/937dd5ec89fc46ba88ef5f6edb626dfe) is 8.0M, max 195.6M, 187.6M free. Jul 6 23:53:45.915475 systemd-journald[1154]: Received client request to flush runtime journal. Jul 6 23:53:45.868325 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:53:45.869693 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:53:45.881402 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:53:45.885385 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:53:45.887682 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:53:45.897770 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Jul 6 23:53:45.897784 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Jul 6 23:53:45.899241 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:53:45.901087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:53:45.907112 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:53:45.916088 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:53:45.917729 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:53:45.922747 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 6 23:53:45.942565 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:53:45.950057 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:53:45.967212 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jul 6 23:53:45.967234 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jul 6 23:53:45.973193 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:53:46.516809 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
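[Annotation] Quick arithmetic on the journal-flush line above: 18.317 ms spent flushing 945 entries to /var/log/journal works out to roughly 19 microseconds per entry:

```python
ms, entries = 18.317, 945
print(f"{ms / entries * 1000:.1f} us/entry")  # ~19.4 us/entry
```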
Jul 6 23:53:46.534138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:53:46.559995 systemd-udevd[1234]: Using default interface naming scheme 'v255'. Jul 6 23:53:46.577205 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:53:46.588077 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:53:46.605009 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:53:46.638897 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1254) Jul 6 23:53:46.657104 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:53:46.674065 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jul 6 23:53:46.744038 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 6 23:53:46.751893 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:53:46.766919 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 6 23:53:46.775146 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 6 23:53:46.777249 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 6 23:53:46.777409 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 6 23:53:46.781137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:53:46.811674 systemd-networkd[1238]: lo: Link UP Jul 6 23:53:46.811687 systemd-networkd[1238]: lo: Gained carrier Jul 6 23:53:46.815482 systemd-networkd[1238]: Enumeration completed Jul 6 23:53:46.816027 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:53:46.816039 systemd-networkd[1238]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:53:46.818114 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:53:46.820035 systemd-networkd[1238]: eth0: Link UP Jul 6 23:53:46.820048 systemd-networkd[1238]: eth0: Gained carrier Jul 6 23:53:46.820062 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:53:46.825897 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:53:46.829768 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:53:46.836154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:53:46.880137 systemd-networkd[1238]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:53:46.894031 kernel: kvm_amd: TSC scaling supported Jul 6 23:53:46.894124 kernel: kvm_amd: Nested Virtualization enabled Jul 6 23:53:46.894160 kernel: kvm_amd: Nested Paging enabled Jul 6 23:53:46.895266 kernel: kvm_amd: LBR virtualization supported Jul 6 23:53:46.895306 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 6 23:53:46.896243 kernel: kvm_amd: Virtual GIF supported Jul 6 23:53:46.919950 kernel: EDAC MC: Ver: 3.0.0 Jul 6 23:53:46.954455 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:53:46.973049 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:53:46.974756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
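[Annotation] The DHCPv4 lease above ("10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1"), decomposed with the standard library: a /16 prefix puts the host and its gateway on the same on-link subnet, which is why no extra route is logged:

```python
import ipaddress

iface = ipaddress.ip_interface("10.0.0.81/16")
gw = ipaddress.ip_address("10.0.0.1")
print(iface.network)        # 10.0.0.0/16
print(gw in iface.network)  # True: the gateway is directly reachable
```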
Jul 6 23:53:46.984708 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:53:47.022540 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:53:47.024187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:53:47.036009 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:53:47.041650 lvm[1284]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:53:47.082274 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:53:47.083821 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:53:47.085112 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:53:47.085140 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:53:47.086215 systemd[1]: Reached target machines.target - Containers. Jul 6 23:53:47.088343 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:53:47.105069 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:53:47.107794 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:53:47.109134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:53:47.110186 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:53:47.113818 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:53:47.119382 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:53:47.121803 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:53:47.130404 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:53:47.134270 kernel: loop0: detected capacity change from 0 to 142488 Jul 6 23:53:47.148908 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:53:47.149805 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:53:47.160899 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:53:47.195904 kernel: loop1: detected capacity change from 0 to 221472 Jul 6 23:53:47.232905 kernel: loop2: detected capacity change from 0 to 140768 Jul 6 23:53:47.357904 kernel: loop3: detected capacity change from 0 to 142488 Jul 6 23:53:47.373897 kernel: loop4: detected capacity change from 0 to 221472 Jul 6 23:53:47.383221 kernel: loop5: detected capacity change from 0 to 140768 Jul 6 23:53:47.394371 (sd-merge)[1305]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 6 23:53:47.395224 (sd-merge)[1305]: Merged extensions into '/usr'. Jul 6 23:53:47.413377 systemd[1]: Reloading requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:53:47.413398 systemd[1]: Reloading... Jul 6 23:53:47.514895 zram_generator::config[1333]: No configuration found. Jul 6 23:53:47.574501 ldconfig[1288]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
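[Annotation] "Merged extensions into '/usr'" above is systemd-sysext stacking the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images over the base /usr with an overlayfs mount; the loop0–loop5 capacity-change lines are those images being attached. A sketch of the mechanism only; the hierarchy directories below are assumptions, not the paths sysext actually uses:

```python
import subprocess

base = "/usr"
exts = ["/run/sysext/containerd-flatcar/usr",
        "/run/sysext/docker-flatcar/usr",
        "/run/sysext/kubernetes/usr"]
# In overlayfs, earlier lowerdir entries win, so list extensions before the base.
lower = ":".join(exts[::-1] + [base])
subprocess.run(["mount", "-t", "overlay", "overlay",
                "-o", f"lowerdir={lower}", base], check=True)
```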
Jul 6 23:53:47.702025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:53:47.775430 systemd[1]: Reloading finished in 361 ms. Jul 6 23:53:47.797416 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:53:47.800773 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:53:47.811058 systemd[1]: Starting ensure-sysext.service... Jul 6 23:53:47.813647 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:53:47.819600 systemd[1]: Reloading requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:53:47.819616 systemd[1]: Reloading... Jul 6 23:53:47.859212 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:53:47.859640 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:53:47.861302 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:53:47.861678 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Jul 6 23:53:47.861842 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Jul 6 23:53:47.865483 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:53:47.867062 systemd-tmpfiles[1378]: Skipping /boot Jul 6 23:53:47.881900 zram_generator::config[1412]: No configuration found. Jul 6 23:53:47.886565 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:53:47.886694 systemd-tmpfiles[1378]: Skipping /boot Jul 6 23:53:48.020401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:53:48.071109 systemd-networkd[1238]: eth0: Gained IPv6LL Jul 6 23:53:48.089769 systemd[1]: Reloading finished in 269 ms. Jul 6 23:53:48.117160 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:53:48.128990 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:53:48.140765 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:53:48.143928 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:53:48.147040 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:53:48.151134 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:53:48.156180 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:53:48.160737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:53:48.161068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:53:48.166096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:53:48.171485 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 6 23:53:48.183145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:53:48.184239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:53:48.184521 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:53:48.185830 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:53:48.188720 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:53:48.188995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:53:48.190777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:53:48.191076 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:53:48.193411 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:53:48.193705 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:53:48.202526 augenrules[1484]: No rules Jul 6 23:53:48.202861 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:53:48.203237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:53:48.209849 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:53:48.213130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:53:48.216207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:53:48.220011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:53:48.223588 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:53:48.224637 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:53:48.227453 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:53:48.229769 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:53:48.231980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:53:48.232216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:53:48.234124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:53:48.234349 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:53:48.236197 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:53:48.237176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:53:48.240293 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:53:48.245187 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:53:48.254003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:53:48.254226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:53:48.263190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 6 23:53:48.265624 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:53:48.270089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:53:48.274328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:53:48.275576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:53:48.275737 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:53:48.275830 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:53:48.278024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:53:48.278347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:53:48.280421 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:53:48.286404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:53:48.288254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:53:48.288500 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:53:48.290379 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:53:48.290641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:53:48.291028 systemd-resolved[1457]: Positive Trust Anchors: Jul 6 23:53:48.291045 systemd-resolved[1457]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:53:48.291081 systemd-resolved[1457]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:53:48.296265 systemd[1]: Finished ensure-sysext.service. Jul 6 23:53:48.297840 systemd-resolved[1457]: Defaulting to hostname 'linux'. Jul 6 23:53:48.300817 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:53:48.302146 systemd[1]: Reached target network.target - Network. Jul 6 23:53:48.303038 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:53:48.304077 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:53:48.305238 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:53:48.305321 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:53:48.324174 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:53:48.392752 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:53:48.394141 systemd[1]: Reached target sysinit.target - System Initialization. 
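[Annotation] The positive trust anchor systemd-resolved logs above is the DNS root zone's DS record. Its four fields are the key tag, the algorithm (8 = RSA/SHA-256), the digest type (2 = SHA-256), and the digest of the root key-signing key:

```python
tag, alg, digest_type, digest = (
    "20326 8 2 "
    "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
).split()
print(f"key tag {tag}, algorithm {alg}, digest type {digest_type}")
print(f"digest: {digest[:16]}... ({len(digest) * 4} bits)")  # 256-bit SHA-256
```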
Jul 6 23:53:49.012892 systemd-timesyncd[1525]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 6 23:53:49.012939 systemd-timesyncd[1525]: Initial clock synchronization to Sun 2025-07-06 23:53:49.012775 UTC. Jul 6 23:53:49.013368 systemd-resolved[1457]: Clock change detected. Flushing caches. Jul 6 23:53:49.013475 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:53:49.014714 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:53:49.015954 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:53:49.017195 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:53:49.017224 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:53:49.018096 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:53:49.019241 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:53:49.020348 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:53:49.021542 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:53:49.023218 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:53:49.026354 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:53:49.028801 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:53:49.032249 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:53:49.033320 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:53:49.034254 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:53:49.035329 systemd[1]: System is tainted: cgroupsv1 Jul 6 23:53:49.035369 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:53:49.035396 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:53:49.036844 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:53:49.039004 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 6 23:53:49.041214 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:53:49.045991 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:53:49.049994 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:53:49.051001 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:53:49.053160 jq[1532]: false Jul 6 23:53:49.056874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:53:49.060034 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:53:49.062373 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:53:49.065937 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:53:49.072320 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:53:49.076969 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 6 23:53:49.077942 extend-filesystems[1535]: Found loop3 Jul 6 23:53:49.077942 extend-filesystems[1535]: Found loop4 Jul 6 23:53:49.077942 extend-filesystems[1535]: Found loop5 Jul 6 23:53:49.077942 extend-filesystems[1535]: Found sr0 Jul 6 23:53:49.077942 extend-filesystems[1535]: Found vda Jul 6 23:53:49.077942 extend-filesystems[1535]: Found vda1 Jul 6 23:53:49.077942 extend-filesystems[1535]: Found vda2 Jul 6 23:53:49.077942 extend-filesystems[1535]: Found vda3 Jul 6 23:53:49.092255 extend-filesystems[1535]: Found usr Jul 6 23:53:49.092255 extend-filesystems[1535]: Found vda4 Jul 6 23:53:49.092255 extend-filesystems[1535]: Found vda6 Jul 6 23:53:49.092255 extend-filesystems[1535]: Found vda7 Jul 6 23:53:49.092255 extend-filesystems[1535]: Found vda9 Jul 6 23:53:49.092255 extend-filesystems[1535]: Checking size of /dev/vda9 Jul 6 23:53:49.085126 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:53:49.090113 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:53:49.093990 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:53:49.095650 dbus-daemon[1531]: [system] SELinux support is enabled Jul 6 23:53:49.097913 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:53:49.101754 extend-filesystems[1535]: Resized partition /dev/vda9 Jul 6 23:53:49.099097 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:53:49.104358 extend-filesystems[1564]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:53:49.116850 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 6 23:53:49.114387 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:53:49.114755 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:53:49.123878 update_engine[1558]: I20250706 23:53:49.122337 1558 main.cc:92] Flatcar Update Engine starting Jul 6 23:53:49.119247 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:53:49.124284 update_engine[1558]: I20250706 23:53:49.124043 1558 update_check_scheduler.cc:74] Next update check in 10m56s Jul 6 23:53:49.119589 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:53:49.122348 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:53:49.126965 jq[1560]: true Jul 6 23:53:49.129353 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:53:49.129688 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:53:49.135283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1253) Jul 6 23:53:49.209852 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 6 23:53:49.219423 jq[1577]: true Jul 6 23:53:49.221084 (ntainerd)[1578]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:53:49.231366 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 6 23:53:49.231727 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
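[Annotation] The EXT4 resize above grows the root filesystem from 553472 to 1864699 blocks. At the 4 KiB block size resize2fs reports, that is roughly 2.1 GiB expanding to 7.1 GiB, i.e. the image being stretched to fill the disk on first boot:

```python
BLOCK = 4096  # "(4k) blocks" per the extend-filesystems output
for blocks in (553472, 1864699):
    print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
```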
Jul 6 23:53:49.242296 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:53:49.242296 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:53:49.242296 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 6 23:53:49.250481 extend-filesystems[1535]: Resized filesystem in /dev/vda9 Jul 6 23:53:49.247741 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:53:49.248071 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:53:49.256747 tar[1574]: linux-amd64/helm Jul 6 23:53:49.267766 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:53:49.269723 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:53:49.270016 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:53:49.270046 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:53:49.270118 systemd-logind[1550]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:53:49.270140 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:53:49.272949 systemd-logind[1550]: New seat seat0. Jul 6 23:53:49.274345 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:53:49.274367 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:53:49.276291 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:53:49.283973 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:53:49.285229 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:53:49.417745 bash[1615]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:53:49.413144 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:53:49.415655 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:53:49.443178 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:53:49.519184 sshd_keygen[1572]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:53:49.542424 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:53:49.551491 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:53:49.617953 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:53:49.618327 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:53:49.629334 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:53:49.703662 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:53:49.717165 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:53:49.722069 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:53:49.723362 systemd[1]: Reached target getty.target - Login Prompts. 
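[Annotation] "generating new host keys: RSA ECDSA ED25519" above is sshd-keygen.service creating any missing host keys before the login prompts come up. A hedged sketch of the equivalent invocation (ssh-keygen's -A flag generates all default host key types under /etc/ssh; whether the unit wraps it exactly this way is an assumption):

```python
import subprocess

# Generate missing RSA/ECDSA/ED25519 host keys with default paths.
subprocess.run(["ssh-keygen", "-A"], check=True)
```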
Jul 6 23:53:49.799551 containerd[1578]: time="2025-07-06T23:53:49.799305008Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:53:49.834567 containerd[1578]: time="2025-07-06T23:53:49.834429129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:53:49.836832 containerd[1578]: time="2025-07-06T23:53:49.836730094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:53:49.836832 containerd[1578]: time="2025-07-06T23:53:49.836758216Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:53:49.836832 containerd[1578]: time="2025-07-06T23:53:49.836775569Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:53:49.837332 containerd[1578]: time="2025-07-06T23:53:49.837086141Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:53:49.837332 containerd[1578]: time="2025-07-06T23:53:49.837106840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:53:49.837332 containerd[1578]: time="2025-07-06T23:53:49.837185137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:53:49.837332 containerd[1578]: time="2025-07-06T23:53:49.837197400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:53:49.837527 containerd[1578]: time="2025-07-06T23:53:49.837498314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:53:49.837527 containerd[1578]: time="2025-07-06T23:53:49.837520797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:53:49.837581 containerd[1578]: time="2025-07-06T23:53:49.837535314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:53:49.837581 containerd[1578]: time="2025-07-06T23:53:49.837545984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:53:49.837682 containerd[1578]: time="2025-07-06T23:53:49.837663304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:53:49.838602 containerd[1578]: time="2025-07-06T23:53:49.838036544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:53:49.838602 containerd[1578]: time="2025-07-06T23:53:49.838208496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:53:49.838602 containerd[1578]: time="2025-07-06T23:53:49.838221911Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:53:49.838602 containerd[1578]: time="2025-07-06T23:53:49.838518518Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:53:49.838602 containerd[1578]: time="2025-07-06T23:53:49.838590092Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:53:49.845103 containerd[1578]: time="2025-07-06T23:53:49.844993470Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:53:49.845264 containerd[1578]: time="2025-07-06T23:53:49.845238640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:53:49.845287 containerd[1578]: time="2025-07-06T23:53:49.845271302Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:53:49.845308 containerd[1578]: time="2025-07-06T23:53:49.845293313Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:53:49.845328 containerd[1578]: time="2025-07-06T23:53:49.845316035Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:53:49.845658 containerd[1578]: time="2025-07-06T23:53:49.845616268Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:53:49.846382 containerd[1578]: time="2025-07-06T23:53:49.846333764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:53:49.846639 containerd[1578]: time="2025-07-06T23:53:49.846560259Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:53:49.846639 containerd[1578]: time="2025-07-06T23:53:49.846593261Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:53:49.846639 containerd[1578]: time="2025-07-06T23:53:49.846613609Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:53:49.846639 containerd[1578]: time="2025-07-06T23:53:49.846632013Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:53:49.846746 containerd[1578]: time="2025-07-06T23:53:49.846648825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:53:49.846746 containerd[1578]: time="2025-07-06T23:53:49.846683189Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:53:49.846746 containerd[1578]: time="2025-07-06T23:53:49.846703427Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:53:49.846746 containerd[1578]: time="2025-07-06T23:53:49.846723114Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 6 23:53:49.846746 containerd[1578]: time="2025-07-06T23:53:49.846739655Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:53:49.846861 containerd[1578]: time="2025-07-06T23:53:49.846759833Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:53:49.846861 containerd[1578]: time="2025-07-06T23:53:49.846780091Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:53:49.846861 containerd[1578]: time="2025-07-06T23:53:49.846806400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.846861 containerd[1578]: time="2025-07-06T23:53:49.846842959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.846861 containerd[1578]: time="2025-07-06T23:53:49.846858899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.846962 containerd[1578]: time="2025-07-06T23:53:49.846888605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.846962 containerd[1578]: time="2025-07-06T23:53:49.846906137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.846962 containerd[1578]: time="2025-07-06T23:53:49.846924632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.846962 containerd[1578]: time="2025-07-06T23:53:49.846940883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.846962 containerd[1578]: time="2025-07-06T23:53:49.846960069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.847055 containerd[1578]: time="2025-07-06T23:53:49.846979084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.847055 containerd[1578]: time="2025-07-06T23:53:49.846999382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.847055 containerd[1578]: time="2025-07-06T23:53:49.847017486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.847055 containerd[1578]: time="2025-07-06T23:53:49.847034158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.847055 containerd[1578]: time="2025-07-06T23:53:49.847050238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.847151 containerd[1578]: time="2025-07-06T23:53:49.847085414Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:53:49.847151 containerd[1578]: time="2025-07-06T23:53:49.847115520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.847151 containerd[1578]: time="2025-07-06T23:53:49.847133313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 6 23:53:49.847151 containerd[1578]: time="2025-07-06T23:53:49.847149634Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:53:49.847431 containerd[1578]: time="2025-07-06T23:53:49.847215347Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:53:49.847431 containerd[1578]: time="2025-07-06T23:53:49.847242478Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:53:49.847431 containerd[1578]: time="2025-07-06T23:53:49.847255072Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:53:49.847431 containerd[1578]: time="2025-07-06T23:53:49.847269208Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:53:49.847431 containerd[1578]: time="2025-07-06T23:53:49.847280550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.847431 containerd[1578]: time="2025-07-06T23:53:49.847295207Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:53:49.847431 containerd[1578]: time="2025-07-06T23:53:49.847315064Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:53:49.847431 containerd[1578]: time="2025-07-06T23:53:49.847327077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:53:49.847780 containerd[1578]: time="2025-07-06T23:53:49.847716958Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:53:49.847780 containerd[1578]: time="2025-07-06T23:53:49.847782771Z" level=info msg="Connect containerd service" Jul 6 23:53:49.848068 containerd[1578]: time="2025-07-06T23:53:49.847947580Z" level=info msg="using legacy CRI server" Jul 6 23:53:49.848068 containerd[1578]: time="2025-07-06T23:53:49.848063257Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:53:49.848292 containerd[1578]: time="2025-07-06T23:53:49.848256390Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:53:49.854500 containerd[1578]: time="2025-07-06T23:53:49.854362350Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:53:49.854761 containerd[1578]: time="2025-07-06T23:53:49.854672031Z" level=info msg="Start subscribing containerd event" Jul 6 23:53:49.854906 containerd[1578]: time="2025-07-06T23:53:49.854793379Z" level=info msg="Start recovering state" Jul 6 23:53:49.854950 containerd[1578]: time="2025-07-06T23:53:49.854931658Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:53:49.854973 containerd[1578]: time="2025-07-06T23:53:49.854954962Z" level=info msg="Start event monitor" Jul 6 23:53:49.854993 containerd[1578]: time="2025-07-06T23:53:49.854978486Z" level=info msg="Start snapshots syncer" Jul 6 23:53:49.855014 containerd[1578]: time="2025-07-06T23:53:49.854996820Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:53:49.855034 containerd[1578]: time="2025-07-06T23:53:49.855008242Z" level=info msg="Start streaming server" Jul 6 23:53:49.855392 containerd[1578]: time="2025-07-06T23:53:49.854998814Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:53:49.855475 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:53:49.855667 containerd[1578]: time="2025-07-06T23:53:49.855629898Z" level=info msg="containerd successfully booted in 0.057591s" Jul 6 23:53:50.044216 tar[1574]: linux-amd64/LICENSE Jul 6 23:53:50.044424 tar[1574]: linux-amd64/README.md Jul 6 23:53:50.062114 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:53:50.912264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:53:50.913985 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:53:50.915332 systemd[1]: Startup finished in 7.258s (kernel) + 5.325s (userspace) = 12.583s. 
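The long "Start cri plugin with config {...}" dump above is containerd's effective CRI configuration. A minimal /etc/containerd/config.toml sketch reproducing the values visible in the dump; the key names are standard containerd 1.7 syntax, not copied from this host's file:

    # Matches the dump: overlayfs snapshotter, runc v2 shim with
    # SystemdCgroup=false, pause:3.8 sandbox image, default CNI paths.
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error right after is expected at this point: /etc/cni/net.d is still empty until a network plugin is installed.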
Jul 6 23:53:50.937262 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:53:51.344636 kubelet[1665]: E0706 23:53:51.344481 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:53:51.348581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:53:51.348876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:53:52.261481 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:53:52.275188 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:35022.service - OpenSSH per-connection server daemon (10.0.0.1:35022). Jul 6 23:53:52.313169 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 35022 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:53:52.315500 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:52.325928 systemd-logind[1550]: New session 1 of user core. Jul 6 23:53:52.327264 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:53:52.337045 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:53:52.349707 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:53:52.352735 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:53:52.360667 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:53:52.470923 systemd[1684]: Queued start job for default target default.target. Jul 6 23:53:52.471307 systemd[1684]: Created slice app.slice - User Application Slice. Jul 6 23:53:52.471326 systemd[1684]: Reached target paths.target - Paths. Jul 6 23:53:52.471350 systemd[1684]: Reached target timers.target - Timers. Jul 6 23:53:52.479921 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:53:52.487306 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:53:52.487409 systemd[1684]: Reached target sockets.target - Sockets. Jul 6 23:53:52.487427 systemd[1684]: Reached target basic.target - Basic System. Jul 6 23:53:52.487477 systemd[1684]: Reached target default.target - Main User Target. Jul 6 23:53:52.487524 systemd[1684]: Startup finished in 120ms. Jul 6 23:53:52.488058 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:53:52.489701 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:53:52.552151 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:35026.service - OpenSSH per-connection server daemon (10.0.0.1:35026). Jul 6 23:53:52.583970 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 35026 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:53:52.585657 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:52.590215 systemd-logind[1550]: New session 2 of user core. Jul 6 23:53:52.600119 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 6 23:53:52.656452 sshd[1696]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:52.666092 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:35038.service - OpenSSH per-connection server daemon (10.0.0.1:35038). Jul 6 23:53:52.666581 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:35026.service: Deactivated successfully. Jul 6 23:53:52.669183 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:53:52.670594 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:53:52.671726 systemd-logind[1550]: Removed session 2. Jul 6 23:53:52.695909 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 35038 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:53:52.697651 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:52.702092 systemd-logind[1550]: New session 3 of user core. Jul 6 23:53:52.712120 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:53:52.763139 sshd[1701]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:52.773177 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:35046.service - OpenSSH per-connection server daemon (10.0.0.1:35046). Jul 6 23:53:52.773796 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:35038.service: Deactivated successfully. Jul 6 23:53:52.775865 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:53:52.776639 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:53:52.778130 systemd-logind[1550]: Removed session 3. Jul 6 23:53:52.803204 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 35046 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:53:52.805015 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:52.809649 systemd-logind[1550]: New session 4 of user core. Jul 6 23:53:52.819177 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:53:52.874672 sshd[1709]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:52.890129 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:35052.service - OpenSSH per-connection server daemon (10.0.0.1:35052). Jul 6 23:53:52.890810 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:35046.service: Deactivated successfully. Jul 6 23:53:52.894130 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:53:52.895563 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:53:52.896857 systemd-logind[1550]: Removed session 4. Jul 6 23:53:52.920082 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 35052 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:53:52.921637 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:52.925938 systemd-logind[1550]: New session 5 of user core. Jul 6 23:53:52.938086 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:53:52.998577 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:53:52.998947 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:53:53.027563 sudo[1724]: pam_unix(sudo:session): session closed for user root Jul 6 23:53:53.029621 sshd[1717]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:53.039098 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:35068.service - OpenSSH per-connection server daemon (10.0.0.1:35068). 
Jul 6 23:53:53.039573 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:35052.service: Deactivated successfully. Jul 6 23:53:53.042417 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:53:53.043303 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:53:53.045209 systemd-logind[1550]: Removed session 5. Jul 6 23:53:53.069400 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 35068 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:53:53.071050 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:53.074986 systemd-logind[1550]: New session 6 of user core. Jul 6 23:53:53.081082 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:53:53.136522 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:53:53.136903 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:53:53.141246 sudo[1734]: pam_unix(sudo:session): session closed for user root Jul 6 23:53:53.150054 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:53:53.150516 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:53:53.172045 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:53:53.174006 auditctl[1737]: No rules Jul 6 23:53:53.175533 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:53:53.175908 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:53:53.177929 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:53:53.219277 augenrules[1756]: No rules Jul 6 23:53:53.221457 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:53:53.223308 sudo[1733]: pam_unix(sudo:session): session closed for user root Jul 6 23:53:53.225603 sshd[1726]: pam_unix(sshd:session): session closed for user core Jul 6 23:53:53.236044 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:35072.service - OpenSSH per-connection server daemon (10.0.0.1:35072). Jul 6 23:53:53.236558 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:35068.service: Deactivated successfully. Jul 6 23:53:53.239356 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:53:53.240473 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:53:53.241656 systemd-logind[1550]: Removed session 6. Jul 6 23:53:53.265839 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 35072 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:53:53.267548 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:53:53.272029 systemd-logind[1550]: New session 7 of user core. Jul 6 23:53:53.285102 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:53:53.340838 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:53:53.341180 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:53:53.938036 systemd[1]: Starting docker.service - Docker Application Container Engine... 
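The two sudo entries above are the whole audit-rules adjustment: the session deletes the shipped audit rule fragments, then restarts audit-rules, after which auditctl and augenrules both report "No rules". As a sketch, run as the core user:

    # Exactly the commands recorded by sudo in the log above.
    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules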
Jul 6 23:53:53.938258 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:53:54.675681 dockerd[1788]: time="2025-07-06T23:53:54.675583521Z" level=info msg="Starting up" Jul 6 23:53:55.488561 dockerd[1788]: time="2025-07-06T23:53:55.488489881Z" level=info msg="Loading containers: start." Jul 6 23:53:55.617858 kernel: Initializing XFRM netlink socket Jul 6 23:53:55.714362 systemd-networkd[1238]: docker0: Link UP Jul 6 23:53:55.742081 dockerd[1788]: time="2025-07-06T23:53:55.741927437Z" level=info msg="Loading containers: done." Jul 6 23:53:55.761918 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2780969622-merged.mount: Deactivated successfully. Jul 6 23:53:55.763449 dockerd[1788]: time="2025-07-06T23:53:55.763399655Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:53:55.763592 dockerd[1788]: time="2025-07-06T23:53:55.763566608Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:53:55.763761 dockerd[1788]: time="2025-07-06T23:53:55.763737929Z" level=info msg="Daemon has completed initialization" Jul 6 23:53:55.805931 dockerd[1788]: time="2025-07-06T23:53:55.805792486Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:53:55.806128 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:53:56.492826 containerd[1578]: time="2025-07-06T23:53:56.492768622Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:53:57.108396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2502719033.mount: Deactivated successfully. 
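Although dockerd has just started, the PullImage request above and the ImageCreate events that follow are served by containerd's CRI plugin, logged as containerd[1578]. A sketch of driving the same pull by hand, assuming crictl is installed on the host:

    # Pull and list over the CRI socket; the image id printed should match
    # the sha256: reference in the ImageCreate events below.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.31.10
    crictl images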
Jul 6 23:53:57.938036 containerd[1578]: time="2025-07-06T23:53:57.937966033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:57.938706 containerd[1578]: time="2025-07-06T23:53:57.938668430Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 6 23:53:57.939760 containerd[1578]: time="2025-07-06T23:53:57.939725683Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:57.942549 containerd[1578]: time="2025-07-06T23:53:57.942501959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:57.943631 containerd[1578]: time="2025-07-06T23:53:57.943596051Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.45077954s" Jul 6 23:53:57.943631 containerd[1578]: time="2025-07-06T23:53:57.943630235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 6 23:53:57.944280 containerd[1578]: time="2025-07-06T23:53:57.944214651Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:53:59.066426 containerd[1578]: time="2025-07-06T23:53:59.066353873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:59.067070 containerd[1578]: time="2025-07-06T23:53:59.067024521Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 6 23:53:59.068334 containerd[1578]: time="2025-07-06T23:53:59.068286788Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:59.071179 containerd[1578]: time="2025-07-06T23:53:59.071120692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:53:59.072432 containerd[1578]: time="2025-07-06T23:53:59.072390704Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.128104389s" Jul 6 23:53:59.072476 containerd[1578]: time="2025-07-06T23:53:59.072431060Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 6 23:53:59.072978 
containerd[1578]: time="2025-07-06T23:53:59.072930146Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:54:00.524386 containerd[1578]: time="2025-07-06T23:54:00.524303228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:00.570215 containerd[1578]: time="2025-07-06T23:54:00.570085025Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 6 23:54:00.604868 containerd[1578]: time="2025-07-06T23:54:00.604783217Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:00.694549 containerd[1578]: time="2025-07-06T23:54:00.694494007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:00.695761 containerd[1578]: time="2025-07-06T23:54:00.695715868Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.62275255s" Jul 6 23:54:00.695761 containerd[1578]: time="2025-07-06T23:54:00.695748259Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 6 23:54:00.696271 containerd[1578]: time="2025-07-06T23:54:00.696224071Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:54:01.597355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:54:01.607201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:54:01.820568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:54:01.825259 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:54:01.934433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2699022728.mount: Deactivated successfully. Jul 6 23:54:01.936655 kubelet[2014]: E0706 23:54:01.936614 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:54:01.943043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:54:01.943411 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
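Both kubelet exits so far (status=1/FAILURE) are the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is only written by kubeadm init or kubeadm join, so the unit keeps failing until then. A hedged sketch of the file kubeadm generates; the two values shown are chosen to match the cgroupfs driver and static-pod path that appear in the kubelet startup log further down, and everything else is left at defaults:

    # /var/lib/kubelet/config.yaml (illustrative, not read from this host)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests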
Jul 6 23:54:02.559320 containerd[1578]: time="2025-07-06T23:54:02.559239670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:02.560047 containerd[1578]: time="2025-07-06T23:54:02.559959921Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 6 23:54:02.561212 containerd[1578]: time="2025-07-06T23:54:02.561168979Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:02.563291 containerd[1578]: time="2025-07-06T23:54:02.563244271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:02.563948 containerd[1578]: time="2025-07-06T23:54:02.563902655Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.867641395s" Jul 6 23:54:02.563948 containerd[1578]: time="2025-07-06T23:54:02.563940887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 6 23:54:02.564604 containerd[1578]: time="2025-07-06T23:54:02.564557293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:54:03.906198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146951224.mount: Deactivated successfully. 
Jul 6 23:54:04.770003 containerd[1578]: time="2025-07-06T23:54:04.769947266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:04.770777 containerd[1578]: time="2025-07-06T23:54:04.770746325Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 6 23:54:04.772646 containerd[1578]: time="2025-07-06T23:54:04.772614128Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:04.775752 containerd[1578]: time="2025-07-06T23:54:04.775703772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:04.776882 containerd[1578]: time="2025-07-06T23:54:04.776846194Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.212242334s" Jul 6 23:54:04.776950 containerd[1578]: time="2025-07-06T23:54:04.776884847Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:54:04.777425 containerd[1578]: time="2025-07-06T23:54:04.777397538Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:54:05.567268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946651248.mount: Deactivated successfully. 
Jul 6 23:54:06.048728 containerd[1578]: time="2025-07-06T23:54:06.048563188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:06.049638 containerd[1578]: time="2025-07-06T23:54:06.049586507Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:54:06.050987 containerd[1578]: time="2025-07-06T23:54:06.050949022Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:06.084840 containerd[1578]: time="2025-07-06T23:54:06.084743018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:06.085627 containerd[1578]: time="2025-07-06T23:54:06.085581109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.308152814s" Jul 6 23:54:06.085686 containerd[1578]: time="2025-07-06T23:54:06.085632305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:54:06.086126 containerd[1578]: time="2025-07-06T23:54:06.086101455Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:54:06.737486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976429604.mount: Deactivated successfully. Jul 6 23:54:09.490425 containerd[1578]: time="2025-07-06T23:54:09.490204944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:09.491586 containerd[1578]: time="2025-07-06T23:54:09.490766347Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 6 23:54:09.492139 containerd[1578]: time="2025-07-06T23:54:09.492099427Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:09.495145 containerd[1578]: time="2025-07-06T23:54:09.495110855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:09.496354 containerd[1578]: time="2025-07-06T23:54:09.496315574Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.41018837s" Jul 6 23:54:09.496395 containerd[1578]: time="2025-07-06T23:54:09.496363464Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 6 23:54:11.800480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
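The pulls above cover the full control-plane image set for this release: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.31.10, coredns v1.11.3, pause 3.10, and etcd 3.5.15-0. A sketch of pre-fetching the same set in one step, version pinned to match the log:

    # kubeadm resolves and pulls the same seven images.
    kubeadm config images pull --kubernetes-version v1.31.10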
Jul 6 23:54:11.811121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:54:11.838392 systemd[1]: Reloading requested from client PID 2168 ('systemctl') (unit session-7.scope)... Jul 6 23:54:11.838412 systemd[1]: Reloading... Jul 6 23:54:11.973857 zram_generator::config[2216]: No configuration found. Jul 6 23:54:12.171071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:54:12.250311 systemd[1]: Reloading finished in 411 ms. Jul 6 23:54:12.306949 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:54:12.307096 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:54:12.307605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:54:12.310091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:54:12.478883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:54:12.483646 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:54:12.528802 kubelet[2267]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:54:12.528802 kubelet[2267]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:54:12.528802 kubelet[2267]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
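The three deprecation warnings above ask for flag values to be moved into the kubelet config file. A sketch of the config-file form of the first one, assuming the containerd socket path used elsewhere in this log; the other two are left as flags here, since the --pod-infra-container-image warning only announces a future removal:

    # Added to /var/lib/kubelet/config.yaml (KubeletConfiguration v1beta1)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock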
Jul 6 23:54:12.528802 kubelet[2267]: I0706 23:54:12.528197 2267 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:54:12.813568 kubelet[2267]: I0706 23:54:12.813457 2267 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:54:12.813568 kubelet[2267]: I0706 23:54:12.813487 2267 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:54:12.813731 kubelet[2267]: I0706 23:54:12.813716 2267 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:54:12.832691 kubelet[2267]: E0706 23:54:12.832652 2267 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:54:12.833393 kubelet[2267]: I0706 23:54:12.833365 2267 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:54:12.839456 kubelet[2267]: E0706 23:54:12.839409 2267 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:54:12.839456 kubelet[2267]: I0706 23:54:12.839456 2267 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:54:12.847578 kubelet[2267]: I0706 23:54:12.847531 2267 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:54:12.848388 kubelet[2267]: I0706 23:54:12.848354 2267 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:54:12.848537 kubelet[2267]: I0706 23:54:12.848500 2267 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:54:12.848689 kubelet[2267]: I0706 23:54:12.848530 2267 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 6 23:54:12.848808 kubelet[2267]: I0706 23:54:12.848704 2267 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:54:12.848808 kubelet[2267]: I0706 23:54:12.848712 2267 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:54:12.848910 kubelet[2267]: I0706 23:54:12.848859 2267 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:54:12.850739 kubelet[2267]: I0706 23:54:12.850706 2267 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:54:12.850739 kubelet[2267]: I0706 23:54:12.850735 2267 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:54:12.850843 kubelet[2267]: I0706 23:54:12.850776 2267 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:54:12.850843 kubelet[2267]: I0706 23:54:12.850802 2267 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:54:12.852440 kubelet[2267]: W0706 23:54:12.852384 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 6 23:54:12.852494 kubelet[2267]: E0706 23:54:12.852445 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:54:12.853524 kubelet[2267]: I0706 23:54:12.853499 2267 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:54:12.853735 kubelet[2267]: W0706 23:54:12.853688 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 6 23:54:12.853735 kubelet[2267]: E0706 23:54:12.853724 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:54:12.854027 kubelet[2267]: I0706 23:54:12.854001 2267 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:54:12.854688 kubelet[2267]: W0706 23:54:12.854653 2267 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:54:12.858238 kubelet[2267]: I0706 23:54:12.856737 2267 server.go:1274] "Started kubelet" Jul 6 23:54:12.858238 kubelet[2267]: I0706 23:54:12.856979 2267 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:54:12.858238 kubelet[2267]: I0706 23:54:12.856977 2267 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:54:12.858238 kubelet[2267]: I0706 23:54:12.857416 2267 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:54:12.858238 kubelet[2267]: I0706 23:54:12.858132 2267 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:54:12.858440 kubelet[2267]: I0706 23:54:12.858418 2267 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:54:12.859296 kubelet[2267]: I0706 23:54:12.859273 2267 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:54:12.861646 kubelet[2267]: I0706 23:54:12.861057 2267 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:54:12.861646 kubelet[2267]: I0706 23:54:12.861168 2267 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:54:12.861646 kubelet[2267]: I0706 23:54:12.861338 2267 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:54:12.862039 kubelet[2267]: W0706 23:54:12.861880 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 6 23:54:12.862039 kubelet[2267]: E0706 23:54:12.861935 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:54:12.862168 
kubelet[2267]: E0706 23:54:12.862145 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:54:12.862251 kubelet[2267]: E0706 23:54:12.862221 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" Jul 6 23:54:12.862483 kubelet[2267]: E0706 23:54:12.861353 2267 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fceb411b1aef1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:54:12.856704753 +0000 UTC m=+0.368685938,LastTimestamp:2025-07-06 23:54:12.856704753 +0000 UTC m=+0.368685938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:54:12.864351 kubelet[2267]: E0706 23:54:12.864330 2267 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:54:12.865025 kubelet[2267]: I0706 23:54:12.865009 2267 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:54:12.865118 kubelet[2267]: I0706 23:54:12.865108 2267 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:54:12.865302 kubelet[2267]: I0706 23:54:12.865275 2267 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:54:12.881000 kubelet[2267]: I0706 23:54:12.880954 2267 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:54:12.882676 kubelet[2267]: I0706 23:54:12.882651 2267 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:54:12.882728 kubelet[2267]: I0706 23:54:12.882694 2267 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:54:12.882728 kubelet[2267]: I0706 23:54:12.882721 2267 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:54:12.882968 kubelet[2267]: E0706 23:54:12.882936 2267 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:54:12.883529 kubelet[2267]: W0706 23:54:12.883501 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 6 23:54:12.883578 kubelet[2267]: E0706 23:54:12.883542 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:54:12.887538 kubelet[2267]: I0706 23:54:12.887517 2267 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:54:12.887538 kubelet[2267]: I0706 23:54:12.887531 2267 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:54:12.887631 kubelet[2267]: I0706 23:54:12.887546 2267 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:54:12.963153 kubelet[2267]: E0706 23:54:12.963098 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:54:12.983419 kubelet[2267]: E0706 23:54:12.983389 2267 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:54:13.063235 kubelet[2267]: E0706 23:54:13.063193 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:54:13.063516 kubelet[2267]: E0706 23:54:13.063489 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" Jul 6 23:54:13.164290 kubelet[2267]: E0706 23:54:13.164129 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:54:13.184396 kubelet[2267]: E0706 23:54:13.184327 2267 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:54:13.246201 kubelet[2267]: I0706 23:54:13.246148 2267 policy_none.go:49] "None policy: Start" Jul 6 23:54:13.247044 kubelet[2267]: I0706 23:54:13.247026 2267 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:54:13.247140 kubelet[2267]: I0706 23:54:13.247056 2267 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:54:13.254483 kubelet[2267]: I0706 23:54:13.254460 2267 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:54:13.254731 kubelet[2267]: I0706 23:54:13.254704 2267 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:54:13.254762 kubelet[2267]: I0706 23:54:13.254731 2267 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 
monitorPeriod="10s" Jul 6 23:54:13.255739 kubelet[2267]: I0706 23:54:13.255689 2267 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:54:13.256374 kubelet[2267]: E0706 23:54:13.256351 2267 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:54:13.356185 kubelet[2267]: I0706 23:54:13.356143 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:54:13.356521 kubelet[2267]: E0706 23:54:13.356479 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 6 23:54:13.464711 kubelet[2267]: E0706 23:54:13.464598 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" Jul 6 23:54:13.558100 kubelet[2267]: I0706 23:54:13.558071 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:54:13.558572 kubelet[2267]: E0706 23:54:13.558370 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 6 23:54:13.664875 kubelet[2267]: I0706 23:54:13.664828 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:54:13.664875 kubelet[2267]: I0706 23:54:13.664868 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:54:13.664875 kubelet[2267]: I0706 23:54:13.664886 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:54:13.665119 kubelet[2267]: I0706 23:54:13.664900 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:54:13.665119 kubelet[2267]: I0706 23:54:13.664931 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:54:13.665119 kubelet[2267]: I0706 23:54:13.664959 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:54:13.665119 kubelet[2267]: I0706 23:54:13.664981 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:54:13.665119 kubelet[2267]: I0706 23:54:13.664996 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:54:13.665231 kubelet[2267]: I0706 23:54:13.665010 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:54:13.715569 kubelet[2267]: W0706 23:54:13.715447 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 6 23:54:13.715569 kubelet[2267]: E0706 23:54:13.715500 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:54:13.890003 kubelet[2267]: E0706 23:54:13.889948 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:13.890672 containerd[1578]: time="2025-07-06T23:54:13.890611728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:053976a3b4f8563497a8a85e0c894dd8,Namespace:kube-system,Attempt:0,}" Jul 6 23:54:13.891685 kubelet[2267]: E0706 23:54:13.891636 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:13.891903 kubelet[2267]: E0706 23:54:13.891887 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:13.892320 containerd[1578]: time="2025-07-06T23:54:13.891995202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 6 23:54:13.892320 containerd[1578]: time="2025-07-06T23:54:13.892131678Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 6 23:54:13.960573 kubelet[2267]: I0706 23:54:13.960516 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:54:13.960917 kubelet[2267]: E0706 23:54:13.960874 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 6 23:54:13.982531 kubelet[2267]: W0706 23:54:13.982398 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 6 23:54:13.982531 kubelet[2267]: E0706 23:54:13.982456 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:54:14.266084 kubelet[2267]: E0706 23:54:14.265909 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="1.6s" Jul 6 23:54:14.266084 kubelet[2267]: W0706 23:54:14.265896 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 6 23:54:14.266084 kubelet[2267]: E0706 23:54:14.266011 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:54:14.390028 kubelet[2267]: W0706 23:54:14.389920 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 6 23:54:14.390028 kubelet[2267]: E0706 23:54:14.390017 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:54:14.439465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3485508995.mount: Deactivated successfully. 
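
The failures above are the normal bootstrap ordering problem for a static-pod control plane: the kubelet cannot register the node or create its lease until kube-apiserver answers on 10.0.0.81:6443, but kube-apiserver is itself one of the static pods the kubelet is about to start, so every list/watch and lease request ends in "connection refused" while the retry interval doubles (400ms, 800ms, 1.6s). The following standalone Go sketch (not kubelet code; the endpoint is simply the one from the log) reproduces the same reachability check; once the apiserver container created below comes up, the same dial succeeds and registration proceeds.

    // probe_apiserver.go — a minimal sketch (not kubelet code): dial the
    // apiserver endpoint seen in the logs and report whether the TCP
    // connection is refused, mimicking the kubelet's failing requests.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 10.0.0.81:6443 is the endpoint from the log lines above.
        conn, err := net.DialTimeout("tcp", "10.0.0.81:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable yet:", err) // e.g. "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
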
Jul 6 23:54:14.447872 containerd[1578]: time="2025-07-06T23:54:14.447802109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:54:14.449067 containerd[1578]: time="2025-07-06T23:54:14.449015765Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:54:14.450052 containerd[1578]: time="2025-07-06T23:54:14.449959004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:54:14.451222 containerd[1578]: time="2025-07-06T23:54:14.451191465Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:54:14.452131 containerd[1578]: time="2025-07-06T23:54:14.452078248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:54:14.452972 containerd[1578]: time="2025-07-06T23:54:14.452910078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:54:14.453961 containerd[1578]: time="2025-07-06T23:54:14.453913329Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:54:14.458320 containerd[1578]: time="2025-07-06T23:54:14.458287643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:54:14.459134 containerd[1578]: time="2025-07-06T23:54:14.459104425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 568.396826ms" Jul 6 23:54:14.462866 containerd[1578]: time="2025-07-06T23:54:14.462809112Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.752955ms" Jul 6 23:54:14.465576 containerd[1578]: time="2025-07-06T23:54:14.465543630Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 573.359393ms" Jul 6 23:54:14.617613 containerd[1578]: time="2025-07-06T23:54:14.615722439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:54:14.617613 containerd[1578]: time="2025-07-06T23:54:14.615776571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:54:14.617613 containerd[1578]: time="2025-07-06T23:54:14.615797730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:14.617613 containerd[1578]: time="2025-07-06T23:54:14.615936691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:14.618258 containerd[1578]: time="2025-07-06T23:54:14.617906114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:54:14.618258 containerd[1578]: time="2025-07-06T23:54:14.617972268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:54:14.618258 containerd[1578]: time="2025-07-06T23:54:14.617984682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:14.618258 containerd[1578]: time="2025-07-06T23:54:14.618089258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:14.619147 containerd[1578]: time="2025-07-06T23:54:14.619055760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:54:14.619147 containerd[1578]: time="2025-07-06T23:54:14.619097358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:54:14.619147 containerd[1578]: time="2025-07-06T23:54:14.619110734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:14.619426 containerd[1578]: time="2025-07-06T23:54:14.619363207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:14.677561 containerd[1578]: time="2025-07-06T23:54:14.677506697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"64ab3abb1794932a05b5d60f23701dadb5fcc3b794ab64038408b3ea1e2093df\"" Jul 6 23:54:14.678859 kubelet[2267]: E0706 23:54:14.678799 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:14.681057 containerd[1578]: time="2025-07-06T23:54:14.680996652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:053976a3b4f8563497a8a85e0c894dd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"eada10951c616c158172ef94e2bdd99a8580faec637844e1cfadf826b2d453de\"" Jul 6 23:54:14.681731 kubelet[2267]: E0706 23:54:14.681712 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:14.681803 containerd[1578]: time="2025-07-06T23:54:14.681035334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fb370b24ef107af2991363a208be0d49592a50117b2c1ea6aabe8e10879d63f\"" Jul 6 23:54:14.681950 containerd[1578]: time="2025-07-06T23:54:14.681399557Z" level=info msg="CreateContainer within sandbox \"64ab3abb1794932a05b5d60f23701dadb5fcc3b794ab64038408b3ea1e2093df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:54:14.682366 kubelet[2267]: E0706 23:54:14.682346 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:14.684246 containerd[1578]: time="2025-07-06T23:54:14.684212913Z" level=info msg="CreateContainer within sandbox \"eada10951c616c158172ef94e2bdd99a8580faec637844e1cfadf826b2d453de\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:54:14.684447 containerd[1578]: time="2025-07-06T23:54:14.684289667Z" level=info msg="CreateContainer within sandbox \"3fb370b24ef107af2991363a208be0d49592a50117b2c1ea6aabe8e10879d63f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:54:14.709105 containerd[1578]: time="2025-07-06T23:54:14.709053358Z" level=info msg="CreateContainer within sandbox \"64ab3abb1794932a05b5d60f23701dadb5fcc3b794ab64038408b3ea1e2093df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c068412a1197bda763ae788ce7c00cced4f366f537fc2e9751c18d7cb312768a\"" Jul 6 23:54:14.710048 containerd[1578]: time="2025-07-06T23:54:14.710022676Z" level=info msg="StartContainer for \"c068412a1197bda763ae788ce7c00cced4f366f537fc2e9751c18d7cb312768a\"" Jul 6 23:54:14.715219 containerd[1578]: time="2025-07-06T23:54:14.715164148Z" level=info msg="CreateContainer within sandbox \"3fb370b24ef107af2991363a208be0d49592a50117b2c1ea6aabe8e10879d63f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ef52243717446009c02858e1a7344466f0937fd1ac7f2cbb1e83f925a831450\"" Jul 6 23:54:14.715801 containerd[1578]: time="2025-07-06T23:54:14.715752722Z" level=info msg="StartContainer for 
\"0ef52243717446009c02858e1a7344466f0937fd1ac7f2cbb1e83f925a831450\"" Jul 6 23:54:14.716576 containerd[1578]: time="2025-07-06T23:54:14.716546641Z" level=info msg="CreateContainer within sandbox \"eada10951c616c158172ef94e2bdd99a8580faec637844e1cfadf826b2d453de\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e5eeaa78303338a9221a64e6845898d3f8a797b21bdc3b2876c259c0353b48fb\"" Jul 6 23:54:14.717105 containerd[1578]: time="2025-07-06T23:54:14.717076775Z" level=info msg="StartContainer for \"e5eeaa78303338a9221a64e6845898d3f8a797b21bdc3b2876c259c0353b48fb\"" Jul 6 23:54:14.764272 kubelet[2267]: I0706 23:54:14.764164 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:54:14.764621 kubelet[2267]: E0706 23:54:14.764569 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 6 23:54:15.200519 containerd[1578]: time="2025-07-06T23:54:15.200455032Z" level=info msg="StartContainer for \"c068412a1197bda763ae788ce7c00cced4f366f537fc2e9751c18d7cb312768a\" returns successfully" Jul 6 23:54:15.201043 containerd[1578]: time="2025-07-06T23:54:15.200689141Z" level=info msg="StartContainer for \"e5eeaa78303338a9221a64e6845898d3f8a797b21bdc3b2876c259c0353b48fb\" returns successfully" Jul 6 23:54:15.201043 containerd[1578]: time="2025-07-06T23:54:15.200853729Z" level=info msg="StartContainer for \"0ef52243717446009c02858e1a7344466f0937fd1ac7f2cbb1e83f925a831450\" returns successfully" Jul 6 23:54:15.213803 kubelet[2267]: E0706 23:54:15.213675 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:15.214884 kubelet[2267]: E0706 23:54:15.214809 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:15.869507 kubelet[2267]: E0706 23:54:15.869456 2267 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 6 23:54:16.042975 kubelet[2267]: E0706 23:54:16.042924 2267 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 6 23:54:16.216143 kubelet[2267]: E0706 23:54:16.216000 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:16.216143 kubelet[2267]: E0706 23:54:16.216000 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:16.366670 kubelet[2267]: I0706 23:54:16.366634 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:54:16.373167 kubelet[2267]: I0706 23:54:16.373144 2267 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:54:16.853794 kubelet[2267]: I0706 23:54:16.853736 2267 apiserver.go:52] "Watching apiserver" Jul 6 23:54:16.862227 kubelet[2267]: I0706 23:54:16.862177 2267 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:54:17.254805 kubelet[2267]: E0706 
23:54:17.254631 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:17.925669 systemd[1]: Reloading requested from client PID 2543 ('systemctl') (unit session-7.scope)... Jul 6 23:54:17.925688 systemd[1]: Reloading... Jul 6 23:54:17.992868 zram_generator::config[2582]: No configuration found. Jul 6 23:54:18.143480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:54:18.218457 kubelet[2267]: E0706 23:54:18.218305 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:18.251500 systemd[1]: Reloading finished in 325 ms. Jul 6 23:54:18.299379 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:54:18.319551 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:54:18.320248 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:54:18.336338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:54:18.554657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:54:18.561990 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:54:18.615583 kubelet[2637]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:54:18.615583 kubelet[2637]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:54:18.615583 kubelet[2637]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:54:18.616372 kubelet[2637]: I0706 23:54:18.615625 2637 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:54:18.624976 kubelet[2637]: I0706 23:54:18.624937 2637 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:54:18.624976 kubelet[2637]: I0706 23:54:18.624962 2637 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:54:18.625371 kubelet[2637]: I0706 23:54:18.625162 2637 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:54:18.627087 kubelet[2637]: I0706 23:54:18.627057 2637 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
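
After the systemd reload, the restarted kubelet (PID 2637) bootstraps with client certificate rotation on and loads the combined cert/key PEM at /var/lib/kubelet/pki/kubelet-client-current.pem. A minimal Go sketch for inspecting that file's validity window, assuming it keeps the usual layout with certificate and private key in one PEM (which is why the same path is passed twice):

    // inspect_kubelet_cert.go — a minimal sketch (assumes the rotated
    // client cert path seen in the log) that loads the kubelet's
    // cert/key pair and prints its validity window.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
    )

    func main() {
        // kubelet-client-current.pem is assumed to hold both the cert
        // and the key, so it serves as both arguments here.
        const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
        pair, err := tls.LoadX509KeyPair(pem, pem)
        if err != nil {
            log.Fatal(err)
        }
        cert, err := x509.ParseCertificate(pair.Certificate[0])
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("subject:   ", cert.Subject)
        fmt.Println("not before:", cert.NotBefore)
        fmt.Println("not after: ", cert.NotAfter)
    }
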
Jul 6 23:54:18.628996 kubelet[2637]: I0706 23:54:18.628957 2637 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:54:18.634671 kubelet[2637]: E0706 23:54:18.634625 2637 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:54:18.634671 kubelet[2637]: I0706 23:54:18.634652 2637 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:54:18.639709 kubelet[2637]: I0706 23:54:18.639668 2637 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:54:18.640206 kubelet[2637]: I0706 23:54:18.640179 2637 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:54:18.640370 kubelet[2637]: I0706 23:54:18.640318 2637 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:54:18.640905 kubelet[2637]: I0706 23:54:18.640357 2637 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 6 23:54:18.640905 kubelet[2637]: I0706 23:54:18.640883 2637 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:54:18.640905 kubelet[2637]: I0706 23:54:18.640894 2637 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:54:18.641034 kubelet[2637]: I0706 23:54:18.640924 2637 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:54:18.641062 kubelet[2637]: I0706 23:54:18.641048 2637 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:54:18.641062 kubelet[2637]: I0706 23:54:18.641061 2637 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:54:18.641124 kubelet[2637]: I0706 23:54:18.641092 2637 
kubelet.go:314] "Adding apiserver pod source" Jul 6 23:54:18.641124 kubelet[2637]: I0706 23:54:18.641101 2637 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:54:18.642466 kubelet[2637]: I0706 23:54:18.642411 2637 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:54:18.643184 kubelet[2637]: I0706 23:54:18.643145 2637 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:54:18.643766 kubelet[2637]: I0706 23:54:18.643715 2637 server.go:1274] "Started kubelet" Jul 6 23:54:18.646407 kubelet[2637]: I0706 23:54:18.646257 2637 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:54:18.646857 kubelet[2637]: I0706 23:54:18.646808 2637 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:54:18.654428 kubelet[2637]: I0706 23:54:18.654241 2637 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:54:18.655648 kubelet[2637]: I0706 23:54:18.655362 2637 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:54:18.655737 kubelet[2637]: I0706 23:54:18.655712 2637 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:54:18.657365 kubelet[2637]: I0706 23:54:18.657345 2637 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:54:18.658244 kubelet[2637]: I0706 23:54:18.658219 2637 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:54:18.658393 kubelet[2637]: E0706 23:54:18.658365 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:54:18.664294 kubelet[2637]: I0706 23:54:18.659813 2637 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:54:18.664294 kubelet[2637]: I0706 23:54:18.660439 2637 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:54:18.664294 kubelet[2637]: I0706 23:54:18.662380 2637 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:54:18.664294 kubelet[2637]: I0706 23:54:18.662471 2637 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:54:18.666247 kubelet[2637]: E0706 23:54:18.664634 2637 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:54:18.666247 kubelet[2637]: I0706 23:54:18.665328 2637 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:54:18.683358 kubelet[2637]: I0706 23:54:18.683207 2637 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:54:18.685866 kubelet[2637]: I0706 23:54:18.685454 2637 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:54:18.685866 kubelet[2637]: I0706 23:54:18.685483 2637 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:54:18.685866 kubelet[2637]: I0706 23:54:18.685501 2637 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:54:18.685866 kubelet[2637]: E0706 23:54:18.685552 2637 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:54:18.731851 kubelet[2637]: I0706 23:54:18.731610 2637 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:54:18.731851 kubelet[2637]: I0706 23:54:18.731634 2637 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:54:18.731851 kubelet[2637]: I0706 23:54:18.731654 2637 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:54:18.731851 kubelet[2637]: I0706 23:54:18.731852 2637 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:54:18.732060 kubelet[2637]: I0706 23:54:18.731866 2637 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:54:18.732060 kubelet[2637]: I0706 23:54:18.731895 2637 policy_none.go:49] "None policy: Start" Jul 6 23:54:18.732583 kubelet[2637]: I0706 23:54:18.732552 2637 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:54:18.732583 kubelet[2637]: I0706 23:54:18.732578 2637 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:54:18.732731 kubelet[2637]: I0706 23:54:18.732718 2637 state_mem.go:75] "Updated machine memory state" Jul 6 23:54:18.734374 kubelet[2637]: I0706 23:54:18.734350 2637 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:54:18.734562 kubelet[2637]: I0706 23:54:18.734540 2637 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:54:18.734597 kubelet[2637]: I0706 23:54:18.734555 2637 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:54:18.734933 kubelet[2637]: I0706 23:54:18.734915 2637 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:54:18.793438 kubelet[2637]: E0706 23:54:18.793281 2637 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:54:18.843520 kubelet[2637]: I0706 23:54:18.842236 2637 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:54:18.848350 kubelet[2637]: I0706 23:54:18.848317 2637 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 6 23:54:18.848424 kubelet[2637]: I0706 23:54:18.848392 2637 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:54:18.862982 kubelet[2637]: I0706 23:54:18.862941 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:54:18.862982 kubelet[2637]: I0706 23:54:18.862972 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " 
pod="kube-system/kube-apiserver-localhost" Jul 6 23:54:18.862982 kubelet[2637]: I0706 23:54:18.862989 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:54:18.862982 kubelet[2637]: I0706 23:54:18.863005 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:54:18.863215 kubelet[2637]: I0706 23:54:18.863020 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:54:18.863215 kubelet[2637]: I0706 23:54:18.863034 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:54:18.863215 kubelet[2637]: I0706 23:54:18.863069 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:54:18.863215 kubelet[2637]: I0706 23:54:18.863126 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:54:18.863215 kubelet[2637]: I0706 23:54:18.863151 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:54:18.924307 sudo[2673]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:54:18.924878 sudo[2673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:54:19.094791 kubelet[2637]: E0706 23:54:19.094556 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:19.094791 kubelet[2637]: E0706 23:54:19.094593 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 
23:54:19.095329 kubelet[2637]: E0706 23:54:19.095239 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:19.428268 sudo[2673]: pam_unix(sudo:session): session closed for user root Jul 6 23:54:19.642222 kubelet[2637]: I0706 23:54:19.642149 2637 apiserver.go:52] "Watching apiserver" Jul 6 23:54:19.661512 kubelet[2637]: I0706 23:54:19.661422 2637 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:54:19.707327 kubelet[2637]: E0706 23:54:19.707191 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:19.707544 kubelet[2637]: E0706 23:54:19.707510 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:19.722225 kubelet[2637]: E0706 23:54:19.722172 2637 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:54:19.722617 kubelet[2637]: E0706 23:54:19.722437 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:19.741840 kubelet[2637]: I0706 23:54:19.741716 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7416964 podStartE2EDuration="1.7416964s" podCreationTimestamp="2025-07-06 23:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:54:19.734557172 +0000 UTC m=+1.165922202" watchObservedRunningTime="2025-07-06 23:54:19.7416964 +0000 UTC m=+1.173061430" Jul 6 23:54:19.748945 kubelet[2637]: I0706 23:54:19.748835 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.748796335 podStartE2EDuration="1.748796335s" podCreationTimestamp="2025-07-06 23:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:54:19.742038462 +0000 UTC m=+1.173403492" watchObservedRunningTime="2025-07-06 23:54:19.748796335 +0000 UTC m=+1.180161365" Jul 6 23:54:19.749219 kubelet[2637]: I0706 23:54:19.749181 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.749174254 podStartE2EDuration="2.749174254s" podCreationTimestamp="2025-07-06 23:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:54:19.74858037 +0000 UTC m=+1.179945400" watchObservedRunningTime="2025-07-06 23:54:19.749174254 +0000 UTC m=+1.180539284" Jul 6 23:54:20.707995 kubelet[2637]: E0706 23:54:20.707948 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:21.072995 kubelet[2637]: E0706 23:54:21.072854 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:21.089259 sudo[1769]: pam_unix(sudo:session): session closed for user root Jul 6 23:54:21.091945 sshd[1762]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:21.096310 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:35072.service: Deactivated successfully. Jul 6 23:54:21.099459 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:54:21.099533 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:54:21.101178 systemd-logind[1550]: Removed session 7. Jul 6 23:54:23.163489 kubelet[2637]: I0706 23:54:23.163451 2637 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:54:23.163994 containerd[1578]: time="2025-07-06T23:54:23.163830249Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:54:23.164286 kubelet[2637]: I0706 23:54:23.163995 2637 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:54:23.989395 kubelet[2637]: I0706 23:54:23.989344 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-bpf-maps\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989395 kubelet[2637]: I0706 23:54:23.989390 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-config-path\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989395 kubelet[2637]: I0706 23:54:23.989412 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-etc-cni-netd\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989395 kubelet[2637]: I0706 23:54:23.989427 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-lib-modules\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989729 kubelet[2637]: I0706 23:54:23.989444 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e44c741-3190-4058-ae1f-c21ff7b892a5-lib-modules\") pod \"kube-proxy-k6w6h\" (UID: \"2e44c741-3190-4058-ae1f-c21ff7b892a5\") " pod="kube-system/kube-proxy-k6w6h" Jul 6 23:54:23.989729 kubelet[2637]: I0706 23:54:23.989467 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-hostproc\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989729 kubelet[2637]: I0706 23:54:23.989527 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-xtables-lock\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989729 kubelet[2637]: I0706 23:54:23.989571 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-hubble-tls\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989729 kubelet[2637]: I0706 23:54:23.989589 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e44c741-3190-4058-ae1f-c21ff7b892a5-xtables-lock\") pod \"kube-proxy-k6w6h\" (UID: \"2e44c741-3190-4058-ae1f-c21ff7b892a5\") " pod="kube-system/kube-proxy-k6w6h" Jul 6 23:54:23.989729 kubelet[2637]: I0706 23:54:23.989603 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-cgroup\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989969 kubelet[2637]: I0706 23:54:23.989669 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-host-proc-sys-kernel\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989969 kubelet[2637]: I0706 23:54:23.989687 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cni-path\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989969 kubelet[2637]: I0706 23:54:23.989701 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-run\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.989969 kubelet[2637]: I0706 23:54:23.989716 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2pnh\" (UniqueName: \"kubernetes.io/projected/2e44c741-3190-4058-ae1f-c21ff7b892a5-kube-api-access-x2pnh\") pod \"kube-proxy-k6w6h\" (UID: \"2e44c741-3190-4058-ae1f-c21ff7b892a5\") " pod="kube-system/kube-proxy-k6w6h" Jul 6 23:54:23.989969 kubelet[2637]: I0706 23:54:23.989733 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e44c741-3190-4058-ae1f-c21ff7b892a5-kube-proxy\") pod \"kube-proxy-k6w6h\" (UID: \"2e44c741-3190-4058-ae1f-c21ff7b892a5\") " pod="kube-system/kube-proxy-k6w6h" Jul 6 23:54:23.990210 kubelet[2637]: I0706 23:54:23.989748 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-clustermesh-secrets\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " 
pod="kube-system/cilium-26bg4" Jul 6 23:54:23.990210 kubelet[2637]: I0706 23:54:23.989774 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-host-proc-sys-net\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:23.990210 kubelet[2637]: I0706 23:54:23.989788 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbmpb\" (UniqueName: \"kubernetes.io/projected/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-kube-api-access-nbmpb\") pod \"cilium-26bg4\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " pod="kube-system/cilium-26bg4" Jul 6 23:54:24.090214 kubelet[2637]: I0706 23:54:24.090155 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfnjt\" (UniqueName: \"kubernetes.io/projected/c4b439a6-370f-4dab-b1d3-dda12b3cb8b6-kube-api-access-nfnjt\") pod \"cilium-operator-5d85765b45-njsrv\" (UID: \"c4b439a6-370f-4dab-b1d3-dda12b3cb8b6\") " pod="kube-system/cilium-operator-5d85765b45-njsrv" Jul 6 23:54:24.090346 kubelet[2637]: I0706 23:54:24.090305 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4b439a6-370f-4dab-b1d3-dda12b3cb8b6-cilium-config-path\") pod \"cilium-operator-5d85765b45-njsrv\" (UID: \"c4b439a6-370f-4dab-b1d3-dda12b3cb8b6\") " pod="kube-system/cilium-operator-5d85765b45-njsrv" Jul 6 23:54:24.209533 kubelet[2637]: E0706 23:54:24.209492 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:24.210179 containerd[1578]: time="2025-07-06T23:54:24.210142445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k6w6h,Uid:2e44c741-3190-4058-ae1f-c21ff7b892a5,Namespace:kube-system,Attempt:0,}" Jul 6 23:54:24.214251 kubelet[2637]: E0706 23:54:24.214221 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:24.214847 containerd[1578]: time="2025-07-06T23:54:24.214783336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-26bg4,Uid:a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5,Namespace:kube-system,Attempt:0,}" Jul 6 23:54:24.242553 containerd[1578]: time="2025-07-06T23:54:24.242361282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:54:24.242553 containerd[1578]: time="2025-07-06T23:54:24.242502322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:54:24.242553 containerd[1578]: time="2025-07-06T23:54:24.242540436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:24.242739 containerd[1578]: time="2025-07-06T23:54:24.242689542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:24.253138 containerd[1578]: time="2025-07-06T23:54:24.253029883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:54:24.253138 containerd[1578]: time="2025-07-06T23:54:24.253079598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:54:24.253138 containerd[1578]: time="2025-07-06T23:54:24.253089486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:24.253398 containerd[1578]: time="2025-07-06T23:54:24.253168448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:24.334104 containerd[1578]: time="2025-07-06T23:54:24.334056312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-26bg4,Uid:a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\"" Jul 6 23:54:24.334259 containerd[1578]: time="2025-07-06T23:54:24.334137347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k6w6h,Uid:2e44c741-3190-4058-ae1f-c21ff7b892a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"27c1f89e2d7565bb5ba5fc41f6061a873490fabe5f4d56b42a804502d4fa123c\"" Jul 6 23:54:24.334937 kubelet[2637]: E0706 23:54:24.334916 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:24.336926 kubelet[2637]: E0706 23:54:24.336868 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:24.338269 containerd[1578]: time="2025-07-06T23:54:24.338243805Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:54:24.338571 containerd[1578]: time="2025-07-06T23:54:24.338482211Z" level=info msg="CreateContainer within sandbox \"27c1f89e2d7565bb5ba5fc41f6061a873490fabe5f4d56b42a804502d4fa123c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:54:24.358784 containerd[1578]: time="2025-07-06T23:54:24.358742137Z" level=info msg="CreateContainer within sandbox \"27c1f89e2d7565bb5ba5fc41f6061a873490fabe5f4d56b42a804502d4fa123c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8024e686702b437b2b552617a13a41edf98f4b0c814b15f1161c67e4f5b9c914\"" Jul 6 23:54:24.359588 containerd[1578]: time="2025-07-06T23:54:24.359551498Z" level=info msg="StartContainer for \"8024e686702b437b2b552617a13a41edf98f4b0c814b15f1161c67e4f5b9c914\"" Jul 6 23:54:24.371295 kubelet[2637]: E0706 23:54:24.371269 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:24.372484 containerd[1578]: time="2025-07-06T23:54:24.372040115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-njsrv,Uid:c4b439a6-370f-4dab-b1d3-dda12b3cb8b6,Namespace:kube-system,Attempt:0,}" Jul 6 23:54:24.399730 containerd[1578]: time="2025-07-06T23:54:24.399559079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:54:24.399730 containerd[1578]: time="2025-07-06T23:54:24.399619715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:54:24.399730 containerd[1578]: time="2025-07-06T23:54:24.399644662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:24.400304 containerd[1578]: time="2025-07-06T23:54:24.400103752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:24.438392 containerd[1578]: time="2025-07-06T23:54:24.438342444Z" level=info msg="StartContainer for \"8024e686702b437b2b552617a13a41edf98f4b0c814b15f1161c67e4f5b9c914\" returns successfully" Jul 6 23:54:24.460671 containerd[1578]: time="2025-07-06T23:54:24.460613333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-njsrv,Uid:c4b439a6-370f-4dab-b1d3-dda12b3cb8b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0449aa61c48b512b479f6bef3ac8a316b63ca1f735fd4c4df27775abc6a57775\"" Jul 6 23:54:24.463322 kubelet[2637]: E0706 23:54:24.463281 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:24.716358 kubelet[2637]: E0706 23:54:24.716176 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:25.174180 kubelet[2637]: E0706 23:54:25.171277 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:25.185312 kubelet[2637]: I0706 23:54:25.185247 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k6w6h" podStartSLOduration=2.185225397 podStartE2EDuration="2.185225397s" podCreationTimestamp="2025-07-06 23:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:54:24.729228253 +0000 UTC m=+6.160593303" watchObservedRunningTime="2025-07-06 23:54:25.185225397 +0000 UTC m=+6.616590427" Jul 6 23:54:25.721023 kubelet[2637]: E0706 23:54:25.720986 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:27.763399 kubelet[2637]: E0706 23:54:27.763353 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:28.085942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116352321.mount: Deactivated successfully. 
Jul 6 23:54:28.726367 kubelet[2637]: E0706 23:54:28.726322 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:30.192157 containerd[1578]: time="2025-07-06T23:54:30.192090316Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:30.192882 containerd[1578]: time="2025-07-06T23:54:30.192806579Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 6 23:54:30.194097 containerd[1578]: time="2025-07-06T23:54:30.194051598Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:30.195798 containerd[1578]: time="2025-07-06T23:54:30.195761952Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.85748287s" Jul 6 23:54:30.195864 containerd[1578]: time="2025-07-06T23:54:30.195797069Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 6 23:54:30.199666 containerd[1578]: time="2025-07-06T23:54:30.199620265Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:54:30.207051 containerd[1578]: time="2025-07-06T23:54:30.207018913Z" level=info msg="CreateContainer within sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:54:30.218575 containerd[1578]: time="2025-07-06T23:54:30.218521341Z" level=info msg="CreateContainer within sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\"" Jul 6 23:54:30.221880 containerd[1578]: time="2025-07-06T23:54:30.221839415Z" level=info msg="StartContainer for \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\"" Jul 6 23:54:30.280983 containerd[1578]: time="2025-07-06T23:54:30.280926471Z" level=info msg="StartContainer for \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\" returns successfully" Jul 6 23:54:30.732132 kubelet[2637]: E0706 23:54:30.732081 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:31.215153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487-rootfs.mount: Deactivated successfully. 
Jul 6 23:54:31.219214 kubelet[2637]: E0706 23:54:31.219189 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:31.418278 containerd[1578]: time="2025-07-06T23:54:31.418182312Z" level=info msg="shim disconnected" id=0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487 namespace=k8s.io Jul 6 23:54:31.418278 containerd[1578]: time="2025-07-06T23:54:31.418267043Z" level=warning msg="cleaning up after shim disconnected" id=0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487 namespace=k8s.io Jul 6 23:54:31.418278 containerd[1578]: time="2025-07-06T23:54:31.418294666Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:54:31.735487 kubelet[2637]: E0706 23:54:31.735452 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:31.737519 containerd[1578]: time="2025-07-06T23:54:31.737439172Z" level=info msg="CreateContainer within sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:54:31.791811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556436939.mount: Deactivated successfully. Jul 6 23:54:31.794038 containerd[1578]: time="2025-07-06T23:54:31.793988652Z" level=info msg="CreateContainer within sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\"" Jul 6 23:54:31.794399 containerd[1578]: time="2025-07-06T23:54:31.794350741Z" level=info msg="StartContainer for \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\"" Jul 6 23:54:31.855128 containerd[1578]: time="2025-07-06T23:54:31.855055671Z" level=info msg="StartContainer for \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\" returns successfully" Jul 6 23:54:31.868461 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:54:31.869175 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:54:31.869270 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:54:31.876208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:54:32.051726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:54:32.156167 containerd[1578]: time="2025-07-06T23:54:32.156091530Z" level=info msg="shim disconnected" id=966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec namespace=k8s.io Jul 6 23:54:32.156167 containerd[1578]: time="2025-07-06T23:54:32.156158698Z" level=warning msg="cleaning up after shim disconnected" id=966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec namespace=k8s.io Jul 6 23:54:32.156167 containerd[1578]: time="2025-07-06T23:54:32.156167274Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:54:32.215501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec-rootfs.mount: Deactivated successfully. Jul 6 23:54:32.401030 systemd-resolved[1457]: Under memory pressure, flushing caches. Jul 6 23:54:32.401095 systemd-resolved[1457]: Flushed all caches. 
Jul 6 23:54:32.402851 systemd-journald[1154]: Under memory pressure, flushing caches. Jul 6 23:54:32.741836 kubelet[2637]: E0706 23:54:32.740060 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:32.742457 containerd[1578]: time="2025-07-06T23:54:32.742293001Z" level=info msg="CreateContainer within sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:54:32.809372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678287135.mount: Deactivated successfully. Jul 6 23:54:32.834085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52875613.mount: Deactivated successfully. Jul 6 23:54:32.837350 containerd[1578]: time="2025-07-06T23:54:32.837305723Z" level=info msg="CreateContainer within sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\"" Jul 6 23:54:32.839091 containerd[1578]: time="2025-07-06T23:54:32.838151038Z" level=info msg="StartContainer for \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\"" Jul 6 23:54:32.906724 containerd[1578]: time="2025-07-06T23:54:32.906417524Z" level=info msg="StartContainer for \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\" returns successfully" Jul 6 23:54:32.969297 containerd[1578]: time="2025-07-06T23:54:32.969213669Z" level=info msg="shim disconnected" id=487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b namespace=k8s.io Jul 6 23:54:32.969854 containerd[1578]: time="2025-07-06T23:54:32.969626694Z" level=warning msg="cleaning up after shim disconnected" id=487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b namespace=k8s.io Jul 6 23:54:32.969854 containerd[1578]: time="2025-07-06T23:54:32.969645980Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:54:33.147405 containerd[1578]: time="2025-07-06T23:54:33.147260627Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:33.148193 containerd[1578]: time="2025-07-06T23:54:33.148097716Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 6 23:54:33.149325 containerd[1578]: time="2025-07-06T23:54:33.149278697Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:54:33.150615 containerd[1578]: time="2025-07-06T23:54:33.150570248Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.950907242s" Jul 6 23:54:33.150615 containerd[1578]: time="2025-07-06T23:54:33.150606728Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 6 23:54:33.154521 containerd[1578]: time="2025-07-06T23:54:33.154477815Z" level=info msg="CreateContainer within sandbox \"0449aa61c48b512b479f6bef3ac8a316b63ca1f735fd4c4df27775abc6a57775\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:54:33.166204 containerd[1578]: time="2025-07-06T23:54:33.166159577Z" level=info msg="CreateContainer within sandbox \"0449aa61c48b512b479f6bef3ac8a316b63ca1f735fd4c4df27775abc6a57775\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\"" Jul 6 23:54:33.166795 containerd[1578]: time="2025-07-06T23:54:33.166717847Z" level=info msg="StartContainer for \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\"" Jul 6 23:54:33.356142 containerd[1578]: time="2025-07-06T23:54:33.356021934Z" level=info msg="StartContainer for \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\" returns successfully" Jul 6 23:54:33.749088 kubelet[2637]: E0706 23:54:33.749032 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:33.753913 kubelet[2637]: E0706 23:54:33.753198 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:33.757499 containerd[1578]: time="2025-07-06T23:54:33.757443198Z" level=info msg="CreateContainer within sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:54:33.804581 containerd[1578]: time="2025-07-06T23:54:33.804521134Z" level=info msg="CreateContainer within sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3\"" Jul 6 23:54:33.805640 containerd[1578]: time="2025-07-06T23:54:33.805606785Z" level=info msg="StartContainer for \"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3\"" Jul 6 23:54:33.811644 kubelet[2637]: I0706 23:54:33.811584 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-njsrv" podStartSLOduration=1.124469413 podStartE2EDuration="9.811561417s" podCreationTimestamp="2025-07-06 23:54:24 +0000 UTC" firstStartedPulling="2025-07-06 23:54:24.464219051 +0000 UTC m=+5.895584071" lastFinishedPulling="2025-07-06 23:54:33.151311045 +0000 UTC m=+14.582676075" observedRunningTime="2025-07-06 23:54:33.785538214 +0000 UTC m=+15.216903244" watchObservedRunningTime="2025-07-06 23:54:33.811561417 +0000 UTC m=+15.242926447" Jul 6 23:54:33.906839 containerd[1578]: time="2025-07-06T23:54:33.906715243Z" level=info msg="StartContainer for \"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3\" returns successfully" Jul 6 23:54:33.927833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3-rootfs.mount: Deactivated successfully. 
Jul 6 23:54:33.935729 containerd[1578]: time="2025-07-06T23:54:33.935559590Z" level=info msg="shim disconnected" id=88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3 namespace=k8s.io Jul 6 23:54:33.935994 containerd[1578]: time="2025-07-06T23:54:33.935733300Z" level=warning msg="cleaning up after shim disconnected" id=88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3 namespace=k8s.io Jul 6 23:54:33.935994 containerd[1578]: time="2025-07-06T23:54:33.935881432Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:54:34.346041 update_engine[1558]: I20250706 23:54:34.345907 1558 update_attempter.cc:509] Updating boot flags... Jul 6 23:54:34.382920 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3334) Jul 6 23:54:34.426093 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3336) Jul 6 23:54:34.464859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3336) Jul 6 23:54:34.799170 kubelet[2637]: E0706 23:54:34.798287 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:34.799170 kubelet[2637]: E0706 23:54:34.798480 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:34.801857 containerd[1578]: time="2025-07-06T23:54:34.801390278Z" level=info msg="CreateContainer within sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:54:34.819234 containerd[1578]: time="2025-07-06T23:54:34.819173797Z" level=info msg="CreateContainer within sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\"" Jul 6 23:54:34.819839 containerd[1578]: time="2025-07-06T23:54:34.819780507Z" level=info msg="StartContainer for \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\"" Jul 6 23:54:34.882253 containerd[1578]: time="2025-07-06T23:54:34.882192519Z" level=info msg="StartContainer for \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\" returns successfully" Jul 6 23:54:35.001451 kubelet[2637]: I0706 23:54:35.001398 2637 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:54:35.162579 kubelet[2637]: I0706 23:54:35.162444 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6209bbd-4212-492c-8ec6-de952be34f46-config-volume\") pod \"coredns-7c65d6cfc9-9bhtm\" (UID: \"d6209bbd-4212-492c-8ec6-de952be34f46\") " pod="kube-system/coredns-7c65d6cfc9-9bhtm" Jul 6 23:54:35.162579 kubelet[2637]: I0706 23:54:35.162487 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5859x\" (UniqueName: \"kubernetes.io/projected/d6209bbd-4212-492c-8ec6-de952be34f46-kube-api-access-5859x\") pod \"coredns-7c65d6cfc9-9bhtm\" (UID: \"d6209bbd-4212-492c-8ec6-de952be34f46\") " pod="kube-system/coredns-7c65d6cfc9-9bhtm" Jul 6 23:54:35.162579 kubelet[2637]: I0706 23:54:35.162510 2637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e7bf3e7-e325-41f5-9a61-b1420ba59616-config-volume\") pod \"coredns-7c65d6cfc9-hppvx\" (UID: \"9e7bf3e7-e325-41f5-9a61-b1420ba59616\") " pod="kube-system/coredns-7c65d6cfc9-hppvx" Jul 6 23:54:35.162579 kubelet[2637]: I0706 23:54:35.162527 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q7qj\" (UniqueName: \"kubernetes.io/projected/9e7bf3e7-e325-41f5-9a61-b1420ba59616-kube-api-access-9q7qj\") pod \"coredns-7c65d6cfc9-hppvx\" (UID: \"9e7bf3e7-e325-41f5-9a61-b1420ba59616\") " pod="kube-system/coredns-7c65d6cfc9-hppvx" Jul 6 23:54:35.351486 kubelet[2637]: E0706 23:54:35.351447 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:35.352391 kubelet[2637]: E0706 23:54:35.351962 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:35.352490 containerd[1578]: time="2025-07-06T23:54:35.352155505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9bhtm,Uid:d6209bbd-4212-492c-8ec6-de952be34f46,Namespace:kube-system,Attempt:0,}" Jul 6 23:54:35.352674 containerd[1578]: time="2025-07-06T23:54:35.352631527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hppvx,Uid:9e7bf3e7-e325-41f5-9a61-b1420ba59616,Namespace:kube-system,Attempt:0,}" Jul 6 23:54:35.804127 kubelet[2637]: E0706 23:54:35.803448 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:36.805404 kubelet[2637]: E0706 23:54:36.805344 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:37.076632 systemd-networkd[1238]: cilium_host: Link UP Jul 6 23:54:37.078155 systemd-networkd[1238]: cilium_net: Link UP Jul 6 23:54:37.079120 systemd-networkd[1238]: cilium_net: Gained carrier Jul 6 23:54:37.081339 systemd-networkd[1238]: cilium_host: Gained carrier Jul 6 23:54:37.083212 systemd-networkd[1238]: cilium_net: Gained IPv6LL Jul 6 23:54:37.084916 systemd-networkd[1238]: cilium_host: Gained IPv6LL Jul 6 23:54:37.193613 systemd-networkd[1238]: cilium_vxlan: Link UP Jul 6 23:54:37.193627 systemd-networkd[1238]: cilium_vxlan: Gained carrier Jul 6 23:54:37.410855 kernel: NET: Registered PF_ALG protocol family Jul 6 23:54:37.806702 kubelet[2637]: E0706 23:54:37.806661 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:38.120851 systemd-networkd[1238]: lxc_health: Link UP Jul 6 23:54:38.130106 systemd-networkd[1238]: lxc_health: Gained carrier Jul 6 23:54:38.231155 kubelet[2637]: I0706 23:54:38.231072 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-26bg4" podStartSLOduration=9.370012803 podStartE2EDuration="15.231053871s" podCreationTimestamp="2025-07-06 23:54:23 +0000 UTC" firstStartedPulling="2025-07-06 23:54:24.337721444 +0000 UTC m=+5.769086474" lastFinishedPulling="2025-07-06 
23:54:30.198762502 +0000 UTC m=+11.630127542" observedRunningTime="2025-07-06 23:54:35.818300676 +0000 UTC m=+17.249665717" watchObservedRunningTime="2025-07-06 23:54:38.231053871 +0000 UTC m=+19.662418901" Jul 6 23:54:38.436220 systemd-networkd[1238]: lxcd21e528a8a9b: Link UP Jul 6 23:54:38.448879 kernel: eth0: renamed from tmp746c0 Jul 6 23:54:38.454998 systemd-networkd[1238]: lxcd21e528a8a9b: Gained carrier Jul 6 23:54:38.471505 systemd-networkd[1238]: lxcb0e83c0c46c1: Link UP Jul 6 23:54:38.479948 kernel: eth0: renamed from tmpbf579 Jul 6 23:54:38.483704 systemd-networkd[1238]: lxcb0e83c0c46c1: Gained carrier Jul 6 23:54:38.609083 systemd-networkd[1238]: cilium_vxlan: Gained IPv6LL Jul 6 23:54:38.808260 kubelet[2637]: E0706 23:54:38.808212 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:39.810935 kubelet[2637]: E0706 23:54:39.810894 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:40.017059 systemd-networkd[1238]: lxc_health: Gained IPv6LL Jul 6 23:54:40.273001 systemd-networkd[1238]: lxcd21e528a8a9b: Gained IPv6LL Jul 6 23:54:40.465101 systemd-networkd[1238]: lxcb0e83c0c46c1: Gained IPv6LL Jul 6 23:54:40.812611 kubelet[2637]: E0706 23:54:40.812540 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:42.013200 containerd[1578]: time="2025-07-06T23:54:42.013103081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:54:42.013200 containerd[1578]: time="2025-07-06T23:54:42.013155769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:54:42.013200 containerd[1578]: time="2025-07-06T23:54:42.013170497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:42.014797 containerd[1578]: time="2025-07-06T23:54:42.014480701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:42.016798 containerd[1578]: time="2025-07-06T23:54:42.016706583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:54:42.017210 containerd[1578]: time="2025-07-06T23:54:42.016781505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:54:42.017210 containerd[1578]: time="2025-07-06T23:54:42.017116398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:42.018533 containerd[1578]: time="2025-07-06T23:54:42.018434436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:54:42.048393 systemd-resolved[1457]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:54:42.049300 systemd-resolved[1457]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:54:42.079005 containerd[1578]: time="2025-07-06T23:54:42.078959791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hppvx,Uid:9e7bf3e7-e325-41f5-9a61-b1420ba59616,Namespace:kube-system,Attempt:0,} returns sandbox id \"746c0b06c1198f4d5caeb1d3d7feb6baa45918cc667bcdb19d9fcedb78041671\"" Jul 6 23:54:42.079993 kubelet[2637]: E0706 23:54:42.079569 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:42.082869 containerd[1578]: time="2025-07-06T23:54:42.082628156Z" level=info msg="CreateContainer within sandbox \"746c0b06c1198f4d5caeb1d3d7feb6baa45918cc667bcdb19d9fcedb78041671\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:54:42.084369 containerd[1578]: time="2025-07-06T23:54:42.084309031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9bhtm,Uid:d6209bbd-4212-492c-8ec6-de952be34f46,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf57999ce47163529da381319ac823228d677245f583b1ac5981ce6d7cd01feb\"" Jul 6 23:54:42.085199 kubelet[2637]: E0706 23:54:42.085086 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:42.087278 containerd[1578]: time="2025-07-06T23:54:42.087253129Z" level=info msg="CreateContainer within sandbox \"bf57999ce47163529da381319ac823228d677245f583b1ac5981ce6d7cd01feb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:54:42.779942 containerd[1578]: time="2025-07-06T23:54:42.779855325Z" level=info msg="CreateContainer within sandbox \"746c0b06c1198f4d5caeb1d3d7feb6baa45918cc667bcdb19d9fcedb78041671\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2928d864bc1d46b55a058c8d981171f340ad325d51d776b31e77f046615ddf1f\"" Jul 6 23:54:42.780624 containerd[1578]: time="2025-07-06T23:54:42.780568912Z" level=info msg="StartContainer for \"2928d864bc1d46b55a058c8d981171f340ad325d51d776b31e77f046615ddf1f\"" Jul 6 23:54:42.783992 containerd[1578]: time="2025-07-06T23:54:42.783930429Z" level=info msg="CreateContainer within sandbox \"bf57999ce47163529da381319ac823228d677245f583b1ac5981ce6d7cd01feb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b33de7bf4b65fffb0634546807e3725bc065d26cf7f8ac1512fcf2c8f1cb0b05\"" Jul 6 23:54:42.784752 containerd[1578]: time="2025-07-06T23:54:42.784591887Z" level=info msg="StartContainer for \"b33de7bf4b65fffb0634546807e3725bc065d26cf7f8ac1512fcf2c8f1cb0b05\"" Jul 6 23:54:42.846997 containerd[1578]: time="2025-07-06T23:54:42.846938511Z" level=info msg="StartContainer for \"b33de7bf4b65fffb0634546807e3725bc065d26cf7f8ac1512fcf2c8f1cb0b05\" returns successfully" Jul 6 23:54:42.850963 containerd[1578]: time="2025-07-06T23:54:42.850906001Z" level=info msg="StartContainer for \"2928d864bc1d46b55a058c8d981171f340ad325d51d776b31e77f046615ddf1f\" returns successfully" Jul 6 23:54:42.993222 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:47788.service - OpenSSH per-connection server daemon 
(10.0.0.1:47788). Jul 6 23:54:43.034002 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 47788 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:54:43.035837 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:43.040612 systemd-logind[1550]: New session 8 of user core. Jul 6 23:54:43.054171 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:54:43.508060 sshd[4019]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:43.512177 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:47788.service: Deactivated successfully. Jul 6 23:54:43.514756 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:54:43.515731 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:54:43.516607 systemd-logind[1550]: Removed session 8. Jul 6 23:54:43.822334 kubelet[2637]: E0706 23:54:43.822045 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:43.824199 kubelet[2637]: E0706 23:54:43.824157 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:43.894199 kubelet[2637]: I0706 23:54:43.894128 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9bhtm" podStartSLOduration=19.8941143 podStartE2EDuration="19.8941143s" podCreationTimestamp="2025-07-06 23:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:54:43.893758898 +0000 UTC m=+25.325123928" watchObservedRunningTime="2025-07-06 23:54:43.8941143 +0000 UTC m=+25.325479330" Jul 6 23:54:44.125501 kubelet[2637]: I0706 23:54:44.125328 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hppvx" podStartSLOduration=20.125310082 podStartE2EDuration="20.125310082s" podCreationTimestamp="2025-07-06 23:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:54:44.124959401 +0000 UTC m=+25.556324431" watchObservedRunningTime="2025-07-06 23:54:44.125310082 +0000 UTC m=+25.556675112" Jul 6 23:54:44.825970 kubelet[2637]: E0706 23:54:44.825936 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:44.825970 kubelet[2637]: E0706 23:54:44.825941 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:45.827702 kubelet[2637]: E0706 23:54:45.827668 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:45.828205 kubelet[2637]: E0706 23:54:45.827895 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:54:48.521137 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:47802.service - OpenSSH per-connection server daemon (10.0.0.1:47802). 
Jul 6 23:54:48.552904 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 47802 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:54:48.554858 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:48.559513 systemd-logind[1550]: New session 9 of user core. Jul 6 23:54:48.578342 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:54:48.713959 sshd[4044]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:48.719143 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:47802.service: Deactivated successfully. Jul 6 23:54:48.721934 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:54:48.722771 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:54:48.723851 systemd-logind[1550]: Removed session 9. Jul 6 23:54:53.725048 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:60568.service - OpenSSH per-connection server daemon (10.0.0.1:60568). Jul 6 23:54:53.757712 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 60568 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:54:53.759488 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:53.763675 systemd-logind[1550]: New session 10 of user core. Jul 6 23:54:53.770096 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:54:53.880629 sshd[4060]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:53.884914 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:60568.service: Deactivated successfully. Jul 6 23:54:53.888148 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:54:53.888927 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:54:53.889786 systemd-logind[1550]: Removed session 10. Jul 6 23:54:58.895098 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:60572.service - OpenSSH per-connection server daemon (10.0.0.1:60572). Jul 6 23:54:58.925455 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 60572 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:54:58.927327 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:58.931350 systemd-logind[1550]: New session 11 of user core. Jul 6 23:54:58.941084 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:54:59.046608 sshd[4078]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:59.056090 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:60574.service - OpenSSH per-connection server daemon (10.0.0.1:60574). Jul 6 23:54:59.056912 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:60572.service: Deactivated successfully. Jul 6 23:54:59.059047 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:54:59.060813 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:54:59.061862 systemd-logind[1550]: Removed session 11. Jul 6 23:54:59.083492 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 60574 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:54:59.084951 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:59.088938 systemd-logind[1550]: New session 12 of user core. Jul 6 23:54:59.097087 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 6 23:54:59.237290 sshd[4093]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:59.249346 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:60588.service - OpenSSH per-connection server daemon (10.0.0.1:60588). Jul 6 23:54:59.250411 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:60574.service: Deactivated successfully. Jul 6 23:54:59.256407 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:54:59.258639 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:54:59.260164 systemd-logind[1550]: Removed session 12. Jul 6 23:54:59.287099 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 60588 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:54:59.288624 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:54:59.292770 systemd-logind[1550]: New session 13 of user core. Jul 6 23:54:59.310142 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:54:59.427996 sshd[4106]: pam_unix(sshd:session): session closed for user core Jul 6 23:54:59.431678 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:60588.service: Deactivated successfully. Jul 6 23:54:59.434157 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:54:59.434276 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:54:59.435235 systemd-logind[1550]: Removed session 13. Jul 6 23:55:04.439056 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:36304.service - OpenSSH per-connection server daemon (10.0.0.1:36304). Jul 6 23:55:04.468611 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 36304 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:04.470398 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:04.474605 systemd-logind[1550]: New session 14 of user core. Jul 6 23:55:04.485111 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:55:04.596064 sshd[4125]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:04.600907 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:36304.service: Deactivated successfully. Jul 6 23:55:04.604063 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:55:04.605041 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:55:04.606092 systemd-logind[1550]: Removed session 14. Jul 6 23:55:09.610186 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:53420.service - OpenSSH per-connection server daemon (10.0.0.1:53420). Jul 6 23:55:09.639924 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 53420 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:09.641960 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:09.646402 systemd-logind[1550]: New session 15 of user core. Jul 6 23:55:09.658090 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:55:09.769463 sshd[4141]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:09.774036 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:53420.service: Deactivated successfully. Jul 6 23:55:09.776681 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:55:09.777582 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:55:09.778639 systemd-logind[1550]: Removed session 15. Jul 6 23:55:14.789198 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:53428.service - OpenSSH per-connection server daemon (10.0.0.1:53428). 
Jul 6 23:55:14.821180 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 53428 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:14.823375 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:14.828514 systemd-logind[1550]: New session 16 of user core. Jul 6 23:55:14.838317 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:55:14.951968 sshd[4157]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:14.967154 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:53442.service - OpenSSH per-connection server daemon (10.0.0.1:53442). Jul 6 23:55:14.967775 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:53428.service: Deactivated successfully. Jul 6 23:55:14.970095 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:55:14.972082 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:55:14.973122 systemd-logind[1550]: Removed session 16. Jul 6 23:55:14.997163 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 53442 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:14.998973 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:15.003680 systemd-logind[1550]: New session 17 of user core. Jul 6 23:55:15.017127 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:55:15.241782 sshd[4169]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:15.256118 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:53452.service - OpenSSH per-connection server daemon (10.0.0.1:53452). Jul 6 23:55:15.256614 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:53442.service: Deactivated successfully. Jul 6 23:55:15.259488 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:55:15.261513 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:55:15.263005 systemd-logind[1550]: Removed session 17. Jul 6 23:55:15.288436 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 53452 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:15.290032 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:15.294176 systemd-logind[1550]: New session 18 of user core. Jul 6 23:55:15.308155 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:55:16.630936 sshd[4182]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:16.642307 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:53460.service - OpenSSH per-connection server daemon (10.0.0.1:53460). Jul 6 23:55:16.643110 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:53452.service: Deactivated successfully. Jul 6 23:55:16.648330 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:55:16.650922 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:55:16.654085 systemd-logind[1550]: Removed session 18. Jul 6 23:55:16.677899 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 53460 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:16.679503 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:16.683684 systemd-logind[1550]: New session 19 of user core. Jul 6 23:55:16.693285 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 6 23:55:17.026112 sshd[4200]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:17.040201 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:53464.service - OpenSSH per-connection server daemon (10.0.0.1:53464). Jul 6 23:55:17.041031 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:53460.service: Deactivated successfully. Jul 6 23:55:17.045188 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:55:17.047213 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:55:17.049809 systemd-logind[1550]: Removed session 19. Jul 6 23:55:17.070977 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 53464 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:17.072707 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:17.077051 systemd-logind[1550]: New session 20 of user core. Jul 6 23:55:17.089103 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:55:17.309079 sshd[4216]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:17.313033 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:53464.service: Deactivated successfully. Jul 6 23:55:17.315629 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:55:17.316471 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:55:17.317370 systemd-logind[1550]: Removed session 20. Jul 6 23:55:22.323051 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:54094.service - OpenSSH per-connection server daemon (10.0.0.1:54094). Jul 6 23:55:22.351215 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 54094 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:22.353051 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:22.357055 systemd-logind[1550]: New session 21 of user core. Jul 6 23:55:22.368088 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:55:22.472989 sshd[4236]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:22.477521 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:54094.service: Deactivated successfully. Jul 6 23:55:22.480484 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:55:22.480609 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:55:22.481665 systemd-logind[1550]: Removed session 21. Jul 6 23:55:27.484107 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:54108.service - OpenSSH per-connection server daemon (10.0.0.1:54108). Jul 6 23:55:27.512676 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 54108 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:27.514209 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:27.518138 systemd-logind[1550]: New session 22 of user core. Jul 6 23:55:27.528119 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:55:27.673884 sshd[4256]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:27.679218 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:54108.service: Deactivated successfully. Jul 6 23:55:27.681787 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:55:27.681942 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:55:27.682775 systemd-logind[1550]: Removed session 22. 
Jul 6 23:55:29.686924 kubelet[2637]: E0706 23:55:29.686797 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:32.695280 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:57992.service - OpenSSH per-connection server daemon (10.0.0.1:57992). Jul 6 23:55:32.725542 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 57992 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:32.727664 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:32.732707 systemd-logind[1550]: New session 23 of user core. Jul 6 23:55:32.741252 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:55:32.856263 sshd[4272]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:32.861440 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:57992.service: Deactivated successfully. Jul 6 23:55:32.864214 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:55:32.864945 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:55:32.866205 systemd-logind[1550]: Removed session 23. Jul 6 23:55:37.867076 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:58006.service - OpenSSH per-connection server daemon (10.0.0.1:58006). Jul 6 23:55:37.896767 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 58006 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:37.898358 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:37.902973 systemd-logind[1550]: New session 24 of user core. Jul 6 23:55:37.914124 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:55:38.055313 sshd[4288]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:38.062079 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:58008.service - OpenSSH per-connection server daemon (10.0.0.1:58008). Jul 6 23:55:38.062725 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:58006.service: Deactivated successfully. Jul 6 23:55:38.065903 systemd-logind[1550]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:55:38.066591 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:55:38.067376 systemd-logind[1550]: Removed session 24. Jul 6 23:55:38.092125 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 58008 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:38.093952 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:38.097912 systemd-logind[1550]: New session 25 of user core. Jul 6 23:55:38.108072 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 6 23:55:39.687343 kubelet[2637]: E0706 23:55:39.687278 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:39.687343 kubelet[2637]: E0706 23:55:39.687328 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:39.884082 containerd[1578]: time="2025-07-06T23:55:39.883994174Z" level=info msg="StopContainer for \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\" with timeout 30 (s)" Jul 6 23:55:39.884673 containerd[1578]: time="2025-07-06T23:55:39.884384191Z" level=info msg="Stop container \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\" with signal terminated" Jul 6 23:55:39.943698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492-rootfs.mount: Deactivated successfully. Jul 6 23:55:40.106593 containerd[1578]: time="2025-07-06T23:55:40.105855967Z" level=info msg="shim disconnected" id=92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492 namespace=k8s.io Jul 6 23:55:40.106593 containerd[1578]: time="2025-07-06T23:55:40.105928956Z" level=warning msg="cleaning up after shim disconnected" id=92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492 namespace=k8s.io Jul 6 23:55:40.106593 containerd[1578]: time="2025-07-06T23:55:40.105939437Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:55:40.111478 containerd[1578]: time="2025-07-06T23:55:40.111413190Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:55:40.113919 containerd[1578]: time="2025-07-06T23:55:40.113876033Z" level=info msg="StopContainer for \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\" with timeout 2 (s)" Jul 6 23:55:40.114270 containerd[1578]: time="2025-07-06T23:55:40.114221875Z" level=info msg="Stop container \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\" with signal terminated" Jul 6 23:55:40.121359 systemd-networkd[1238]: lxc_health: Link DOWN Jul 6 23:55:40.121372 systemd-networkd[1238]: lxc_health: Lost carrier Jul 6 23:55:40.125655 containerd[1578]: time="2025-07-06T23:55:40.125621938Z" level=info msg="StopContainer for \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\" returns successfully" Jul 6 23:55:40.130050 containerd[1578]: time="2025-07-06T23:55:40.130014233Z" level=info msg="StopPodSandbox for \"0449aa61c48b512b479f6bef3ac8a316b63ca1f735fd4c4df27775abc6a57775\"" Jul 6 23:55:40.130147 containerd[1578]: time="2025-07-06T23:55:40.130068226Z" level=info msg="Container to stop \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:55:40.132493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0449aa61c48b512b479f6bef3ac8a316b63ca1f735fd4c4df27775abc6a57775-shm.mount: Deactivated successfully. Jul 6 23:55:40.161320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0449aa61c48b512b479f6bef3ac8a316b63ca1f735fd4c4df27775abc6a57775-rootfs.mount: Deactivated successfully. 
Jul 6 23:55:40.166304 containerd[1578]: time="2025-07-06T23:55:40.166214461Z" level=info msg="shim disconnected" id=0449aa61c48b512b479f6bef3ac8a316b63ca1f735fd4c4df27775abc6a57775 namespace=k8s.io Jul 6 23:55:40.166304 containerd[1578]: time="2025-07-06T23:55:40.166292189Z" level=warning msg="cleaning up after shim disconnected" id=0449aa61c48b512b479f6bef3ac8a316b63ca1f735fd4c4df27775abc6a57775 namespace=k8s.io Jul 6 23:55:40.166304 containerd[1578]: time="2025-07-06T23:55:40.166305656Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:55:40.175793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad-rootfs.mount: Deactivated successfully. Jul 6 23:55:40.182308 containerd[1578]: time="2025-07-06T23:55:40.181731953Z" level=info msg="shim disconnected" id=77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad namespace=k8s.io Jul 6 23:55:40.182308 containerd[1578]: time="2025-07-06T23:55:40.181813950Z" level=warning msg="cleaning up after shim disconnected" id=77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad namespace=k8s.io Jul 6 23:55:40.182308 containerd[1578]: time="2025-07-06T23:55:40.181847063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:55:40.184348 containerd[1578]: time="2025-07-06T23:55:40.184304956Z" level=info msg="TearDown network for sandbox \"0449aa61c48b512b479f6bef3ac8a316b63ca1f735fd4c4df27775abc6a57775\" successfully" Jul 6 23:55:40.184348 containerd[1578]: time="2025-07-06T23:55:40.184337098Z" level=info msg="StopPodSandbox for \"0449aa61c48b512b479f6bef3ac8a316b63ca1f735fd4c4df27775abc6a57775\" returns successfully" Jul 6 23:55:40.200783 containerd[1578]: time="2025-07-06T23:55:40.200667165Z" level=info msg="StopContainer for \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\" returns successfully" Jul 6 23:55:40.201223 containerd[1578]: time="2025-07-06T23:55:40.201181068Z" level=info msg="StopPodSandbox for \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\"" Jul 6 23:55:40.201274 containerd[1578]: time="2025-07-06T23:55:40.201217799Z" level=info msg="Container to stop \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:55:40.201274 containerd[1578]: time="2025-07-06T23:55:40.201232276Z" level=info msg="Container to stop \"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:55:40.201274 containerd[1578]: time="2025-07-06T23:55:40.201241815Z" level=info msg="Container to stop \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:55:40.201274 containerd[1578]: time="2025-07-06T23:55:40.201251202Z" level=info msg="Container to stop \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:55:40.201274 containerd[1578]: time="2025-07-06T23:55:40.201260590Z" level=info msg="Container to stop \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:55:40.229688 containerd[1578]: time="2025-07-06T23:55:40.229601880Z" level=info msg="shim disconnected" id=961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc 
Jul 6 23:55:40.229688 containerd[1578]: time="2025-07-06T23:55:40.229662597Z" level=warning msg="cleaning up after shim disconnected" id=961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc namespace=k8s.io
Jul 6 23:55:40.229688 containerd[1578]: time="2025-07-06T23:55:40.229671213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:55:40.245110 containerd[1578]: time="2025-07-06T23:55:40.245030813Z" level=info msg="TearDown network for sandbox \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" successfully"
Jul 6 23:55:40.245110 containerd[1578]: time="2025-07-06T23:55:40.245080438Z" level=info msg="StopPodSandbox for \"961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc\" returns successfully"
Jul 6 23:55:40.320981 kubelet[2637]: I0706 23:55:40.320923 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfnjt\" (UniqueName: \"kubernetes.io/projected/c4b439a6-370f-4dab-b1d3-dda12b3cb8b6-kube-api-access-nfnjt\") pod \"c4b439a6-370f-4dab-b1d3-dda12b3cb8b6\" (UID: \"c4b439a6-370f-4dab-b1d3-dda12b3cb8b6\") "
Jul 6 23:55:40.320981 kubelet[2637]: I0706 23:55:40.320970 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4b439a6-370f-4dab-b1d3-dda12b3cb8b6-cilium-config-path\") pod \"c4b439a6-370f-4dab-b1d3-dda12b3cb8b6\" (UID: \"c4b439a6-370f-4dab-b1d3-dda12b3cb8b6\") "
Jul 6 23:55:40.324491 kubelet[2637]: I0706 23:55:40.324468 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b439a6-370f-4dab-b1d3-dda12b3cb8b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c4b439a6-370f-4dab-b1d3-dda12b3cb8b6" (UID: "c4b439a6-370f-4dab-b1d3-dda12b3cb8b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 6 23:55:40.324618 kubelet[2637]: I0706 23:55:40.324586 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b439a6-370f-4dab-b1d3-dda12b3cb8b6-kube-api-access-nfnjt" (OuterVolumeSpecName: "kube-api-access-nfnjt") pod "c4b439a6-370f-4dab-b1d3-dda12b3cb8b6" (UID: "c4b439a6-370f-4dab-b1d3-dda12b3cb8b6"). InnerVolumeSpecName "kube-api-access-nfnjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:55:40.422104 kubelet[2637]: I0706 23:55:40.422032 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-hubble-tls\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422104 kubelet[2637]: I0706 23:55:40.422086 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-host-proc-sys-kernel\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422248 kubelet[2637]: I0706 23:55:40.422112 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-xtables-lock\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422248 kubelet[2637]: I0706 23:55:40.422134 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-hostproc\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422248 kubelet[2637]: I0706 23:55:40.422153 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-bpf-maps\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422248 kubelet[2637]: I0706 23:55:40.422173 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cni-path\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422248 kubelet[2637]: I0706 23:55:40.422193 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-host-proc-sys-net\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422248 kubelet[2637]: I0706 23:55:40.422216 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-clustermesh-secrets\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422453 kubelet[2637]: I0706 23:55:40.422218 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:55:40.422453 kubelet[2637]: I0706 23:55:40.422239 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-config-path\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422453 kubelet[2637]: I0706 23:55:40.422259 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-cgroup\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422453 kubelet[2637]: I0706 23:55:40.422272 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:55:40.422453 kubelet[2637]: I0706 23:55:40.422276 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-etc-cni-netd\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422638 kubelet[2637]: I0706 23:55:40.422298 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:55:40.422638 kubelet[2637]: I0706 23:55:40.422323 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-hostproc" (OuterVolumeSpecName: "hostproc") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:55:40.422638 kubelet[2637]: I0706 23:55:40.422341 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:55:40.422638 kubelet[2637]: I0706 23:55:40.422316 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbmpb\" (UniqueName: \"kubernetes.io/projected/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-kube-api-access-nbmpb\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422638 kubelet[2637]: I0706 23:55:40.422368 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cni-path" (OuterVolumeSpecName: "cni-path") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:55:40.422840 kubelet[2637]: I0706 23:55:40.422384 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-lib-modules\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422840 kubelet[2637]: I0706 23:55:40.422408 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-run\") pod \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\" (UID: \"a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5\") " Jul 6 23:55:40.422840 kubelet[2637]: I0706 23:55:40.422445 2637 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.422840 kubelet[2637]: I0706 23:55:40.422460 2637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfnjt\" (UniqueName: \"kubernetes.io/projected/c4b439a6-370f-4dab-b1d3-dda12b3cb8b6-kube-api-access-nfnjt\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.422840 kubelet[2637]: I0706 23:55:40.422475 2637 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.422840 kubelet[2637]: I0706 23:55:40.422488 2637 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.422840 kubelet[2637]: I0706 23:55:40.422500 2637 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.422840 kubelet[2637]: I0706 23:55:40.422513 2637 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.423134 kubelet[2637]: I0706 23:55:40.422524 2637 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.423134 kubelet[2637]: I0706 23:55:40.422538 2637 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4b439a6-370f-4dab-b1d3-dda12b3cb8b6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.425719 kubelet[2637]: I0706 23:55:40.422388 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:55:40.425899 kubelet[2637]: I0706 23:55:40.422566 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:55:40.425899 kubelet[2637]: I0706 23:55:40.422891 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:55:40.425899 kubelet[2637]: I0706 23:55:40.425682 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:55:40.425899 kubelet[2637]: I0706 23:55:40.425686 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:55:40.425899 kubelet[2637]: I0706 23:55:40.425709 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:55:40.426075 kubelet[2637]: I0706 23:55:40.426038 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-kube-api-access-nbmpb" (OuterVolumeSpecName: "kube-api-access-nbmpb") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "kube-api-access-nbmpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:55:40.426988 kubelet[2637]: I0706 23:55:40.426959 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" (UID: "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:55:40.523010 kubelet[2637]: I0706 23:55:40.522883 2637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbmpb\" (UniqueName: \"kubernetes.io/projected/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-kube-api-access-nbmpb\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.523010 kubelet[2637]: I0706 23:55:40.522913 2637 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.523010 kubelet[2637]: I0706 23:55:40.522923 2637 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.523010 kubelet[2637]: I0706 23:55:40.522932 2637 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.523010 kubelet[2637]: I0706 23:55:40.522940 2637 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.523010 kubelet[2637]: I0706 23:55:40.522949 2637 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.523010 kubelet[2637]: I0706 23:55:40.522957 2637 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.523010 kubelet[2637]: I0706 23:55:40.522964 2637 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:55:40.913055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc-rootfs.mount: Deactivated successfully. Jul 6 23:55:40.913293 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-961dc0a7211204a1b6b7776baa144d0525a813099870a58fd0a23e4226c925dc-shm.mount: Deactivated successfully. Jul 6 23:55:40.913484 systemd[1]: var-lib-kubelet-pods-c4b439a6\x2d370f\x2d4dab\x2db1d3\x2ddda12b3cb8b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnfnjt.mount: Deactivated successfully. Jul 6 23:55:40.913678 systemd[1]: var-lib-kubelet-pods-a1c61ae6\x2da431\x2d4e8f\x2d9bf3\x2d2dc27f19e6d5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:55:40.913887 systemd[1]: var-lib-kubelet-pods-a1c61ae6\x2da431\x2d4e8f\x2d9bf3\x2d2dc27f19e6d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnbmpb.mount: Deactivated successfully. Jul 6 23:55:40.914101 systemd[1]: var-lib-kubelet-pods-a1c61ae6\x2da431\x2d4e8f\x2d9bf3\x2d2dc27f19e6d5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 6 23:55:40.934420 kubelet[2637]: I0706 23:55:40.934353 2637 scope.go:117] "RemoveContainer" containerID="92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492"
Jul 6 23:55:40.935718 containerd[1578]: time="2025-07-06T23:55:40.935682582Z" level=info msg="RemoveContainer for \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\""
Jul 6 23:55:40.941184 containerd[1578]: time="2025-07-06T23:55:40.941137259Z" level=info msg="RemoveContainer for \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\" returns successfully"
Jul 6 23:55:40.941456 kubelet[2637]: I0706 23:55:40.941427 2637 scope.go:117] "RemoveContainer" containerID="92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492"
Jul 6 23:55:40.941713 containerd[1578]: time="2025-07-06T23:55:40.941675639Z" level=error msg="ContainerStatus for \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\": not found"
Jul 6 23:55:40.955766 kubelet[2637]: E0706 23:55:40.955714 2637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\": not found" containerID="92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492"
Jul 6 23:55:40.955935 kubelet[2637]: I0706 23:55:40.955766 2637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492"} err="failed to get container status \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\": rpc error: code = NotFound desc = an error occurred when try to find container \"92f0dba5f18176c32d2f8db3bc5045e35927f6570edd0aec34e978e17d10d492\": not found"
Jul 6 23:55:40.955935 kubelet[2637]: I0706 23:55:40.955874 2637 scope.go:117] "RemoveContainer" containerID="77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad"
Jul 6 23:55:40.958033 containerd[1578]: time="2025-07-06T23:55:40.957111916Z" level=info msg="RemoveContainer for \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\""
Jul 6 23:55:40.963844 containerd[1578]: time="2025-07-06T23:55:40.962467514Z" level=info msg="RemoveContainer for \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\" returns successfully"
Jul 6 23:55:40.964036 kubelet[2637]: I0706 23:55:40.964002 2637 scope.go:117] "RemoveContainer" containerID="88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3"
Jul 6 23:55:40.966390 containerd[1578]: time="2025-07-06T23:55:40.966336017Z" level=info msg="RemoveContainer for \"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3\""
Jul 6 23:55:40.977582 containerd[1578]: time="2025-07-06T23:55:40.977474419Z" level=info msg="RemoveContainer for \"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3\" returns successfully"
Jul 6 23:55:40.978244 kubelet[2637]: I0706 23:55:40.978125 2637 scope.go:117] "RemoveContainer" containerID="487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b"
Jul 6 23:55:40.983206 containerd[1578]: time="2025-07-06T23:55:40.983056801Z" level=info msg="RemoveContainer for \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\""
Jul 6 23:55:40.987731 containerd[1578]: time="2025-07-06T23:55:40.987476158Z" level=info msg="RemoveContainer for \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\" returns successfully"
msg="RemoveContainer for \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\" returns successfully" Jul 6 23:55:40.987985 kubelet[2637]: I0706 23:55:40.987772 2637 scope.go:117] "RemoveContainer" containerID="966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec" Jul 6 23:55:40.989504 containerd[1578]: time="2025-07-06T23:55:40.989476264Z" level=info msg="RemoveContainer for \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\"" Jul 6 23:55:40.994288 containerd[1578]: time="2025-07-06T23:55:40.994235932Z" level=info msg="RemoveContainer for \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\" returns successfully" Jul 6 23:55:40.994796 kubelet[2637]: I0706 23:55:40.994682 2637 scope.go:117] "RemoveContainer" containerID="0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487" Jul 6 23:55:40.996148 containerd[1578]: time="2025-07-06T23:55:40.996123884Z" level=info msg="RemoveContainer for \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\"" Jul 6 23:55:40.999705 containerd[1578]: time="2025-07-06T23:55:40.999674398Z" level=info msg="RemoveContainer for \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\" returns successfully" Jul 6 23:55:40.999877 kubelet[2637]: I0706 23:55:40.999844 2637 scope.go:117] "RemoveContainer" containerID="77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad" Jul 6 23:55:41.000235 containerd[1578]: time="2025-07-06T23:55:41.000027103Z" level=error msg="ContainerStatus for \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\": not found" Jul 6 23:55:41.001671 kubelet[2637]: E0706 23:55:41.000154 2637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\": not found" containerID="77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad" Jul 6 23:55:41.001671 kubelet[2637]: I0706 23:55:41.000178 2637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad"} err="failed to get container status \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"77224d3208858227724427e4fc18567ae8b9c09e5fa63c8ae89c821c9c1d12ad\": not found" Jul 6 23:55:41.001671 kubelet[2637]: I0706 23:55:41.000200 2637 scope.go:117] "RemoveContainer" containerID="88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3" Jul 6 23:55:41.001671 kubelet[2637]: E0706 23:55:41.000464 2637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3\": not found" containerID="88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3" Jul 6 23:55:41.001671 kubelet[2637]: I0706 23:55:41.000479 2637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3"} err="failed to get container status \"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3\": rpc error: code = NotFound desc = 
Jul 6 23:55:41.001671 kubelet[2637]: I0706 23:55:41.000490 2637 scope.go:117] "RemoveContainer" containerID="487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b"
Jul 6 23:55:41.001873 kubelet[2637]: E0706 23:55:41.000714 2637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\": not found" containerID="487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b"
Jul 6 23:55:41.001873 kubelet[2637]: I0706 23:55:41.000729 2637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b"} err="failed to get container status \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\": rpc error: code = NotFound desc = an error occurred when try to find container \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\": not found"
Jul 6 23:55:41.001873 kubelet[2637]: I0706 23:55:41.000743 2637 scope.go:117] "RemoveContainer" containerID="966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec"
Jul 6 23:55:41.001873 kubelet[2637]: E0706 23:55:41.001205 2637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\": not found" containerID="966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec"
Jul 6 23:55:41.001873 kubelet[2637]: I0706 23:55:41.001245 2637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec"} err="failed to get container status \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\": rpc error: code = NotFound desc = an error occurred when try to find container \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\": not found"
Jul 6 23:55:41.001873 kubelet[2637]: I0706 23:55:41.001274 2637 scope.go:117] "RemoveContainer" containerID="0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487"
Jul 6 23:55:41.002032 containerd[1578]: time="2025-07-06T23:55:41.000362966Z" level=error msg="ContainerStatus for \"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88f11566515d14fd846c4e9d6debbbee415bf247d83f7df8c998aa06249d5be3\": not found"
Jul 6 23:55:41.002032 containerd[1578]: time="2025-07-06T23:55:41.000612252Z" level=error msg="ContainerStatus for \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"487c547c4a0c22a4cc37fb9b8610433de815de9c006962ed5e3b0150e8fe731b\": not found"
Jul 6 23:55:41.002032 containerd[1578]: time="2025-07-06T23:55:41.000999975Z" level=error msg="ContainerStatus for \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"966aa953b4a74108263c291c420b921f18db04a9749ddb7c05864bce3ad43dec\": not found"
Jul 6 23:55:41.002032 containerd[1578]: time="2025-07-06T23:55:41.001490803Z" level=error msg="ContainerStatus for \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\": not found"
msg="ContainerStatus for \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\": not found" Jul 6 23:55:41.002145 kubelet[2637]: E0706 23:55:41.001600 2637 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\": not found" containerID="0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487" Jul 6 23:55:41.002145 kubelet[2637]: I0706 23:55:41.001646 2637 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487"} err="failed to get container status \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\": rpc error: code = NotFound desc = an error occurred when try to find container \"0128e0ae1cdda28f610592f1940664bae466860860b5ccb0924d4bc281c22487\": not found" Jul 6 23:55:41.752790 sshd[4300]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:41.759120 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:49494.service - OpenSSH per-connection server daemon (10.0.0.1:49494). Jul 6 23:55:41.760156 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:58008.service: Deactivated successfully. Jul 6 23:55:41.764358 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:55:41.764692 systemd-logind[1550]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:55:41.765941 systemd-logind[1550]: Removed session 25. Jul 6 23:55:41.790069 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 49494 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:41.791767 sshd[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:41.795696 systemd-logind[1550]: New session 26 of user core. Jul 6 23:55:41.802062 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:55:42.558182 sshd[4469]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:42.570193 systemd[1]: Started sshd@26-10.0.0.81:22-10.0.0.1:49510.service - OpenSSH per-connection server daemon (10.0.0.1:49510). Jul 6 23:55:42.570731 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:49494.service: Deactivated successfully. Jul 6 23:55:42.582312 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:55:42.584142 systemd-logind[1550]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:55:42.585596 systemd-logind[1550]: Removed session 26. 
Jul 6 23:55:42.589838 kubelet[2637]: E0706 23:55:42.587866 2637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" containerName="mount-cgroup"
Jul 6 23:55:42.589838 kubelet[2637]: E0706 23:55:42.587896 2637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" containerName="apply-sysctl-overwrites"
Jul 6 23:55:42.589838 kubelet[2637]: E0706 23:55:42.587905 2637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" containerName="clean-cilium-state"
Jul 6 23:55:42.589838 kubelet[2637]: E0706 23:55:42.587913 2637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" containerName="cilium-agent"
Jul 6 23:55:42.589838 kubelet[2637]: E0706 23:55:42.587923 2637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" containerName="mount-bpf-fs"
Jul 6 23:55:42.589838 kubelet[2637]: E0706 23:55:42.587930 2637 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4b439a6-370f-4dab-b1d3-dda12b3cb8b6" containerName="cilium-operator"
Jul 6 23:55:42.589838 kubelet[2637]: I0706 23:55:42.587957 2637 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" containerName="cilium-agent"
Jul 6 23:55:42.589838 kubelet[2637]: I0706 23:55:42.587965 2637 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b439a6-370f-4dab-b1d3-dda12b3cb8b6" containerName="cilium-operator"
Jul 6 23:55:42.609624 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 49510 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:55:42.611445 sshd[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:55:42.615992 systemd-logind[1550]: New session 27 of user core.
Jul 6 23:55:42.626136 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 6 23:55:42.680861 sshd[4483]: pam_unix(sshd:session): session closed for user core
Jul 6 23:55:42.688403 systemd[1]: Started sshd@27-10.0.0.81:22-10.0.0.1:49526.service - OpenSSH per-connection server daemon (10.0.0.1:49526).
Jul 6 23:55:42.688668 kubelet[2637]: I0706 23:55:42.688484 2637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5" path="/var/lib/kubelet/pods/a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5/volumes"
Jul 6 23:55:42.689644 kubelet[2637]: I0706 23:55:42.689522 2637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b439a6-370f-4dab-b1d3-dda12b3cb8b6" path="/var/lib/kubelet/pods/c4b439a6-370f-4dab-b1d3-dda12b3cb8b6/volumes"
Jul 6 23:55:42.690258 systemd[1]: sshd@26-10.0.0.81:22-10.0.0.1:49510.service: Deactivated successfully.
Jul 6 23:55:42.693303 systemd[1]: session-27.scope: Deactivated successfully.
Jul 6 23:55:42.694170 systemd-logind[1550]: Session 27 logged out. Waiting for processes to exit.
Jul 6 23:55:42.695789 systemd-logind[1550]: Removed session 27.
Jul 6 23:55:42.718151 sshd[4492]: Accepted publickey for core from 10.0.0.1 port 49526 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4
Jul 6 23:55:42.719986 sshd[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:55:42.725379 systemd-logind[1550]: New session 28 of user core.
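[Annotation] The cpu_manager/memory_manager "RemoveStaleState" lines above are kubelet dropping per-container resource-manager state for pods that no longer exist (the cilium agent and operator removed earlier). Conceptually it is a set difference over pod UIDs; a toy sketch, not kubelet's actual manager code:

    package main

    import "fmt"

    // removeStaleState deletes per-container state whose pod UID is no
    // longer in the active set, mirroring what the managers log above.
    func removeStaleState(state map[string][]string, activePods map[string]bool) {
        for podUID, containers := range state {
            if !activePods[podUID] {
                for _, c := range containers {
                    fmt.Printf("RemoveStaleState: removing container pod=%s container=%s\n", podUID, c)
                }
                delete(state, podUID)
            }
        }
    }

    func main() {
        state := map[string][]string{
            "a1c61ae6-a431-4e8f-9bf3-2dc27f19e6d5": {"mount-cgroup", "cilium-agent"},
            "c4b439a6-370f-4dab-b1d3-dda12b3cb8b6": {"cilium-operator"},
        }
        removeStaleState(state, map[string]bool{}) // neither pod is active any more
    }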
Jul 6 23:55:42.736174 kubelet[2637]: I0706 23:55:42.736122 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-bpf-maps\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg"
Jul 6 23:55:42.736174 kubelet[2637]: I0706 23:55:42.736173 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-hostproc\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg"
Jul 6 23:55:42.736174 kubelet[2637]: I0706 23:55:42.736187 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-etc-cni-netd\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg"
Jul 6 23:55:42.736317 kubelet[2637]: I0706 23:55:42.736203 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-cilium-config-path\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg"
Jul 6 23:55:42.736317 kubelet[2637]: I0706 23:55:42.736219 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-host-proc-sys-net\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg"
Jul 6 23:55:42.736317 kubelet[2637]: I0706 23:55:42.736233 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-host-proc-sys-kernel\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg"
Jul 6 23:55:42.736317 kubelet[2637]: I0706 23:55:42.736248 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-cilium-run\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg"
Jul 6 23:55:42.736317 kubelet[2637]: I0706 23:55:42.736284 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-xtables-lock\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg"
Jul 6 23:55:42.736435 kubelet[2637]: I0706 23:55:42.736300 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-clustermesh-secrets\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg"
Jul 6 23:55:42.736461 kubelet[2637]: I0706 23:55:42.736413 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-cilium-cgroup\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg"
\"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-cilium-cgroup\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg" Jul 6 23:55:42.736484 kubelet[2637]: I0706 23:55:42.736462 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwczf\" (UniqueName: \"kubernetes.io/projected/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-kube-api-access-vwczf\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg" Jul 6 23:55:42.736514 kubelet[2637]: I0706 23:55:42.736490 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-cilium-ipsec-secrets\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg" Jul 6 23:55:42.736544 kubelet[2637]: I0706 23:55:42.736517 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-hubble-tls\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg" Jul 6 23:55:42.736572 kubelet[2637]: I0706 23:55:42.736544 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-cni-path\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg" Jul 6 23:55:42.736572 kubelet[2637]: I0706 23:55:42.736566 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2199bdb5-56cc-4b24-82e8-9bdc68d5125b-lib-modules\") pod \"cilium-5g9mg\" (UID: \"2199bdb5-56cc-4b24-82e8-9bdc68d5125b\") " pod="kube-system/cilium-5g9mg" Jul 6 23:55:42.739263 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 6 23:55:42.896275 kubelet[2637]: E0706 23:55:42.896126 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:42.897210 containerd[1578]: time="2025-07-06T23:55:42.896779581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5g9mg,Uid:2199bdb5-56cc-4b24-82e8-9bdc68d5125b,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:43.242219 containerd[1578]: time="2025-07-06T23:55:43.241980847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:43.242219 containerd[1578]: time="2025-07-06T23:55:43.242044579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:43.242219 containerd[1578]: time="2025-07-06T23:55:43.242076791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:43.242949 containerd[1578]: time="2025-07-06T23:55:43.242854416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Jul 6 23:55:43.280359 containerd[1578]: time="2025-07-06T23:55:43.280306831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5g9mg,Uid:2199bdb5-56cc-4b24-82e8-9bdc68d5125b,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\""
Jul 6 23:55:43.280961 kubelet[2637]: E0706 23:55:43.280941 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:43.283462 containerd[1578]: time="2025-07-06T23:55:43.283376076Z" level=info msg="CreateContainer within sandbox \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:55:43.738734 containerd[1578]: time="2025-07-06T23:55:43.738650729Z" level=info msg="CreateContainer within sandbox \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65b1dd8a03673ab84f093d58d73d7b10b4395a6c0d41bef318289b3e86c9a95c\""
Jul 6 23:55:43.739472 containerd[1578]: time="2025-07-06T23:55:43.739416192Z" level=info msg="StartContainer for \"65b1dd8a03673ab84f093d58d73d7b10b4395a6c0d41bef318289b3e86c9a95c\""
Jul 6 23:55:43.758436 kubelet[2637]: E0706 23:55:43.758366 2637 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:55:43.877264 containerd[1578]: time="2025-07-06T23:55:43.877213032Z" level=info msg="StartContainer for \"65b1dd8a03673ab84f093d58d73d7b10b4395a6c0d41bef318289b3e86c9a95c\" returns successfully"
Jul 6 23:55:43.895907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65b1dd8a03673ab84f093d58d73d7b10b4395a6c0d41bef318289b3e86c9a95c-rootfs.mount: Deactivated successfully.
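[Annotation] The RunPodSandbox -> CreateContainer -> StartContainer sequence above is the CRI flow kubelet drives over containerd's socket; the &PodSandboxMetadata{...} and &ContainerMetadata{...} structs printed in the log are the request metadata. A compressed sketch using the v1 CRI client (socket path and image are illustrative, and real requests carry much more configuration):

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Matches the &PodSandboxMetadata{...} printed in the log.
        sbCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "cilium-5g9mg",
                Uid:       "2199bdb5-56cc-4b24-82e8-9bdc68d5125b",
                Namespace: "kube-system",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbCfg})
        if err != nil {
            log.Fatal(err)
        }
        // CreateContainer targets the returned sandbox ID (eb7bdcd3... above).
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
                Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium"}, // illustrative
            },
            SandboxConfig: sbCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }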
Jul 6 23:55:43.944985 kubelet[2637]: E0706 23:55:43.944944 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:44.272411 containerd[1578]: time="2025-07-06T23:55:44.272322009Z" level=info msg="shim disconnected" id=65b1dd8a03673ab84f093d58d73d7b10b4395a6c0d41bef318289b3e86c9a95c namespace=k8s.io
Jul 6 23:55:44.272411 containerd[1578]: time="2025-07-06T23:55:44.272384869Z" level=warning msg="cleaning up after shim disconnected" id=65b1dd8a03673ab84f093d58d73d7b10b4395a6c0d41bef318289b3e86c9a95c namespace=k8s.io
Jul 6 23:55:44.272411 containerd[1578]: time="2025-07-06T23:55:44.272394077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:55:44.947721 kubelet[2637]: E0706 23:55:44.947671 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:44.950010 containerd[1578]: time="2025-07-06T23:55:44.949954048Z" level=info msg="CreateContainer within sandbox \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:55:44.969666 containerd[1578]: time="2025-07-06T23:55:44.969620835Z" level=info msg="CreateContainer within sandbox \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"75108e1f81f617771237266cd1554c3b09a46bbcea2dcc7d8bf0a5786da98544\""
Jul 6 23:55:44.970186 containerd[1578]: time="2025-07-06T23:55:44.970159523Z" level=info msg="StartContainer for \"75108e1f81f617771237266cd1554c3b09a46bbcea2dcc7d8bf0a5786da98544\""
Jul 6 23:55:45.129893 containerd[1578]: time="2025-07-06T23:55:45.129844454Z" level=info msg="StartContainer for \"75108e1f81f617771237266cd1554c3b09a46bbcea2dcc7d8bf0a5786da98544\" returns successfully"
Jul 6 23:55:45.147375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75108e1f81f617771237266cd1554c3b09a46bbcea2dcc7d8bf0a5786da98544-rootfs.mount: Deactivated successfully.
Jul 6 23:55:45.414538 containerd[1578]: time="2025-07-06T23:55:45.414458682Z" level=info msg="shim disconnected" id=75108e1f81f617771237266cd1554c3b09a46bbcea2dcc7d8bf0a5786da98544 namespace=k8s.io
Jul 6 23:55:45.414538 containerd[1578]: time="2025-07-06T23:55:45.414530069Z" level=warning msg="cleaning up after shim disconnected" id=75108e1f81f617771237266cd1554c3b09a46bbcea2dcc7d8bf0a5786da98544 namespace=k8s.io
Jul 6 23:55:45.414538 containerd[1578]: time="2025-07-06T23:55:45.414542121Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:55:45.951268 kubelet[2637]: E0706 23:55:45.950886 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:45.952512 containerd[1578]: time="2025-07-06T23:55:45.952473126Z" level=info msg="CreateContainer within sandbox \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:55:45.976019 containerd[1578]: time="2025-07-06T23:55:45.975974665Z" level=info msg="CreateContainer within sandbox \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c3c697fba2eb8a26f3fd70298711386b7a1be70d7b917c929ad131affa46d46a\""
Jul 6 23:55:45.976599 containerd[1578]: time="2025-07-06T23:55:45.976568629Z" level=info msg="StartContainer for \"c3c697fba2eb8a26f3fd70298711386b7a1be70d7b917c929ad131affa46d46a\""
Jul 6 23:55:46.202925 containerd[1578]: time="2025-07-06T23:55:46.202727908Z" level=info msg="StartContainer for \"c3c697fba2eb8a26f3fd70298711386b7a1be70d7b917c929ad131affa46d46a\" returns successfully"
Jul 6 23:55:46.239303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3c697fba2eb8a26f3fd70298711386b7a1be70d7b917c929ad131affa46d46a-rootfs.mount: Deactivated successfully.
Jul 6 23:55:46.474436 containerd[1578]: time="2025-07-06T23:55:46.474258590Z" level=info msg="shim disconnected" id=c3c697fba2eb8a26f3fd70298711386b7a1be70d7b917c929ad131affa46d46a namespace=k8s.io
Jul 6 23:55:46.474436 containerd[1578]: time="2025-07-06T23:55:46.474321059Z" level=warning msg="cleaning up after shim disconnected" id=c3c697fba2eb8a26f3fd70298711386b7a1be70d7b917c929ad131affa46d46a namespace=k8s.io
Jul 6 23:55:46.474436 containerd[1578]: time="2025-07-06T23:55:46.474331209Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:55:46.955365 kubelet[2637]: E0706 23:55:46.955295 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:46.957305 containerd[1578]: time="2025-07-06T23:55:46.957260396Z" level=info msg="CreateContainer within sandbox \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:55:47.643000 containerd[1578]: time="2025-07-06T23:55:47.642928758Z" level=info msg="CreateContainer within sandbox \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"444def81da4fbe83427c8d4d283669ef47498c5fd95c882471e90f7ae6f5f53f\""
Jul 6 23:55:47.643661 containerd[1578]: time="2025-07-06T23:55:47.643608295Z" level=info msg="StartContainer for \"444def81da4fbe83427c8d4d283669ef47498c5fd95c882471e90f7ae6f5f53f\""
Jul 6 23:55:47.686034 kubelet[2637]: E0706 23:55:47.686001 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:47.827230 containerd[1578]: time="2025-07-06T23:55:47.827030361Z" level=info msg="StartContainer for \"444def81da4fbe83427c8d4d283669ef47498c5fd95c882471e90f7ae6f5f53f\" returns successfully"
Jul 6 23:55:47.960029 kubelet[2637]: E0706 23:55:47.959872 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:47.974296 containerd[1578]: time="2025-07-06T23:55:47.974204037Z" level=info msg="shim disconnected" id=444def81da4fbe83427c8d4d283669ef47498c5fd95c882471e90f7ae6f5f53f namespace=k8s.io
Jul 6 23:55:47.974296 containerd[1578]: time="2025-07-06T23:55:47.974260545Z" level=warning msg="cleaning up after shim disconnected" id=444def81da4fbe83427c8d4d283669ef47498c5fd95c882471e90f7ae6f5f53f namespace=k8s.io
Jul 6 23:55:47.974296 containerd[1578]: time="2025-07-06T23:55:47.974268870Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:55:48.322603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-444def81da4fbe83427c8d4d283669ef47498c5fd95c882471e90f7ae6f5f53f-rootfs.mount: Deactivated successfully.
Jul 6 23:55:48.759159 kubelet[2637]: E0706 23:55:48.759021 2637 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:55:48.963753 kubelet[2637]: E0706 23:55:48.963707 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:48.966902 containerd[1578]: time="2025-07-06T23:55:48.966462784Z" level=info msg="CreateContainer within sandbox \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:55:49.461353 containerd[1578]: time="2025-07-06T23:55:49.461289457Z" level=info msg="CreateContainer within sandbox \"eb7bdcd38a2678a0ca8837423487b12358e3745ff31add85bbeba05aa188da58\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"573b312dd97e7e97ab043ed002fda471cc1e16d0458b2bca0ad3674b5bf6e996\""
Jul 6 23:55:49.462026 containerd[1578]: time="2025-07-06T23:55:49.461954574Z" level=info msg="StartContainer for \"573b312dd97e7e97ab043ed002fda471cc1e16d0458b2bca0ad3674b5bf6e996\""
Jul 6 23:55:49.523208 containerd[1578]: time="2025-07-06T23:55:49.523163270Z" level=info msg="StartContainer for \"573b312dd97e7e97ab043ed002fda471cc1e16d0458b2bca0ad3674b5bf6e996\" returns successfully"
Jul 6 23:55:49.944850 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 6 23:55:49.971184 kubelet[2637]: E0706 23:55:49.970895 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:50.024686 kubelet[2637]: I0706 23:55:50.024616 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5g9mg" podStartSLOduration=8.024580383 podStartE2EDuration="8.024580383s" podCreationTimestamp="2025-07-06 23:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:55:50.023381439 +0000 UTC m=+91.454746469" watchObservedRunningTime="2025-07-06 23:55:50.024580383 +0000 UTC m=+91.455945413"
Jul 6 23:55:50.728451 kubelet[2637]: I0706 23:55:50.728391 2637 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:55:50Z","lastTransitionTime":"2025-07-06T23:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 6 23:55:50.973753 kubelet[2637]: E0706 23:55:50.973698 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:53.239595 systemd-networkd[1238]: lxc_health: Link UP
Jul 6 23:55:53.252793 systemd-networkd[1238]: lxc_health: Gained carrier
Jul 6 23:55:54.577107 systemd-networkd[1238]: lxc_health: Gained IPv6LL
Jul 6 23:55:54.897837 kubelet[2637]: E0706 23:55:54.897649 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:55:54.982001 kubelet[2637]: E0706 23:55:54.981963 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
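[Annotation] The "cni plugin not initialized" / "Container runtime network not ready" errors in this section trace back to the fs change event logged earlier: the runtime watches /etc/cni/net.d and reloads when files appear or disappear, and 05-cilium.conf had been removed with the old pod; the node recovers once the new agent rewrites the config and lxc_health comes up. A small sketch of such a directory watch using fsnotify (illustrative; containerd's actual reload logic differs, and reloadCNIConfig is hypothetical):

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-w.Events:
                // A REMOVE of the only .conf file leaves no network config,
                // which is when "cni plugin not initialized" gets reported.
                if ev.Op&(fsnotify.Create|fsnotify.Remove|fsnotify.Write) != 0 {
                    log.Printf("cni config change: %s %s; reloading", ev.Op, ev.Name)
                    // reloadCNIConfig() would re-scan the directory here.
                }
            case err := <-w.Errors:
                log.Println("watch error:", err)
            }
        }
    }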
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:55.988064 kubelet[2637]: E0706 23:55:55.987798 2637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:00.264643 sshd[4492]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:00.269200 systemd[1]: sshd@27-10.0.0.81:22-10.0.0.1:49526.service: Deactivated successfully. Jul 6 23:56:00.271793 systemd-logind[1550]: Session 28 logged out. Waiting for processes to exit. Jul 6 23:56:00.271932 systemd[1]: session-28.scope: Deactivated successfully. Jul 6 23:56:00.273160 systemd-logind[1550]: Removed session 28.