Jul 7 06:03:35.862984 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:03:35.863014 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:03:35.863023 kernel: BIOS-provided physical RAM map:
Jul 7 06:03:35.863030 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 06:03:35.863036 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 06:03:35.863043 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 06:03:35.863050 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 7 06:03:35.863073 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 7 06:03:35.863083 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 7 06:03:35.863089 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 7 06:03:35.863096 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:03:35.863102 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 06:03:35.863109 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 06:03:35.863116 kernel: NX (Execute Disable) protection: active
Jul 7 06:03:35.863127 kernel: APIC: Static calls initialized
Jul 7 06:03:35.863134 kernel: SMBIOS 2.8 present.
Jul 7 06:03:35.863144 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 7 06:03:35.863151 kernel: DMI: Memory slots populated: 1/1
Jul 7 06:03:35.863158 kernel: Hypervisor detected: KVM
Jul 7 06:03:35.863165 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 06:03:35.863172 kernel: kvm-clock: using sched offset of 4546169619 cycles
Jul 7 06:03:35.863180 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 06:03:35.863188 kernel: tsc: Detected 2794.746 MHz processor
Jul 7 06:03:35.863198 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:03:35.863206 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:03:35.863213 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 7 06:03:35.863220 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 06:03:35.863228 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:03:35.863235 kernel: Using GB pages for direct mapping
Jul 7 06:03:35.863242 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:03:35.863250 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 7 06:03:35.863257 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:35.863267 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:35.863274 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:35.863281 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 7 06:03:35.863288 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:35.863295 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:35.863303 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:35.863310 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:35.863317 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 7 06:03:35.863330 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 7 06:03:35.863337 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 7 06:03:35.863345 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 7 06:03:35.863352 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 7 06:03:35.863360 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 7 06:03:35.863367 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 7 06:03:35.863376 kernel: No NUMA configuration found
Jul 7 06:03:35.863384 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 7 06:03:35.863391 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 7 06:03:35.863399 kernel: Zone ranges:
Jul 7 06:03:35.863406 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:03:35.863414 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 7 06:03:35.863421 kernel: Normal empty
Jul 7 06:03:35.863429 kernel: Device empty
Jul 7 06:03:35.863436 kernel: Movable zone start for each node
Jul 7 06:03:35.863444 kernel: Early memory node ranges
Jul 7 06:03:35.863453 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 06:03:35.863461 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 7 06:03:35.863468 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 7 06:03:35.863475 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:03:35.863483 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 06:03:35.863490 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 7 06:03:35.863497 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 06:03:35.863507 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 06:03:35.863515 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:03:35.863525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 06:03:35.863532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 06:03:35.863542 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:03:35.863549 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 06:03:35.863557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 06:03:35.863564 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:03:35.863571 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 06:03:35.863579 kernel: TSC deadline timer available
Jul 7 06:03:35.863586 kernel: CPU topo: Max. logical packages: 1
Jul 7 06:03:35.863603 kernel: CPU topo: Max. logical dies: 1
Jul 7 06:03:35.863611 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:03:35.863618 kernel: CPU topo: Max. threads per core: 1
Jul 7 06:03:35.863625 kernel: CPU topo: Num. cores per package: 4
Jul 7 06:03:35.863633 kernel: CPU topo: Num. threads per package: 4
Jul 7 06:03:35.863641 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 7 06:03:35.863648 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 06:03:35.863656 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 06:03:35.863663 kernel: kvm-guest: setup PV sched yield
Jul 7 06:03:35.863671 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 7 06:03:35.863680 kernel: Booting paravirtualized kernel on KVM
Jul 7 06:03:35.863688 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:03:35.863695 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 7 06:03:35.863703 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 7 06:03:35.863710 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 7 06:03:35.863718 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 7 06:03:35.863725 kernel: kvm-guest: PV spinlocks enabled
Jul 7 06:03:35.863732 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 06:03:35.863741 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:03:35.863751 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:03:35.863759 kernel: random: crng init done
Jul 7 06:03:35.863766 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:03:35.863774 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:03:35.863782 kernel: Fallback order for Node 0: 0
Jul 7 06:03:35.863789 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 7 06:03:35.863796 kernel: Policy zone: DMA32
Jul 7 06:03:35.863804 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:03:35.863813 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 06:03:35.863821 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:03:35.863828 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:03:35.863836 kernel: Dynamic Preempt: voluntary
Jul 7 06:03:35.863843 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:03:35.863851 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:03:35.863859 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 06:03:35.863867 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:03:35.863876 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:03:35.863886 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:03:35.863894 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:03:35.863902 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 06:03:35.863909 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:03:35.863917 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:03:35.863924 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:03:35.863932 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 7 06:03:35.863940 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:03:35.863956 kernel: Console: colour VGA+ 80x25
Jul 7 06:03:35.863964 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:03:35.863971 kernel: ACPI: Core revision 20240827
Jul 7 06:03:35.863979 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 06:03:35.863989 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:03:35.863997 kernel: x2apic enabled
Jul 7 06:03:35.864007 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:03:35.864015 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 06:03:35.864023 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 06:03:35.864033 kernel: kvm-guest: setup PV IPIs
Jul 7 06:03:35.864041 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 06:03:35.864049 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 7 06:03:35.864057 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 7 06:03:35.864117 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 06:03:35.864125 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 06:03:35.864133 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 06:03:35.864141 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:03:35.864151 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:03:35.864159 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:03:35.864167 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 7 06:03:35.864175 kernel: RETBleed: Mitigation: untrained return thunk
Jul 7 06:03:35.864183 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 06:03:35.864191 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 06:03:35.864199 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 06:03:35.864207 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 06:03:35.864215 kernel: x86/bugs: return thunk changed
Jul 7 06:03:35.864225 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 06:03:35.864232 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:03:35.864240 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:03:35.864248 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:03:35.864256 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:03:35.864264 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 06:03:35.864271 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:03:35.864279 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:03:35.864287 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:03:35.864297 kernel: landlock: Up and running.
Jul 7 06:03:35.864304 kernel: SELinux: Initializing.
Jul 7 06:03:35.864312 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:03:35.864323 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:03:35.864331 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 7 06:03:35.864339 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 06:03:35.864347 kernel: ... version: 0
Jul 7 06:03:35.864354 kernel: ... bit width: 48
Jul 7 06:03:35.864362 kernel: ... generic registers: 6
Jul 7 06:03:35.864372 kernel: ... value mask: 0000ffffffffffff
Jul 7 06:03:35.864380 kernel: ... max period: 00007fffffffffff
Jul 7 06:03:35.864387 kernel: ... fixed-purpose events: 0
Jul 7 06:03:35.864395 kernel: ... event mask: 000000000000003f
Jul 7 06:03:35.864403 kernel: signal: max sigframe size: 1776
Jul 7 06:03:35.864411 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:03:35.864418 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:03:35.864426 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:03:35.864434 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:03:35.864444 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:03:35.864452 kernel: .... node #0, CPUs: #1 #2 #3
Jul 7 06:03:35.864460 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 06:03:35.864467 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 7 06:03:35.864476 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 136904K reserved, 0K cma-reserved)
Jul 7 06:03:35.864483 kernel: devtmpfs: initialized
Jul 7 06:03:35.864491 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:03:35.864499 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:03:35.864507 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 06:03:35.864517 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:03:35.864525 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:03:35.864532 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:03:35.864540 kernel: audit: type=2000 audit(1751868213.347:1): state=initialized audit_enabled=0 res=1
Jul 7 06:03:35.864548 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:03:35.864556 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:03:35.864563 kernel: cpuidle: using governor menu
Jul 7 06:03:35.864571 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:03:35.864579 kernel: dca service started, version 1.12.1
Jul 7 06:03:35.864589 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 7 06:03:35.864605 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 7 06:03:35.864613 kernel: PCI: Using configuration type 1 for base access
Jul 7 06:03:35.864620 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:03:35.864628 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:03:35.864636 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:03:35.864644 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:03:35.864652 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:03:35.864660 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:03:35.864670 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:03:35.864678 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:03:35.864685 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:03:35.864693 kernel: ACPI: Interpreter enabled
Jul 7 06:03:35.864701 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 06:03:35.864708 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:03:35.864716 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:03:35.864724 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 06:03:35.864732 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 06:03:35.864742 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:03:35.864931 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:03:35.865080 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 06:03:35.865208 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 06:03:35.865219 kernel: PCI host bridge to bus 0000:00
Jul 7 06:03:35.865361 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 06:03:35.865505 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 06:03:35.865634 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 06:03:35.865745 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 7 06:03:35.865855 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 06:03:35.865974 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 7 06:03:35.866102 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:03:35.866256 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 06:03:35.866479 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 06:03:35.866613 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 7 06:03:35.866753 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 7 06:03:35.866898 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 7 06:03:35.867224 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 06:03:35.867386 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 7 06:03:35.867512 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 7 06:03:35.867652 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 7 06:03:35.867776 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 7 06:03:35.867918 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 06:03:35.868042 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 7 06:03:35.868247 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 7 06:03:35.868374 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 7 06:03:35.868514 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 06:03:35.868655 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 7 06:03:35.868777 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 7 06:03:35.868898 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 7 06:03:35.869020 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 7 06:03:35.869178 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 06:03:35.869301 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 06:03:35.869442 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 06:03:35.869564 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 7 06:03:35.869695 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 7 06:03:35.869834 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 06:03:35.869957 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 7 06:03:35.869968 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 06:03:35.869976 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 06:03:35.869988 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 06:03:35.869997 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 06:03:35.870005 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 06:03:35.870013 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 06:03:35.870021 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 06:03:35.870029 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 06:03:35.870037 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 06:03:35.870045 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 06:03:35.870052 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 06:03:35.870089 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 06:03:35.870097 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 06:03:35.870105 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 06:03:35.870113 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 06:03:35.870121 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 06:03:35.870129 kernel: iommu: Default domain type: Translated
Jul 7 06:03:35.870137 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:03:35.870145 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:03:35.870153 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 06:03:35.870163 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 06:03:35.870171 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 7 06:03:35.870295 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 06:03:35.870416 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 06:03:35.870536 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 06:03:35.870547 kernel: vgaarb: loaded
Jul 7 06:03:35.870555 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 06:03:35.870563 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 06:03:35.870574 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 06:03:35.870583 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:03:35.870598 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:03:35.870607 kernel: pnp: PnP ACPI init
Jul 7 06:03:35.870751 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 7 06:03:35.870763 kernel: pnp: PnP ACPI: found 6 devices
Jul 7 06:03:35.870772 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:03:35.870779 kernel: NET: Registered PF_INET protocol family
Jul 7 06:03:35.870787 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:03:35.870799 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:03:35.870807 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:03:35.870815 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:03:35.870823 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:03:35.870831 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:03:35.870839 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:03:35.870847 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:03:35.870855 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:03:35.870865 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:03:35.870979 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 06:03:35.871108 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 06:03:35.871220 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 06:03:35.871331 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 7 06:03:35.871444 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 7 06:03:35.871559 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 7 06:03:35.871570 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:03:35.871579 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 7 06:03:35.871600 kernel: Initialise system trusted keyrings
Jul 7 06:03:35.871608 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:03:35.871616 kernel: Key type asymmetric registered
Jul 7 06:03:35.871624 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:03:35.871632 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:03:35.871640 kernel: io scheduler mq-deadline registered
Jul 7 06:03:35.871648 kernel: io scheduler kyber registered
Jul 7 06:03:35.871656 kernel: io scheduler bfq registered
Jul 7 06:03:35.871664 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:03:35.871675 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 06:03:35.871683 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 06:03:35.871691 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 7 06:03:35.871699 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:03:35.871707 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:03:35.871715 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 06:03:35.871723 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 06:03:35.871731 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 06:03:35.871960 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 06:03:35.871976 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 06:03:35.872107 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 06:03:35.872225 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T06:03:35 UTC (1751868215)
Jul 7 06:03:35.872340 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 7 06:03:35.872351 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 06:03:35.872359 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:03:35.872367 kernel: Segment Routing with IPv6
Jul 7 06:03:35.872374 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:03:35.872386 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:03:35.872394 kernel: Key type dns_resolver registered
Jul 7 06:03:35.872402 kernel: IPI shorthand broadcast: enabled
Jul 7 06:03:35.872410 kernel: sched_clock: Marking stable (3107003786, 111917551)->(3249169173, -30247836)
Jul 7 06:03:35.872418 kernel: registered taskstats version 1
Jul 7 06:03:35.872425 kernel: Loading compiled-in X.509 certificates
Jul 7 06:03:35.872434 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:03:35.872441 kernel: Demotion targets for Node 0: null
Jul 7 06:03:35.872449 kernel: Key type .fscrypt registered
Jul 7 06:03:35.872459 kernel: Key type fscrypt-provisioning registered
Jul 7 06:03:35.872467 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:03:35.872475 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:03:35.872483 kernel: ima: No architecture policies found
Jul 7 06:03:35.872491 kernel: clk: Disabling unused clocks
Jul 7 06:03:35.872498 kernel: Warning: unable to open an initial console.
Jul 7 06:03:35.872506 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:03:35.872514 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:03:35.872525 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:03:35.872533 kernel: Run /init as init process
Jul 7 06:03:35.872541 kernel: with arguments:
Jul 7 06:03:35.872548 kernel: /init
Jul 7 06:03:35.872556 kernel: with environment:
Jul 7 06:03:35.872564 kernel: HOME=/
Jul 7 06:03:35.872572 kernel: TERM=linux
Jul 7 06:03:35.872579 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:03:35.872588 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:03:35.872610 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:03:35.872632 systemd[1]: Detected virtualization kvm.
Jul 7 06:03:35.872641 systemd[1]: Detected architecture x86-64.
Jul 7 06:03:35.872649 systemd[1]: Running in initrd.
Jul 7 06:03:35.872657 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:03:35.872668 systemd[1]: Hostname set to .
Jul 7 06:03:35.872677 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:03:35.872686 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:03:35.872695 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:03:35.872704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:03:35.872713 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:03:35.872722 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:03:35.872731 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:03:35.872742 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:03:35.872752 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:03:35.872761 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:03:35.872770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:03:35.872779 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:03:35.872788 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:03:35.872796 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:03:35.872807 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:03:35.872816 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:03:35.872824 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:03:35.872833 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:03:35.872842 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:03:35.872850 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:03:35.872859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:03:35.872868 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:03:35.872877 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:03:35.872887 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:03:35.872896 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:03:35.872905 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:03:35.872914 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:03:35.872923 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:03:35.872936 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:03:35.872945 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:03:35.872954 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:03:35.872963 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:03:35.872971 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:03:35.872980 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:03:35.872992 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:03:35.873000 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:03:35.873029 systemd-journald[220]: Collecting audit messages is disabled.
Jul 7 06:03:35.873053 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:03:35.873079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:03:35.873089 systemd-journald[220]: Journal started
Jul 7 06:03:35.873108 systemd-journald[220]: Runtime Journal (/run/log/journal/29ceff16b72a4dc89ad97272ba5efc38) is 6M, max 48.6M, 42.5M free.
Jul 7 06:03:35.862964 systemd-modules-load[223]: Inserted module 'overlay'
Jul 7 06:03:35.904898 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:03:35.904914 kernel: Bridge firewalling registered
Jul 7 06:03:35.904927 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:03:35.894053 systemd-modules-load[223]: Inserted module 'br_netfilter'
Jul 7 06:03:35.905154 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:03:35.905729 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:03:35.909541 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:03:35.910633 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:03:35.913167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:03:35.931144 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 06:03:35.932792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:03:35.937173 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:03:35.940133 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:03:35.942760 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:03:35.946730 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:03:35.950361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:03:35.973917 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:03:35.998768 systemd-resolved[261]: Positive Trust Anchors:
Jul 7 06:03:35.998797 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:03:35.998837 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:03:36.002484 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jul 7 06:03:36.004170 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:03:36.009181 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:03:36.077102 kernel: SCSI subsystem initialized
Jul 7 06:03:36.086092 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:03:36.097092 kernel: iscsi: registered transport (tcp)
Jul 7 06:03:36.229123 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:03:36.229218 kernel: QLogic iSCSI HBA Driver
Jul 7 06:03:36.249911 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:03:36.279167 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:03:36.283495 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:03:36.339237 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:03:36.341139 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:03:36.401097 kernel: raid6: avx2x4 gen() 27665 MB/s
Jul 7 06:03:36.418099 kernel: raid6: avx2x2 gen() 29729 MB/s
Jul 7 06:03:36.435181 kernel: raid6: avx2x1 gen() 22967 MB/s
Jul 7 06:03:36.435208 kernel: raid6: using algorithm avx2x2 gen() 29729 MB/s
Jul 7 06:03:36.453167 kernel: raid6: .... xor() 16798 MB/s, rmw enabled
Jul 7 06:03:36.453225 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 06:03:36.485103 kernel: xor: automatically using best checksumming function avx
Jul 7 06:03:36.658107 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:03:36.666150 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:03:36.668403 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:03:36.702403 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jul 7 06:03:36.708182 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:03:36.713585 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:03:36.749166 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Jul 7 06:03:36.782076 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:03:36.784081 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:03:36.871423 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:03:36.875099 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:03:36.908132 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 7 06:03:36.913458 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 06:03:36.922137 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:03:36.922200 kernel: GPT:9289727 != 19775487
Jul 7 06:03:36.922216 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:03:36.922230 kernel: GPT:9289727 != 19775487
Jul 7 06:03:36.922242 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:03:36.922272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:03:36.925094 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 06:03:36.941214 kernel: AES CTR mode by8 optimization enabled
Jul 7 06:03:36.974304 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 06:03:37.039101 kernel: libata version 3.00 loaded.
Jul 7 06:03:37.047120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:03:37.047251 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:03:37.052463 kernel: ahci 0000:00:1f.2: version 3.0
Jul 7 06:03:37.051294 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:03:37.055088 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 7 06:03:37.057336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:03:37.061369 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:03:37.065544 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 7 06:03:37.065746 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 7 06:03:37.065890 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 7 06:03:37.070085 kernel: scsi host0: ahci
Jul 7 06:03:37.073133 kernel: scsi host1: ahci
Jul 7 06:03:37.075576 kernel: scsi host2: ahci
Jul 7 06:03:37.075786 kernel: scsi host3: ahci
Jul 7 06:03:37.081488 kernel: scsi host4: ahci
Jul 7 06:03:37.082719 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 06:03:37.085160 kernel: scsi host5: ahci
Jul 7 06:03:37.085332 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 7 06:03:37.085344 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 7 06:03:37.090763 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 7 06:03:37.090785 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 7 06:03:37.090796 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 7 06:03:37.090811 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 7 06:03:37.112097 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 06:03:37.136379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:03:37.148421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:03:37.157613 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 06:03:37.159044 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 06:03:37.164299 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:03:37.201083 disk-uuid[632]: Primary Header is updated.
Jul 7 06:03:37.201083 disk-uuid[632]: Secondary Entries is updated.
Jul 7 06:03:37.201083 disk-uuid[632]: Secondary Header is updated.
Jul 7 06:03:37.205081 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:03:37.210082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:03:37.401296 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 7 06:03:37.401368 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 06:03:37.401395 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 7 06:03:37.403098 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 06:03:37.403161 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 7 06:03:37.404092 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 06:03:37.405100 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 7 06:03:37.405113 kernel: ata3.00: applying bridge limits
Jul 7 06:03:37.406090 kernel: ata3.00: configured for UDMA/100
Jul 7 06:03:37.408084 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 7 06:03:37.455100 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 7 06:03:37.455425 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 06:03:37.469091 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 7 06:03:37.823419 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:03:37.824160 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:03:37.826823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:03:37.827999 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:03:37.830179 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:03:37.859054 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:03:38.211094 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:03:38.211351 disk-uuid[633]: The operation has completed successfully.
Jul 7 06:03:38.250402 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:03:38.250546 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:03:38.278798 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:03:38.309710 sh[662]: Success
Jul 7 06:03:38.328112 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:03:38.328161 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:03:38.329687 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 06:03:38.341092 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 7 06:03:38.375624 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:03:38.379172 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:03:38.398606 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:03:38.406038 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 06:03:38.406099 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (674)
Jul 7 06:03:38.408318 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac
Jul 7 06:03:38.408336 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:03:38.408349 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 06:03:38.413938 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:03:38.416168 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:03:38.418355 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:03:38.421134 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:03:38.423692 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:03:38.454740 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707)
Jul 7 06:03:38.454779 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:03:38.454790 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:03:38.456396 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:03:38.465099 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:03:38.465346 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:03:38.467276 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:03:38.628557 ignition[747]: Ignition 2.21.0
Jul 7 06:03:38.628569 ignition[747]: Stage: fetch-offline
Jul 7 06:03:38.628606 ignition[747]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:38.628616 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:38.628703 ignition[747]: parsed url from cmdline: ""
Jul 7 06:03:38.628707 ignition[747]: no config URL provided
Jul 7 06:03:38.628712 ignition[747]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:03:38.628720 ignition[747]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:03:38.628743 ignition[747]: op(1): [started] loading QEMU firmware config module
Jul 7 06:03:38.628748 ignition[747]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 06:03:38.647848 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:03:38.651630 ignition[747]: op(1): [finished] loading QEMU firmware config module
Jul 7 06:03:38.656675 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:03:38.697293 ignition[747]: parsing config with SHA512: 4ef29439ee406671057bc33ec8c241fbc8dd798258fdd35676b6b20e70eb5d8c42f34685df2f2005d977d225175fcd767c9c0e135bb14c136ec747b0bd332309
Jul 7 06:03:38.707349 unknown[747]: fetched base config from "system"
Jul 7 06:03:38.707519 unknown[747]: fetched user config from "qemu"
Jul 7 06:03:38.707874 ignition[747]: fetch-offline: fetch-offline passed
Jul 7 06:03:38.707941 ignition[747]: Ignition finished successfully
Jul 7 06:03:38.711299 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:03:38.718926 systemd-networkd[853]: lo: Link UP
Jul 7 06:03:38.718938 systemd-networkd[853]: lo: Gained carrier
Jul 7 06:03:38.720670 systemd-networkd[853]: Enumeration completed
Jul 7 06:03:38.720764 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:03:38.721097 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:03:38.721101 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:03:38.722370 systemd[1]: Reached target network.target - Network.
Jul 7 06:03:38.723103 systemd-networkd[853]: eth0: Link UP
Jul 7 06:03:38.723108 systemd-networkd[853]: eth0: Gained carrier
Jul 7 06:03:38.723116 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:03:38.723457 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 06:03:38.724551 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:03:38.747172 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:03:38.767468 ignition[857]: Ignition 2.21.0
Jul 7 06:03:38.767484 ignition[857]: Stage: kargs
Jul 7 06:03:38.767643 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:38.767655 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:38.768400 ignition[857]: kargs: kargs passed
Jul 7 06:03:38.768446 ignition[857]: Ignition finished successfully
Jul 7 06:03:38.773861 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:03:38.777454 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:03:38.822678 ignition[866]: Ignition 2.21.0
Jul 7 06:03:38.822693 ignition[866]: Stage: disks
Jul 7 06:03:38.822845 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:38.822856 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:38.825593 ignition[866]: disks: disks passed
Jul 7 06:03:38.825658 ignition[866]: Ignition finished successfully
Jul 7 06:03:38.829144 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:03:38.829531 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:03:38.832779 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:03:38.835324 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:03:38.837798 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:03:38.839985 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:03:38.843345 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:03:38.876583 systemd-fsck[876]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 06:03:38.884186 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:03:38.888959 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:03:39.021104 kernel: EXT4-fs (vda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none.
Jul 7 06:03:39.021846 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:03:39.022638 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:03:39.026302 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:03:39.028175 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:03:39.029540 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:03:39.029589 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:03:39.029617 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:03:39.040431 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:03:39.042437 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:03:39.047428 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884)
Jul 7 06:03:39.049746 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:03:39.049792 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:03:39.049805 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:03:39.054925 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:03:39.083733 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:03:39.089886 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:03:39.095309 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:03:39.100395 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:03:39.205393 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:03:39.209116 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:03:39.211373 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:03:39.238091 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:03:39.259607 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:03:39.285898 ignition[999]: INFO : Ignition 2.21.0
Jul 7 06:03:39.285898 ignition[999]: INFO : Stage: mount
Jul 7 06:03:39.287902 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:39.287902 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:39.291675 ignition[999]: INFO : mount: mount passed
Jul 7 06:03:39.294311 ignition[999]: INFO : Ignition finished successfully
Jul 7 06:03:39.301693 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:03:39.305301 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:03:39.405574 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:03:39.408236 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:03:39.440839 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011)
Jul 7 06:03:39.440881 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:03:39.440901 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:03:39.442528 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:03:39.446931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:03:39.610140 ignition[1028]: INFO : Ignition 2.21.0
Jul 7 06:03:39.610140 ignition[1028]: INFO : Stage: files
Jul 7 06:03:39.610140 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:39.610140 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:39.614527 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:03:39.614527 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:03:39.614527 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:03:39.618740 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:03:39.618740 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:03:39.618740 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:03:39.618740 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 06:03:39.618740 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 7 06:03:39.615712 unknown[1028]: wrote ssh authorized keys file for user: core
Jul 7 06:03:39.656883 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:03:39.771901 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 06:03:39.773923 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 06:03:39.773923 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 7 06:03:40.082364 systemd-networkd[853]: eth0: Gained IPv6LL
Jul 7 06:03:40.267954 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 06:03:40.383112 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 06:03:40.383112 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:03:40.387209 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:03:40.387209 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:03:40.387209 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:03:40.387209 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:03:40.387209 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:03:40.387209 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:03:40.387209 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:03:40.580771 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:03:40.583026 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:03:40.583026 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:03:40.587419 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:03:40.587419 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:03:40.587419 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 7 06:03:41.101789 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 06:03:41.694900 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:03:41.694900 ignition[1028]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 06:03:41.698593 ignition[1028]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:03:41.705434 ignition[1028]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:03:41.705434 ignition[1028]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 06:03:41.705434 ignition[1028]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 7 06:03:41.705434 ignition[1028]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:03:41.712382 ignition[1028]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:03:41.712382 ignition[1028]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 7 06:03:41.712382 ignition[1028]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:03:41.726474 ignition[1028]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:03:41.730409 ignition[1028]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:03:41.732132 ignition[1028]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:03:41.732132 ignition[1028]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:03:41.732132 ignition[1028]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:03:41.732132 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:03:41.732132 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:03:41.732132 ignition[1028]: INFO : files: files passed
Jul 7 06:03:41.732132 ignition[1028]: INFO : Ignition finished successfully
Jul 7 06:03:41.733983 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:03:41.736865 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:03:41.740236 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:03:41.767115 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:03:41.767268 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:03:41.770294 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 06:03:41.773545 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:03:41.773545 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:03:41.777129 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:03:41.779787 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:03:41.780106 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:03:41.784281 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:03:41.834355 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:03:41.834499 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:03:41.837005 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:03:41.837750 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:03:41.840563 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:03:41.842859 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:03:41.867572 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:03:41.869448 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:03:41.902577 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:03:41.902755 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:03:41.904921 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:03:41.905438 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:03:41.905583 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:03:41.911895 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:03:41.913131 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:03:41.914174 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:03:41.914643 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:03:41.914971 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:03:41.915522 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:03:41.915854 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:03:41.916361 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:03:41.925560 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:03:41.925823 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:03:41.926394 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:03:41.926654 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:03:41.926772 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:03:41.935393 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:03:41.935543 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:03:41.935819 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:03:41.940465 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:03:41.940585 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:03:41.940699 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:03:41.943342 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:03:41.943465 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:03:41.945682 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:03:41.947613 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:03:41.953171 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:03:41.953340 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:03:41.955807 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:03:41.956146 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:03:41.956253 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:03:41.959033 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:03:41.959160 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:03:41.959562 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:03:41.959675 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:03:41.962411 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:03:41.962529 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:03:41.966166 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:03:41.967274 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:03:41.967390 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:03:41.970008 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:03:41.971962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:03:41.972121 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:03:41.974133 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:03:41.974249 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:03:41.982443 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:03:41.984228 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:03:42.001690 ignition[1084]: INFO : Ignition 2.21.0
Jul 7 06:03:42.001690 ignition[1084]: INFO : Stage: umount
Jul 7 06:03:42.003963 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:42.003963 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:42.003963 ignition[1084]: INFO : umount: umount passed
Jul 7 06:03:42.003963 ignition[1084]: INFO : Ignition finished successfully
Jul 7 06:03:42.006531 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:03:42.007264 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:03:42.007382 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:03:42.008367 systemd[1]: Stopped target network.target - Network.
Jul 7 06:03:42.010841 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:03:42.010905 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:03:42.011965 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:03:42.012020 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:03:42.014986 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:03:42.015039 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:03:42.016109 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:03:42.016162 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:03:42.016936 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:03:42.017537 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:03:42.025352 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:03:42.025570 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:03:42.030456 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:03:42.030780 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:03:42.030845 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:03:42.038983 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:03:42.039335 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:03:42.039495 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:03:42.043713 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:03:42.045040 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:03:42.047239 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:03:42.047303 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:03:42.050424 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:03:42.050510 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:03:42.050568 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:03:42.053523 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:03:42.053579 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:03:42.057522 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:03:42.058514 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:03:42.059637 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:03:42.062832 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:03:42.081873 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:03:42.082232 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:03:42.084408 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:03:42.084467 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:03:42.086497 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:03:42.086539 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:03:42.088492 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:03:42.088544 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:03:42.090724 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:03:42.090781 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:03:42.093944 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:03:42.093993 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:03:42.098837 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:03:42.100025 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:03:42.100092 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:03:42.104185 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:03:42.104238 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:03:42.108993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:03:42.109049 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:03:42.112753 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:03:42.120224 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:03:42.127884 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:03:42.128005 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:03:42.221393 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:03:42.221572 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:03:42.222759 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:03:42.224218 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:03:42.224275 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:03:42.225445 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:03:42.248114 systemd[1]: Switching root.
Jul 7 06:03:42.290631 systemd-journald[220]: Journal stopped
Jul 7 06:03:43.660877 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:03:43.660938 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:03:43.660955 kernel: SELinux: policy capability open_perms=1
Jul 7 06:03:43.660967 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:03:43.660978 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:03:43.660989 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:03:43.661001 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:03:43.661012 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:03:43.661023 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:03:43.661039 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:03:43.661086 kernel: audit: type=1403 audit(1751868222.837:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:03:43.661112 systemd[1]: Successfully loaded SELinux policy in 47.087ms.
Jul 7 06:03:43.661126 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.211ms.
Jul 7 06:03:43.661139 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:03:43.661151 systemd[1]: Detected virtualization kvm.
Jul 7 06:03:43.661163 systemd[1]: Detected architecture x86-64.
Jul 7 06:03:43.661176 systemd[1]: Detected first boot.
Jul 7 06:03:43.661188 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:03:43.661204 zram_generator::config[1130]: No configuration found.
Jul 7 06:03:43.661204 kernel: Guest personality initialized and is inactive
Jul 7 06:03:43.661215 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 06:03:43.661226 kernel: Initialized host personality
Jul 7 06:03:43.661238 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:03:43.661250 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:03:43.661263 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:03:43.661275 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:03:43.661293 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:03:43.661305 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:03:43.661326 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:03:43.661338 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:03:43.661350 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:03:43.661363 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:03:43.661387 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:03:43.661400 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:03:43.661413 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:03:43.661425 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:03:43.661437 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:03:43.661452 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:03:43.661465 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:03:43.661478 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:03:43.661490 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:03:43.661503 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:03:43.661515 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:03:43.661528 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:03:43.661546 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:03:43.661559 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:03:43.661571 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:03:43.661583 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:03:43.661595 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:03:43.661608 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:03:43.661620 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:03:43.661632 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:03:43.661644 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:03:43.661657 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:03:43.661677 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:03:43.661690 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:03:43.661707 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:03:43.661720 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:03:43.661732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:03:43.661744 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:03:43.661756 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:03:43.661771 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:03:43.661783 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:03:43.661798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:03:43.661810 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:03:43.661822 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:03:43.661834 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:03:43.661847 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:03:43.661859 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:03:43.661871 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:03:43.661884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:03:43.661898 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:03:43.661911 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:03:43.661923 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:03:43.661935 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:03:43.661947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:03:43.661960 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:03:43.661972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:03:43.661984 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:03:43.662002 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:03:43.662014 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:03:43.662027 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:03:43.662039 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:03:43.662052 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:03:43.662083 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:03:43.662096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:03:43.662109 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:03:43.662121 kernel: loop: module loaded
Jul 7 06:03:43.662135 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:03:43.662149 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:03:43.662161 kernel: fuse: init (API version 7.41)
Jul 7 06:03:43.662178 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:03:43.662191 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:03:43.662206 systemd[1]: Stopped verity-setup.service.
Jul 7 06:03:43.662218 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:03:43.662231 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:03:43.662243 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:03:43.662255 kernel: ACPI: bus type drm_connector registered
Jul 7 06:03:43.662267 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:03:43.662279 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:03:43.662291 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:03:43.662303 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:03:43.662321 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:03:43.662333 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:03:43.662345 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:03:43.662358 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:03:43.662376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:03:43.662416 systemd-journald[1205]: Collecting audit messages is disabled.
Jul 7 06:03:43.662438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:03:43.662453 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:03:43.662466 systemd-journald[1205]: Journal started
Jul 7 06:03:43.662488 systemd-journald[1205]: Runtime Journal (/run/log/journal/29ceff16b72a4dc89ad97272ba5efc38) is 6M, max 48.6M, 42.5M free.
Jul 7 06:03:43.394834 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:03:43.417228 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 06:03:43.417702 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:03:43.663511 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:03:43.666139 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:03:43.667458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:03:43.667676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:03:43.669204 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:03:43.669449 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:03:43.670942 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:03:43.671273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:03:43.672703 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:03:43.674153 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:03:43.675736 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:03:43.677317 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:03:43.692711 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:03:43.695595 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:03:43.697976 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:03:43.699182 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:03:43.699217 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:03:43.701261 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:03:43.712474 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:03:43.714348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:03:43.716029 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:03:43.718845 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:03:43.720410 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:03:43.722728 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:03:43.724450 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:03:43.726360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:03:43.730513 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:03:43.739297 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:03:43.742365 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:03:43.744646 systemd-journald[1205]: Time spent on flushing to /var/log/journal/29ceff16b72a4dc89ad97272ba5efc38 is 17.595ms for 979 entries.
Jul 7 06:03:43.744646 systemd-journald[1205]: System Journal (/var/log/journal/29ceff16b72a4dc89ad97272ba5efc38) is 8M, max 195.6M, 187.6M free.
Jul 7 06:03:43.771025 systemd-journald[1205]: Received client request to flush runtime journal.
Jul 7 06:03:43.771080 kernel: loop0: detected capacity change from 0 to 146240
Jul 7 06:03:43.744247 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:03:43.750672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:03:43.766750 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:03:43.768673 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:03:43.779363 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:03:43.781894 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:03:43.786245 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:03:43.794692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:03:43.805090 kernel: loop1: detected capacity change from 0 to 221472
Jul 7 06:03:43.814996 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:03:43.820098 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:03:43.821903 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:03:43.841097 kernel: loop2: detected capacity change from 0 to 113872
Jul 7 06:03:43.860040 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jul 7 06:03:43.860057 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jul 7 06:03:43.866534 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:03:43.872218 kernel: loop3: detected capacity change from 0 to 146240
Jul 7 06:03:43.887280 kernel: loop4: detected capacity change from 0 to 221472
Jul 7 06:03:43.898093 kernel: loop5: detected capacity change from 0 to 113872
Jul 7 06:03:43.904974 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 7 06:03:43.906138 (sd-merge)[1272]: Merged extensions into '/usr'.
Jul 7 06:03:43.911994 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:03:43.912011 systemd[1]: Reloading...
Jul 7 06:03:43.972100 zram_generator::config[1298]: No configuration found.
Jul 7 06:03:44.085820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:03:44.095478 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:03:44.167233 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:03:44.167497 systemd[1]: Reloading finished in 254 ms.
Jul 7 06:03:44.198655 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:03:44.200385 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:03:44.215727 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:03:44.218045 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:03:44.239936 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:03:44.239953 systemd[1]: Reloading...
Jul 7 06:03:44.249783 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:03:44.249827 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:03:44.250151 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:03:44.250433 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:03:44.251372 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:03:44.252428 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jul 7 06:03:44.252505 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jul 7 06:03:44.285742 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:03:44.285762 systemd-tmpfiles[1336]: Skipping /boot
Jul 7 06:03:44.291085 zram_generator::config[1363]: No configuration found.
Jul 7 06:03:44.303269 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:03:44.303291 systemd-tmpfiles[1336]: Skipping /boot Jul 7 06:03:44.403148 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:03:44.487365 systemd[1]: Reloading finished in 247 ms. Jul 7 06:03:44.510876 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 06:03:44.529655 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:03:44.540552 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:03:44.543643 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 06:03:44.553434 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 06:03:44.557334 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:03:44.561161 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:03:44.565103 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 06:03:44.569277 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:03:44.569836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:03:44.573575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:03:44.578466 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:03:44.584504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 7 06:03:44.585777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:03:44.585880 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:03:44.585976 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:03:44.587174 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:03:44.587929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:03:44.590564 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:03:44.590908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:03:44.593833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:03:44.594241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:03:44.596287 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 06:03:44.601510 augenrules[1430]: No rules Jul 7 06:03:44.603154 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:03:44.603558 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:03:44.612987 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:03:44.617283 systemd-udevd[1407]: Using default interface naming scheme 'v255'. Jul 7 06:03:44.619771 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:03:44.621887 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 7 06:03:44.623280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:03:44.625792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:03:44.632591 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:03:44.635860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:03:44.640170 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:03:44.641699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:03:44.641847 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:03:44.644292 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:03:44.648899 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 06:03:44.651776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:03:44.655697 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:03:44.663296 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 06:03:44.665238 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:03:44.665501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:03:44.667170 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:03:44.667428 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:03:44.669615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 7 06:03:44.669826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:03:44.671795 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:03:44.672017 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:03:44.674569 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:03:44.677649 augenrules[1441]: /sbin/augenrules: No change Jul 7 06:03:44.686135 systemd[1]: Finished ensure-sysext.service. Jul 7 06:03:44.698222 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:03:44.700238 augenrules[1498]: No rules Jul 7 06:03:44.699327 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:03:44.699396 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:03:44.701573 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 06:03:44.702688 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:03:44.703024 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:03:44.704362 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:03:44.729754 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:03:44.780529 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 06:03:44.830849 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 06:03:44.834221 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jul 7 06:03:44.850123 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 06:03:44.858856 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:03:44.860117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 06:03:44.863751 kernel: ACPI: button: Power Button [PWRF] Jul 7 06:03:44.871633 systemd-networkd[1500]: lo: Link UP Jul 7 06:03:44.871985 systemd-networkd[1500]: lo: Gained carrier Jul 7 06:03:44.875720 systemd-networkd[1500]: Enumeration completed Jul 7 06:03:44.875822 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:03:44.876441 systemd-networkd[1500]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:03:44.876446 systemd-networkd[1500]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:03:44.878646 systemd-networkd[1500]: eth0: Link UP Jul 7 06:03:44.878795 systemd-networkd[1500]: eth0: Gained carrier Jul 7 06:03:44.878809 systemd-networkd[1500]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:03:44.883004 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 06:03:44.883452 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 06:03:44.882947 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 06:03:44.885646 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 06:03:45.059237 systemd-networkd[1500]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 06:03:45.085226 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jul 7 06:03:45.088522 systemd-resolved[1405]: Positive Trust Anchors: Jul 7 06:03:45.088543 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:03:45.088577 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:03:45.097480 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 06:03:45.098980 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 06:03:46.075301 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 06:03:46.075368 systemd-timesyncd[1504]: Initial clock synchronization to Mon 2025-07-07 06:03:46.075199 UTC. Jul 7 06:03:46.076013 systemd-resolved[1405]: Defaulting to hostname 'linux'. Jul 7 06:03:46.081771 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:03:46.083007 systemd[1]: Reached target network.target - Network. Jul 7 06:03:46.083940 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:03:46.085156 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:03:46.086370 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 06:03:46.087660 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 06:03:46.088915 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Jul 7 06:03:46.090287 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:03:46.091515 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:03:46.092778 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:03:46.094015 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:03:46.094046 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:03:46.094980 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:03:46.097139 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:03:46.099999 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:03:46.106143 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 06:03:46.107685 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 06:03:46.109021 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 06:03:46.158444 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:03:46.161729 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 06:03:46.166104 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:03:46.182048 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:03:46.183320 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:03:46.184424 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:03:46.184518 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:03:46.186818 systemd[1]: Starting containerd.service - containerd container runtime... 
Jul 7 06:03:46.189606 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:03:46.192279 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 06:03:46.197695 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:03:46.207657 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:03:46.208771 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:03:46.210117 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 06:03:46.215372 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:03:46.220847 jq[1552]: false Jul 7 06:03:46.226279 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 06:03:46.228689 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:03:46.232112 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 06:03:46.238956 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing passwd entry cache Jul 7 06:03:46.238968 oslogin_cache_refresh[1554]: Refreshing passwd entry cache Jul 7 06:03:46.241273 kernel: kvm_amd: TSC scaling supported Jul 7 06:03:46.241313 kernel: kvm_amd: Nested Virtualization enabled Jul 7 06:03:46.241330 kernel: kvm_amd: Nested Paging enabled Jul 7 06:03:46.242636 kernel: kvm_amd: LBR virtualization supported Jul 7 06:03:46.246173 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 7 06:03:46.246258 kernel: kvm_amd: Virtual GIF supported Jul 7 06:03:46.248393 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting users, quitting Jul 7 06:03:46.248393 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 7 06:03:46.248393 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing group entry cache Jul 7 06:03:46.247445 oslogin_cache_refresh[1554]: Failure getting users, quitting Jul 7 06:03:46.247485 oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 06:03:46.247558 oslogin_cache_refresh[1554]: Refreshing group entry cache Jul 7 06:03:46.272969 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:03:46.328205 extend-filesystems[1553]: Found /dev/vda6 Jul 7 06:03:46.329329 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting groups, quitting Jul 7 06:03:46.329329 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:03:46.328709 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 06:03:46.328301 oslogin_cache_refresh[1554]: Failure getting groups, quitting Jul 7 06:03:46.328322 oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:03:46.330509 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:03:46.335341 extend-filesystems[1553]: Found /dev/vda9 Jul 7 06:03:46.337976 extend-filesystems[1553]: Checking size of /dev/vda9 Jul 7 06:03:46.345681 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 06:03:46.352097 extend-filesystems[1553]: Resized partition /dev/vda9 Jul 7 06:03:46.351317 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:03:46.357011 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jul 7 06:03:46.357193 extend-filesystems[1576]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 06:03:46.358743 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:03:46.358996 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:03:46.361113 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 06:03:46.361427 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 06:03:46.361686 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 06:03:46.363357 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 06:03:46.364278 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 06:03:46.367374 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 06:03:46.368171 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 06:03:46.377168 jq[1575]: true Jul 7 06:03:46.393934 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:03:46.399469 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 06:03:46.424686 kernel: EDAC MC: Ver: 3.0.0 Jul 7 06:03:46.424723 update_engine[1569]: I20250707 06:03:46.414198 1569 main.cc:92] Flatcar Update Engine starting Jul 7 06:03:46.425063 jq[1587]: true Jul 7 06:03:46.425293 tar[1580]: linux-amd64/helm Jul 7 06:03:46.401017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:03:46.425653 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 06:03:46.425653 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 06:03:46.425653 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jul 7 06:03:46.430265 extend-filesystems[1553]: Resized filesystem in /dev/vda9 Jul 7 06:03:46.435193 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:03:46.436203 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 06:03:46.565137 systemd-logind[1567]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 06:03:46.565193 systemd-logind[1567]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 06:03:46.565753 systemd-logind[1567]: New seat seat0. Jul 7 06:03:46.568843 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 06:03:46.572177 dbus-daemon[1550]: [system] SELinux support is enabled Jul 7 06:03:46.572717 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:03:46.582704 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:03:46.582738 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 06:03:46.585147 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:03:46.585179 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 06:03:46.591159 update_engine[1569]: I20250707 06:03:46.591051 1569 update_check_scheduler.cc:74] Next update check in 5m54s Jul 7 06:03:46.595837 dbus-daemon[1550]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 06:03:46.596070 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:03:46.600002 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 7 06:03:46.603592 bash[1621]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:03:46.607295 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:03:46.609412 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 06:03:46.679883 locksmithd[1622]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:03:46.756658 containerd[1583]: time="2025-07-07T06:03:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 06:03:46.759506 containerd[1583]: time="2025-07-07T06:03:46.759461756Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 06:03:46.778148 containerd[1583]: time="2025-07-07T06:03:46.778111575Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.03µs" Jul 7 06:03:46.778253 containerd[1583]: time="2025-07-07T06:03:46.778235327Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 06:03:46.778312 containerd[1583]: time="2025-07-07T06:03:46.778299437Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 06:03:46.778573 containerd[1583]: time="2025-07-07T06:03:46.778554947Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 06:03:46.778655 containerd[1583]: time="2025-07-07T06:03:46.778639385Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 06:03:46.778736 containerd[1583]: time="2025-07-07T06:03:46.778721759Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 06:03:46.778864 
containerd[1583]: time="2025-07-07T06:03:46.778845682Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 06:03:46.778916 containerd[1583]: time="2025-07-07T06:03:46.778903370Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 06:03:46.779271 containerd[1583]: time="2025-07-07T06:03:46.779245382Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 06:03:46.780104 containerd[1583]: time="2025-07-07T06:03:46.779319371Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 06:03:46.780104 containerd[1583]: time="2025-07-07T06:03:46.779344498Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 06:03:46.780104 containerd[1583]: time="2025-07-07T06:03:46.779352393Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 06:03:46.780104 containerd[1583]: time="2025-07-07T06:03:46.779840448Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 06:03:46.780280 containerd[1583]: time="2025-07-07T06:03:46.780261037Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 06:03:46.780367 containerd[1583]: time="2025-07-07T06:03:46.780347289Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 7 06:03:46.780428 containerd[1583]: time="2025-07-07T06:03:46.780414655Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 06:03:46.780515 containerd[1583]: time="2025-07-07T06:03:46.780501298Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 06:03:46.780995 containerd[1583]: time="2025-07-07T06:03:46.780974075Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 06:03:46.781139 containerd[1583]: time="2025-07-07T06:03:46.781123034Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.789886024Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.790104013Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.790132497Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.790164707Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.790183653Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.790206826Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.790235700Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 06:03:46.816170 containerd[1583]: 
time="2025-07-07T06:03:46.790268272Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.811032076Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.811143705Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.811282786Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.811414413Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.812181783Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 06:03:46.816170 containerd[1583]: time="2025-07-07T06:03:46.812212561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.812275629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.812296558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.812335431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.812381868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.812417415Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.812451739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.812581393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.812608724Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.812647967Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.813009165Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.813038390Z" level=info msg="Start snapshots syncer"
Jul 7 06:03:46.816835 containerd[1583]: time="2025-07-07T06:03:46.813175958Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 06:03:46.817303 containerd[1583]: time="2025-07-07T06:03:46.814195060Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 7 06:03:46.817303 containerd[1583]: time="2025-07-07T06:03:46.814566627Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 7 06:03:46.817634 containerd[1583]: time="2025-07-07T06:03:46.814939807Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 7 06:03:46.817634 containerd[1583]: time="2025-07-07T06:03:46.815217849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 7 06:03:46.817634 containerd[1583]: time="2025-07-07T06:03:46.815449183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 7 06:03:46.819306 containerd[1583]: time="2025-07-07T06:03:46.819233092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 7 06:03:46.820186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:03:46.821404 containerd[1583]: time="2025-07-07T06:03:46.821318614Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 7 06:03:46.821811 containerd[1583]: time="2025-07-07T06:03:46.821791592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 7 06:03:46.821933 containerd[1583]: time="2025-07-07T06:03:46.821910625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 7 06:03:46.822163 containerd[1583]: time="2025-07-07T06:03:46.822124426Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 7 06:03:46.822612 containerd[1583]: time="2025-07-07T06:03:46.822590761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 7 06:03:46.822697 containerd[1583]: time="2025-07-07T06:03:46.822679417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 7 06:03:46.822838 containerd[1583]: time="2025-07-07T06:03:46.822808690Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 7 06:03:46.823190 containerd[1583]: time="2025-07-07T06:03:46.823145532Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:03:46.823443 containerd[1583]: time="2025-07-07T06:03:46.823310982Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:03:46.823443 containerd[1583]: time="2025-07-07T06:03:46.823377587Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:03:46.823662 containerd[1583]: time="2025-07-07T06:03:46.823588042Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:03:46.823744 containerd[1583]: time="2025-07-07T06:03:46.823629800Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 7 06:03:46.823912 containerd[1583]: time="2025-07-07T06:03:46.823836859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 7 06:03:46.823912 containerd[1583]: time="2025-07-07T06:03:46.823864691Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 7 06:03:46.824188 containerd[1583]: time="2025-07-07T06:03:46.824135599Z" level=info msg="runtime interface created"
Jul 7 06:03:46.824310 containerd[1583]: time="2025-07-07T06:03:46.824292884Z" level=info msg="created NRI interface"
Jul 7 06:03:46.824479 containerd[1583]: time="2025-07-07T06:03:46.824418510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 7 06:03:46.824625 containerd[1583]: time="2025-07-07T06:03:46.824550257Z" level=info msg="Connect containerd service"
Jul 7 06:03:46.824738 containerd[1583]: time="2025-07-07T06:03:46.824710778Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 06:03:46.827098 containerd[1583]: time="2025-07-07T06:03:46.826482131Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 06:03:47.020256 containerd[1583]: time="2025-07-07T06:03:47.020116092Z" level=info msg="Start subscribing containerd event"
Jul 7 06:03:47.020256 containerd[1583]: time="2025-07-07T06:03:47.020193518Z" level=info msg="Start recovering state"
Jul 7 06:03:47.020418 containerd[1583]: time="2025-07-07T06:03:47.020355922Z" level=info msg="Start event monitor"
Jul 7 06:03:47.020418 containerd[1583]: time="2025-07-07T06:03:47.020376070Z" level=info msg="Start cni network conf syncer for default"
Jul 7 06:03:47.020418 containerd[1583]: time="2025-07-07T06:03:47.020393132Z" level=info msg="Start streaming server"
Jul 7 06:03:47.020418 containerd[1583]: time="2025-07-07T06:03:47.020407710Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 7 06:03:47.020418 containerd[1583]: time="2025-07-07T06:03:47.020418700Z" level=info msg="runtime interface starting up..."
Jul 7 06:03:47.020563 containerd[1583]: time="2025-07-07T06:03:47.020431564Z" level=info msg="starting plugins..."
Jul 7 06:03:47.020563 containerd[1583]: time="2025-07-07T06:03:47.020451321Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 7 06:03:47.021128 containerd[1583]: time="2025-07-07T06:03:47.021057188Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 06:03:47.021304 containerd[1583]: time="2025-07-07T06:03:47.021279405Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 06:03:47.024926 containerd[1583]: time="2025-07-07T06:03:47.024898976Z" level=info msg="containerd successfully booted in 0.272012s"
Jul 7 06:03:47.025221 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 06:03:47.074208 systemd-networkd[1500]: eth0: Gained IPv6LL
Jul 7 06:03:47.078472 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 06:03:47.080188 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 06:03:47.083837 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 7 06:03:47.089229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:03:47.197196 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 06:03:47.235967 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 06:03:47.238681 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 7 06:03:47.239047 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 7 06:03:47.242746 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 06:03:47.254133 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 06:03:47.271142 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 06:03:47.274255 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 06:03:47.276747 tar[1580]: linux-amd64/LICENSE
Jul 7 06:03:47.276865 tar[1580]: linux-amd64/README.md
Jul 7 06:03:47.295729 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 06:03:47.298793 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 06:03:47.299122 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 06:03:47.302345 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 06:03:47.342111 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 06:03:47.345540 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 06:03:47.347955 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 06:03:47.349278 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 06:03:48.565185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:03:48.567104 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 06:03:48.568478 systemd[1]: Startup finished in 3.172s (kernel) + 7.213s (initrd) + 4.802s (userspace) = 15.187s.
Jul 7 06:03:48.598572 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:03:49.020887 kubelet[1692]: E0707 06:03:49.020735 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:03:49.024926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:03:49.025158 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:03:49.025621 systemd[1]: kubelet.service: Consumed 1.717s CPU time, 263.7M memory peak.
Jul 7 06:03:49.960443 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 06:03:49.962062 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:34840.service - OpenSSH per-connection server daemon (10.0.0.1:34840).
Jul 7 06:03:50.038910 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 34840 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:03:50.040814 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:50.048506 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 06:03:50.049834 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 06:03:50.057215 systemd-logind[1567]: New session 1 of user core.
Jul 7 06:03:50.075450 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 06:03:50.079128 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 06:03:50.106937 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 06:03:50.109896 systemd-logind[1567]: New session c1 of user core.
Jul 7 06:03:50.269485 systemd[1709]: Queued start job for default target default.target.
Jul 7 06:03:50.293433 systemd[1709]: Created slice app.slice - User Application Slice.
Jul 7 06:03:50.293461 systemd[1709]: Reached target paths.target - Paths.
Jul 7 06:03:50.293506 systemd[1709]: Reached target timers.target - Timers.
Jul 7 06:03:50.295292 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 06:03:50.308332 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 06:03:50.308460 systemd[1709]: Reached target sockets.target - Sockets.
Jul 7 06:03:50.308508 systemd[1709]: Reached target basic.target - Basic System.
Jul 7 06:03:50.308550 systemd[1709]: Reached target default.target - Main User Target.
Jul 7 06:03:50.308581 systemd[1709]: Startup finished in 190ms.
Jul 7 06:03:50.309268 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 06:03:50.311235 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 06:03:50.377254 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:34842.service - OpenSSH per-connection server daemon (10.0.0.1:34842).
Jul 7 06:03:50.424465 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 34842 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:03:50.425972 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:50.431336 systemd-logind[1567]: New session 2 of user core.
Jul 7 06:03:50.447240 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 06:03:50.502338 sshd[1722]: Connection closed by 10.0.0.1 port 34842
Jul 7 06:03:50.502720 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:50.516681 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:34842.service: Deactivated successfully.
Jul 7 06:03:50.518553 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 06:03:50.519486 systemd-logind[1567]: Session 2 logged out. Waiting for processes to exit.
Jul 7 06:03:50.522768 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:34844.service - OpenSSH per-connection server daemon (10.0.0.1:34844).
Jul 7 06:03:50.523594 systemd-logind[1567]: Removed session 2.
Jul 7 06:03:50.574651 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 34844 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:03:50.576396 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:50.582139 systemd-logind[1567]: New session 3 of user core.
Jul 7 06:03:50.596284 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 06:03:50.647529 sshd[1730]: Connection closed by 10.0.0.1 port 34844
Jul 7 06:03:50.647973 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:50.657315 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:34844.service: Deactivated successfully.
Jul 7 06:03:50.659338 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 06:03:50.660238 systemd-logind[1567]: Session 3 logged out. Waiting for processes to exit.
Jul 7 06:03:50.663450 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:34846.service - OpenSSH per-connection server daemon (10.0.0.1:34846).
Jul 7 06:03:50.664094 systemd-logind[1567]: Removed session 3.
Jul 7 06:03:50.715036 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 34846 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:03:50.716927 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:50.721709 systemd-logind[1567]: New session 4 of user core.
Jul 7 06:03:50.732230 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 06:03:50.787009 sshd[1738]: Connection closed by 10.0.0.1 port 34846
Jul 7 06:03:50.787248 sshd-session[1736]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:50.798057 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:34846.service: Deactivated successfully.
Jul 7 06:03:50.799867 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 06:03:50.800746 systemd-logind[1567]: Session 4 logged out. Waiting for processes to exit.
Jul 7 06:03:50.803673 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:34852.service - OpenSSH per-connection server daemon (10.0.0.1:34852).
Jul 7 06:03:50.804565 systemd-logind[1567]: Removed session 4.
Jul 7 06:03:50.854989 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 34852 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:03:50.856625 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:50.862299 systemd-logind[1567]: New session 5 of user core.
Jul 7 06:03:50.871269 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 06:03:50.934148 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 06:03:50.934524 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:03:50.953759 sudo[1747]: pam_unix(sudo:session): session closed for user root
Jul 7 06:03:50.955608 sshd[1746]: Connection closed by 10.0.0.1 port 34852
Jul 7 06:03:50.956119 sshd-session[1744]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:50.969878 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:34852.service: Deactivated successfully.
Jul 7 06:03:50.971912 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 06:03:50.972780 systemd-logind[1567]: Session 5 logged out. Waiting for processes to exit.
Jul 7 06:03:50.976013 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:34866.service - OpenSSH per-connection server daemon (10.0.0.1:34866).
Jul 7 06:03:50.976752 systemd-logind[1567]: Removed session 5.
Jul 7 06:03:51.042067 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 34866 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:03:51.043892 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:51.049679 systemd-logind[1567]: New session 6 of user core.
Jul 7 06:03:51.068254 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 06:03:51.125523 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 06:03:51.125949 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:03:51.134763 sudo[1757]: pam_unix(sudo:session): session closed for user root
Jul 7 06:03:51.141724 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 06:03:51.142044 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:03:51.152751 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:03:51.213364 augenrules[1779]: No rules
Jul 7 06:03:51.215345 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:03:51.215673 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:03:51.216984 sudo[1756]: pam_unix(sudo:session): session closed for user root
Jul 7 06:03:51.219258 sshd[1755]: Connection closed by 10.0.0.1 port 34866
Jul 7 06:03:51.219563 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:51.231712 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:34866.service: Deactivated successfully.
Jul 7 06:03:51.233649 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 06:03:51.234476 systemd-logind[1567]: Session 6 logged out. Waiting for processes to exit.
Jul 7 06:03:51.237540 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:34876.service - OpenSSH per-connection server daemon (10.0.0.1:34876).
Jul 7 06:03:51.238171 systemd-logind[1567]: Removed session 6.
Jul 7 06:03:51.285942 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 34876 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:03:51.287995 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:51.294060 systemd-logind[1567]: New session 7 of user core.
Jul 7 06:03:51.308317 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 06:03:51.365108 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 06:03:51.365513 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:03:52.070135 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 06:03:52.094674 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 06:03:52.690640 dockerd[1812]: time="2025-07-07T06:03:52.690547246Z" level=info msg="Starting up"
Jul 7 06:03:52.694487 dockerd[1812]: time="2025-07-07T06:03:52.694429840Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 06:03:53.011531 dockerd[1812]: time="2025-07-07T06:03:53.011371105Z" level=info msg="Loading containers: start."
Jul 7 06:03:53.024123 kernel: Initializing XFRM netlink socket
Jul 7 06:03:53.343950 systemd-networkd[1500]: docker0: Link UP
Jul 7 06:03:53.350279 dockerd[1812]: time="2025-07-07T06:03:53.350221352Z" level=info msg="Loading containers: done."
Jul 7 06:03:53.374604 dockerd[1812]: time="2025-07-07T06:03:53.374535276Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 06:03:53.374812 dockerd[1812]: time="2025-07-07T06:03:53.374629313Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 06:03:53.374812 dockerd[1812]: time="2025-07-07T06:03:53.374760890Z" level=info msg="Initializing buildkit"
Jul 7 06:03:53.408192 dockerd[1812]: time="2025-07-07T06:03:53.408105056Z" level=info msg="Completed buildkit initialization"
Jul 7 06:03:53.412651 dockerd[1812]: time="2025-07-07T06:03:53.412591122Z" level=info msg="Daemon has completed initialization"
Jul 7 06:03:53.412785 dockerd[1812]: time="2025-07-07T06:03:53.412689567Z" level=info msg="API listen on /run/docker.sock"
Jul 7 06:03:53.412990 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 06:03:54.523095 containerd[1583]: time="2025-07-07T06:03:54.523025436Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 7 06:03:55.203239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294229396.mount: Deactivated successfully.
Jul 7 06:03:57.011729 containerd[1583]: time="2025-07-07T06:03:57.011646627Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jul 7 06:03:57.012276 containerd[1583]: time="2025-07-07T06:03:57.012210525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:57.013506 containerd[1583]: time="2025-07-07T06:03:57.013443047Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:57.016661 containerd[1583]: time="2025-07-07T06:03:57.016612774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:57.017732 containerd[1583]: time="2025-07-07T06:03:57.017673995Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.494542831s"
Jul 7 06:03:57.017786 containerd[1583]: time="2025-07-07T06:03:57.017732294Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 7 06:03:57.018794 containerd[1583]: time="2025-07-07T06:03:57.018704057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 7 06:03:58.699493 containerd[1583]: time="2025-07-07T06:03:58.699406615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:58.700584 containerd[1583]: time="2025-07-07T06:03:58.700518211Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 7 06:03:58.702174 containerd[1583]: time="2025-07-07T06:03:58.702111219Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:58.705171 containerd[1583]: time="2025-07-07T06:03:58.705136385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:58.706310 containerd[1583]: time="2025-07-07T06:03:58.706258820Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.687507144s"
Jul 7 06:03:58.706370 containerd[1583]: time="2025-07-07T06:03:58.706312331Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 7 06:03:58.706992 containerd[1583]: time="2025-07-07T06:03:58.706922075Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 7 06:03:59.275658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:03:59.277512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:03:59.846910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:03:59.850864 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:04:00.537480 containerd[1583]: time="2025-07-07T06:04:00.537392638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:00.538446 containerd[1583]: time="2025-07-07T06:04:00.538134049Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 7 06:04:00.539449 containerd[1583]: time="2025-07-07T06:04:00.539412257Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:00.542733 containerd[1583]: time="2025-07-07T06:04:00.542684005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:00.543819 containerd[1583]: time="2025-07-07T06:04:00.543758491Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.836781263s"
Jul 7 06:04:00.543819 containerd[1583]: time="2025-07-07T06:04:00.543806992Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 7 06:04:00.544445 containerd[1583]: time="2025-07-07T06:04:00.544408921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 7 06:04:00.904366 kubelet[2094]: E0707 06:04:00.904217 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:04:00.910471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:04:00.910730 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:04:00.911288 systemd[1]: kubelet.service: Consumed 1.199s CPU time, 111.1M memory peak.
Jul 7 06:04:02.245207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2828503791.mount: Deactivated successfully.
Jul 7 06:04:02.735505 containerd[1583]: time="2025-07-07T06:04:02.735324325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:02.736331 containerd[1583]: time="2025-07-07T06:04:02.736275049Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943"
Jul 7 06:04:02.737797 containerd[1583]: time="2025-07-07T06:04:02.737751899Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:02.740430 containerd[1583]: time="2025-07-07T06:04:02.740354231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:02.740961 containerd[1583]: time="2025-07-07T06:04:02.740893573Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.196453644s"
Jul 7 06:04:02.740961 containerd[1583]: time="2025-07-07T06:04:02.740944789Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 7 06:04:02.741929 containerd[1583]: time="2025-07-07T06:04:02.741882579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 06:04:03.448104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138701607.mount: Deactivated successfully.
Jul 7 06:04:05.870722 containerd[1583]: time="2025-07-07T06:04:05.870634170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:05.871550 containerd[1583]: time="2025-07-07T06:04:05.871478043Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 7 06:04:05.872793 containerd[1583]: time="2025-07-07T06:04:05.872737606Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:05.878088 containerd[1583]: time="2025-07-07T06:04:05.878043781Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.136123591s"
Jul 7 06:04:05.878166 containerd[1583]: time="2025-07-07T06:04:05.878100708Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 7 06:04:05.878745 containerd[1583]: time="2025-07-07T06:04:05.878660959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:05.878920 containerd[1583]: time="2025-07-07T06:04:05.878885390Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 06:04:06.617391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372190065.mount: Deactivated successfully.
Jul 7 06:04:06.626070 containerd[1583]: time="2025-07-07T06:04:06.626012664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:04:06.626853 containerd[1583]: time="2025-07-07T06:04:06.626796394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 7 06:04:06.628255 containerd[1583]: time="2025-07-07T06:04:06.628211058Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:04:06.630325 containerd[1583]: time="2025-07-07T06:04:06.630271684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:04:06.630967 containerd[1583]: time="2025-07-07T06:04:06.630920652Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 751.997701ms"
Jul 7 06:04:06.630967 containerd[1583]: time="2025-07-07T06:04:06.630955317Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 06:04:06.631596 containerd[1583]: time="2025-07-07T06:04:06.631563307Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 7 06:04:07.207588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053455914.mount: Deactivated successfully.
Jul 7 06:04:10.263104 containerd[1583]: time="2025-07-07T06:04:10.263010001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:10.264026 containerd[1583]: time="2025-07-07T06:04:10.263987845Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Jul 7 06:04:10.265517 containerd[1583]: time="2025-07-07T06:04:10.265481026Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:10.268888 containerd[1583]: time="2025-07-07T06:04:10.268855136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:10.270274 containerd[1583]: time="2025-07-07T06:04:10.270211862Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.638620772s"
Jul 7 06:04:10.270274 containerd[1583]: time="2025-07-07T06:04:10.270258068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 7 06:04:10.934498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 06:04:10.936735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:04:11.194730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:04:11.209611 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:04:11.256247 kubelet[2252]: E0707 06:04:11.256123 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:04:11.260858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:04:11.261163 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:04:11.261709 systemd[1]: kubelet.service: Consumed 264ms CPU time, 108.9M memory peak.
Jul 7 06:04:12.645506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:04:12.645725 systemd[1]: kubelet.service: Consumed 264ms CPU time, 108.9M memory peak.
Jul 7 06:04:12.648606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:04:12.717015 systemd[1]: Reload requested from client PID 2266 ('systemctl') (unit session-7.scope)...
Jul 7 06:04:12.717035 systemd[1]: Reloading...
Jul 7 06:04:12.819111 zram_generator::config[2313]: No configuration found.
Jul 7 06:04:13.630577 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:04:13.761099 systemd[1]: Reloading finished in 1043 ms. Jul 7 06:04:13.828252 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:04:13.828405 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:04:13.828789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:13.828850 systemd[1]: kubelet.service: Consumed 224ms CPU time, 98.3M memory peak. Jul 7 06:04:13.830956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:04:14.032934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:14.051590 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:04:14.099584 kubelet[2358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:04:14.099584 kubelet[2358]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:04:14.099584 kubelet[2358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:04:14.100133 kubelet[2358]: I0707 06:04:14.099712 2358 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:04:14.424836 kubelet[2358]: I0707 06:04:14.424686 2358 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:04:14.424836 kubelet[2358]: I0707 06:04:14.424719 2358 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:04:14.425108 kubelet[2358]: I0707 06:04:14.424956 2358 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:04:14.452507 kubelet[2358]: E0707 06:04:14.452424 2358 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:14.453824 kubelet[2358]: I0707 06:04:14.453792 2358 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:04:14.460549 kubelet[2358]: I0707 06:04:14.460507 2358 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:04:14.467820 kubelet[2358]: I0707 06:04:14.467742 2358 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:04:14.468478 kubelet[2358]: I0707 06:04:14.468438 2358 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:04:14.468688 kubelet[2358]: I0707 06:04:14.468630 2358 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:04:14.468936 kubelet[2358]: I0707 06:04:14.468670 2358 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Jul 7 06:04:14.468936 kubelet[2358]: I0707 06:04:14.468936 2358 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:04:14.469178 kubelet[2358]: I0707 06:04:14.468949 2358 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:04:14.469178 kubelet[2358]: I0707 06:04:14.469124 2358 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:04:14.471514 kubelet[2358]: I0707 06:04:14.471469 2358 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:04:14.471514 kubelet[2358]: I0707 06:04:14.471512 2358 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:04:14.471611 kubelet[2358]: I0707 06:04:14.471574 2358 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:04:14.471611 kubelet[2358]: I0707 06:04:14.471604 2358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:04:14.477611 kubelet[2358]: I0707 06:04:14.477249 2358 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:04:14.477611 kubelet[2358]: W0707 06:04:14.477512 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Jul 7 06:04:14.477611 kubelet[2358]: W0707 06:04:14.477558 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Jul 7 06:04:14.477730 kubelet[2358]: E0707 06:04:14.477698 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:14.477934 kubelet[2358]: E0707 06:04:14.477908 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:14.478368 kubelet[2358]: I0707 06:04:14.478328 2358 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:04:14.479429 kubelet[2358]: W0707 06:04:14.479396 2358 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:04:14.481687 kubelet[2358]: I0707 06:04:14.481643 2358 server.go:1274] "Started kubelet" Jul 7 06:04:14.482027 kubelet[2358]: I0707 06:04:14.481975 2358 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:04:14.482589 kubelet[2358]: I0707 06:04:14.482142 2358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:04:14.483406 kubelet[2358]: I0707 06:04:14.482943 2358 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:04:14.484304 kubelet[2358]: I0707 06:04:14.484278 2358 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:04:14.487868 kubelet[2358]: I0707 06:04:14.487839 2358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:04:14.488761 kubelet[2358]: I0707 06:04:14.488726 2358 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:04:14.491415 
kubelet[2358]: E0707 06:04:14.491382 2358 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:04:14.537316 kubelet[2358]: I0707 06:04:14.537266 2358 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:04:14.537514 kubelet[2358]: I0707 06:04:14.537472 2358 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:04:14.537702 kubelet[2358]: I0707 06:04:14.537582 2358 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:04:14.538306 kubelet[2358]: E0707 06:04:14.495199 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:14.538306 kubelet[2358]: E0707 06:04:14.511673 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Jul 7 06:04:14.538403 kubelet[2358]: W0707 06:04:14.538298 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Jul 7 06:04:14.538403 kubelet[2358]: E0707 06:04:14.538371 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:14.538483 kubelet[2358]: I0707 06:04:14.538420 2358 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory 
Jul 7 06:04:14.539914 kubelet[2358]: I0707 06:04:14.539881 2358 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:04:14.539914 kubelet[2358]: I0707 06:04:14.539905 2358 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:04:14.541213 kubelet[2358]: E0707 06:04:14.539761 2358 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe2e549a8a00a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:04:14.481588234 +0000 UTC m=+0.425575251,LastTimestamp:2025-07-07 06:04:14.481588234 +0000 UTC m=+0.425575251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:04:14.550127 kubelet[2358]: I0707 06:04:14.550026 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:04:14.552036 kubelet[2358]: I0707 06:04:14.551973 2358 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:04:14.552036 kubelet[2358]: I0707 06:04:14.552018 2358 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:04:14.552153 kubelet[2358]: I0707 06:04:14.552062 2358 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:04:14.552193 kubelet[2358]: E0707 06:04:14.552144 2358 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:04:14.558796 kubelet[2358]: I0707 06:04:14.558386 2358 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:04:14.558796 kubelet[2358]: I0707 06:04:14.558413 2358 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:04:14.558796 kubelet[2358]: I0707 06:04:14.558437 2358 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:04:14.558796 kubelet[2358]: W0707 06:04:14.558688 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Jul 7 06:04:14.559037 kubelet[2358]: E0707 06:04:14.558826 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:14.638240 kubelet[2358]: E0707 06:04:14.638147 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:14.644300 kubelet[2358]: I0707 06:04:14.644230 2358 policy_none.go:49] "None policy: Start" Jul 7 06:04:14.645309 kubelet[2358]: I0707 06:04:14.645290 2358 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:04:14.645358 kubelet[2358]: I0707 
06:04:14.645317 2358 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:04:14.652464 kubelet[2358]: E0707 06:04:14.652421 2358 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:04:14.653242 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:04:14.675793 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:04:14.680049 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:04:14.696312 kubelet[2358]: I0707 06:04:14.696223 2358 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:04:14.696655 kubelet[2358]: I0707 06:04:14.696628 2358 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:04:14.696726 kubelet[2358]: I0707 06:04:14.696654 2358 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:04:14.696930 kubelet[2358]: I0707 06:04:14.696906 2358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:04:14.698731 kubelet[2358]: E0707 06:04:14.698687 2358 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:04:14.739212 kubelet[2358]: E0707 06:04:14.739160 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Jul 7 06:04:14.798612 kubelet[2358]: I0707 06:04:14.798564 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:04:14.798934 kubelet[2358]: E0707 06:04:14.798903 2358 kubelet_node_status.go:95] "Unable to register node with API 
server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jul 7 06:04:14.863838 systemd[1]: Created slice kubepods-burstable-pod89ee74275af4e79e5a42bbbdcb166ad6.slice - libcontainer container kubepods-burstable-pod89ee74275af4e79e5a42bbbdcb166ad6.slice. Jul 7 06:04:14.877314 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 7 06:04:14.883936 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 7 06:04:14.940176 kubelet[2358]: I0707 06:04:14.940004 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:14.940176 kubelet[2358]: I0707 06:04:14.940048 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:14.940176 kubelet[2358]: I0707 06:04:14.940137 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:14.940176 kubelet[2358]: I0707 
06:04:14.940167 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:04:14.940457 kubelet[2358]: I0707 06:04:14.940188 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89ee74275af4e79e5a42bbbdcb166ad6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89ee74275af4e79e5a42bbbdcb166ad6\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:14.940457 kubelet[2358]: I0707 06:04:14.940208 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89ee74275af4e79e5a42bbbdcb166ad6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89ee74275af4e79e5a42bbbdcb166ad6\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:14.940457 kubelet[2358]: I0707 06:04:14.940228 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:14.940457 kubelet[2358]: I0707 06:04:14.940249 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89ee74275af4e79e5a42bbbdcb166ad6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89ee74275af4e79e5a42bbbdcb166ad6\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:14.940457 kubelet[2358]: I0707 06:04:14.940271 2358 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:15.000866 kubelet[2358]: I0707 06:04:15.000824 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:04:15.001381 kubelet[2358]: E0707 06:04:15.001327 2358 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jul 7 06:04:15.140335 kubelet[2358]: E0707 06:04:15.140160 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Jul 7 06:04:15.174768 kubelet[2358]: E0707 06:04:15.174662 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:15.175728 containerd[1583]: time="2025-07-07T06:04:15.175670623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89ee74275af4e79e5a42bbbdcb166ad6,Namespace:kube-system,Attempt:0,}" Jul 7 06:04:15.182265 kubelet[2358]: E0707 06:04:15.182194 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:15.183218 containerd[1583]: time="2025-07-07T06:04:15.183145276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 7 
06:04:15.186752 kubelet[2358]: E0707 06:04:15.186669 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:15.187468 containerd[1583]: time="2025-07-07T06:04:15.187417681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 7 06:04:15.289998 containerd[1583]: time="2025-07-07T06:04:15.289815426Z" level=info msg="connecting to shim 66bca227523764261a377d86bd2480962a865d379d187945a578a0963b5ef34c" address="unix:///run/containerd/s/2bc04135c3539a56256bdc632eeb5a0f8f92700ec9cd7747f839746b5ca24a9a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:04:15.297108 containerd[1583]: time="2025-07-07T06:04:15.296609523Z" level=info msg="connecting to shim bbd35604ef18d5406e8e8e1832d7729aa92c8b5dbc0a6f5f74f36d3ad61c1c58" address="unix:///run/containerd/s/0b4e2481c20e676488760bef49ebd36fef008cb0b0db8a1ff302fb48fecbde21" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:04:15.312229 containerd[1583]: time="2025-07-07T06:04:15.311422853Z" level=info msg="connecting to shim 8d6c9975fd140d5823f111c61f07c1cc6bb58ea073bbd96abbbedd8a7982f63c" address="unix:///run/containerd/s/60a78973e67423d4420d61c736f501c6edec9156ecff498e7b1e597294e5912a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:04:15.341359 systemd[1]: Started cri-containerd-66bca227523764261a377d86bd2480962a865d379d187945a578a0963b5ef34c.scope - libcontainer container 66bca227523764261a377d86bd2480962a865d379d187945a578a0963b5ef34c. Jul 7 06:04:15.368391 systemd[1]: Started cri-containerd-8d6c9975fd140d5823f111c61f07c1cc6bb58ea073bbd96abbbedd8a7982f63c.scope - libcontainer container 8d6c9975fd140d5823f111c61f07c1cc6bb58ea073bbd96abbbedd8a7982f63c. 
Jul 7 06:04:15.374318 systemd[1]: Started cri-containerd-bbd35604ef18d5406e8e8e1832d7729aa92c8b5dbc0a6f5f74f36d3ad61c1c58.scope - libcontainer container bbd35604ef18d5406e8e8e1832d7729aa92c8b5dbc0a6f5f74f36d3ad61c1c58. Jul 7 06:04:15.405666 kubelet[2358]: I0707 06:04:15.405609 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:04:15.406388 kubelet[2358]: E0707 06:04:15.406224 2358 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Jul 7 06:04:15.629116 containerd[1583]: time="2025-07-07T06:04:15.628940439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89ee74275af4e79e5a42bbbdcb166ad6,Namespace:kube-system,Attempt:0,} returns sandbox id \"66bca227523764261a377d86bd2480962a865d379d187945a578a0963b5ef34c\"" Jul 7 06:04:15.630643 containerd[1583]: time="2025-07-07T06:04:15.630590375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbd35604ef18d5406e8e8e1832d7729aa92c8b5dbc0a6f5f74f36d3ad61c1c58\"" Jul 7 06:04:15.631036 kubelet[2358]: E0707 06:04:15.631001 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:15.631510 kubelet[2358]: E0707 06:04:15.631460 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:15.633100 containerd[1583]: time="2025-07-07T06:04:15.633021996Z" level=info msg="CreateContainer within sandbox \"bbd35604ef18d5406e8e8e1832d7729aa92c8b5dbc0a6f5f74f36d3ad61c1c58\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:04:15.633283 containerd[1583]: time="2025-07-07T06:04:15.633238412Z" level=info msg="CreateContainer within sandbox \"66bca227523764261a377d86bd2480962a865d379d187945a578a0963b5ef34c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:04:15.634426 containerd[1583]: time="2025-07-07T06:04:15.634392297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d6c9975fd140d5823f111c61f07c1cc6bb58ea073bbd96abbbedd8a7982f63c\"" Jul 7 06:04:15.634989 kubelet[2358]: E0707 06:04:15.634956 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:15.636691 containerd[1583]: time="2025-07-07T06:04:15.636662937Z" level=info msg="CreateContainer within sandbox \"8d6c9975fd140d5823f111c61f07c1cc6bb58ea073bbd96abbbedd8a7982f63c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:04:15.647196 containerd[1583]: time="2025-07-07T06:04:15.647138470Z" level=info msg="Container 5669d79211559bddce7e8add842d148f2f3704956837771bfc58d461346fddf9: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:04:15.651801 kubelet[2358]: W0707 06:04:15.651690 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Jul 7 06:04:15.651876 kubelet[2358]: E0707 06:04:15.651814 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.55:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:15.652326 containerd[1583]: time="2025-07-07T06:04:15.652288832Z" level=info msg="Container 014044bf0c3357e095396aac1b04bf8753b418b2649c7f5b9552682291c47bc6: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:04:15.658655 containerd[1583]: time="2025-07-07T06:04:15.658563524Z" level=info msg="Container ff5fd0e3464dd47c0692c264b8d1bc2baecb309101ab60d76663b27a820d78ac: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:04:15.666132 containerd[1583]: time="2025-07-07T06:04:15.665939522Z" level=info msg="CreateContainer within sandbox \"bbd35604ef18d5406e8e8e1832d7729aa92c8b5dbc0a6f5f74f36d3ad61c1c58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5669d79211559bddce7e8add842d148f2f3704956837771bfc58d461346fddf9\"" Jul 7 06:04:15.667035 containerd[1583]: time="2025-07-07T06:04:15.666984853Z" level=info msg="StartContainer for \"5669d79211559bddce7e8add842d148f2f3704956837771bfc58d461346fddf9\"" Jul 7 06:04:15.668790 containerd[1583]: time="2025-07-07T06:04:15.668742881Z" level=info msg="connecting to shim 5669d79211559bddce7e8add842d148f2f3704956837771bfc58d461346fddf9" address="unix:///run/containerd/s/0b4e2481c20e676488760bef49ebd36fef008cb0b0db8a1ff302fb48fecbde21" protocol=ttrpc version=3 Jul 7 06:04:15.670002 containerd[1583]: time="2025-07-07T06:04:15.669966076Z" level=info msg="CreateContainer within sandbox \"66bca227523764261a377d86bd2480962a865d379d187945a578a0963b5ef34c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"014044bf0c3357e095396aac1b04bf8753b418b2649c7f5b9552682291c47bc6\"" Jul 7 06:04:15.670636 containerd[1583]: time="2025-07-07T06:04:15.670597621Z" level=info msg="StartContainer for \"014044bf0c3357e095396aac1b04bf8753b418b2649c7f5b9552682291c47bc6\"" Jul 7 06:04:15.672062 containerd[1583]: time="2025-07-07T06:04:15.671996606Z" level=info msg="CreateContainer within sandbox 
\"8d6c9975fd140d5823f111c61f07c1cc6bb58ea073bbd96abbbedd8a7982f63c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ff5fd0e3464dd47c0692c264b8d1bc2baecb309101ab60d76663b27a820d78ac\"" Jul 7 06:04:15.672531 containerd[1583]: time="2025-07-07T06:04:15.672485172Z" level=info msg="StartContainer for \"ff5fd0e3464dd47c0692c264b8d1bc2baecb309101ab60d76663b27a820d78ac\"" Jul 7 06:04:15.673206 containerd[1583]: time="2025-07-07T06:04:15.673169396Z" level=info msg="connecting to shim 014044bf0c3357e095396aac1b04bf8753b418b2649c7f5b9552682291c47bc6" address="unix:///run/containerd/s/2bc04135c3539a56256bdc632eeb5a0f8f92700ec9cd7747f839746b5ca24a9a" protocol=ttrpc version=3 Jul 7 06:04:15.673928 containerd[1583]: time="2025-07-07T06:04:15.673897652Z" level=info msg="connecting to shim ff5fd0e3464dd47c0692c264b8d1bc2baecb309101ab60d76663b27a820d78ac" address="unix:///run/containerd/s/60a78973e67423d4420d61c736f501c6edec9156ecff498e7b1e597294e5912a" protocol=ttrpc version=3 Jul 7 06:04:15.702142 systemd[1]: Started cri-containerd-014044bf0c3357e095396aac1b04bf8753b418b2649c7f5b9552682291c47bc6.scope - libcontainer container 014044bf0c3357e095396aac1b04bf8753b418b2649c7f5b9552682291c47bc6. Jul 7 06:04:15.712257 systemd[1]: Started cri-containerd-ff5fd0e3464dd47c0692c264b8d1bc2baecb309101ab60d76663b27a820d78ac.scope - libcontainer container ff5fd0e3464dd47c0692c264b8d1bc2baecb309101ab60d76663b27a820d78ac. Jul 7 06:04:15.721272 systemd[1]: Started cri-containerd-5669d79211559bddce7e8add842d148f2f3704956837771bfc58d461346fddf9.scope - libcontainer container 5669d79211559bddce7e8add842d148f2f3704956837771bfc58d461346fddf9. 
Jul 7 06:04:15.794309 containerd[1583]: time="2025-07-07T06:04:15.794232729Z" level=info msg="StartContainer for \"014044bf0c3357e095396aac1b04bf8753b418b2649c7f5b9552682291c47bc6\" returns successfully" Jul 7 06:04:15.798679 containerd[1583]: time="2025-07-07T06:04:15.798467885Z" level=info msg="StartContainer for \"ff5fd0e3464dd47c0692c264b8d1bc2baecb309101ab60d76663b27a820d78ac\" returns successfully" Jul 7 06:04:15.843229 containerd[1583]: time="2025-07-07T06:04:15.842465467Z" level=info msg="StartContainer for \"5669d79211559bddce7e8add842d148f2f3704956837771bfc58d461346fddf9\" returns successfully" Jul 7 06:04:16.208684 kubelet[2358]: I0707 06:04:16.208618 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:04:16.580955 kubelet[2358]: E0707 06:04:16.580778 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:16.581686 kubelet[2358]: E0707 06:04:16.581653 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:16.588935 kubelet[2358]: E0707 06:04:16.588871 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:17.315754 kubelet[2358]: E0707 06:04:17.315663 2358 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 06:04:17.482769 kubelet[2358]: I0707 06:04:17.482705 2358 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 06:04:17.482769 kubelet[2358]: E0707 06:04:17.482758 2358 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not 
found" Jul 7 06:04:17.495535 kubelet[2358]: E0707 06:04:17.495343 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:17.588760 kubelet[2358]: E0707 06:04:17.588611 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:17.596453 kubelet[2358]: E0707 06:04:17.596389 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:17.697016 kubelet[2358]: E0707 06:04:17.696949 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:17.797628 kubelet[2358]: E0707 06:04:17.797524 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:17.898369 kubelet[2358]: E0707 06:04:17.898159 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:17.998370 kubelet[2358]: E0707 06:04:17.998313 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:18.043478 kubelet[2358]: E0707 06:04:18.043420 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:18.098687 kubelet[2358]: E0707 06:04:18.098610 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:18.199465 kubelet[2358]: E0707 06:04:18.199214 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:18.300256 kubelet[2358]: E0707 06:04:18.300175 2358 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jul 7 06:04:18.400910 kubelet[2358]: E0707 06:04:18.400843 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:18.475330 kubelet[2358]: I0707 06:04:18.475170 2358 apiserver.go:52] "Watching apiserver" Jul 7 06:04:18.538229 kubelet[2358]: I0707 06:04:18.538162 2358 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:04:20.009445 kubelet[2358]: E0707 06:04:20.009383 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:20.593274 kubelet[2358]: E0707 06:04:20.593214 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:20.988251 systemd[1]: Reload requested from client PID 2637 ('systemctl') (unit session-7.scope)... Jul 7 06:04:20.988276 systemd[1]: Reloading... Jul 7 06:04:21.070119 zram_generator::config[2680]: No configuration found. Jul 7 06:04:21.179654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:04:21.338225 systemd[1]: Reloading finished in 349 ms. Jul 7 06:04:21.374814 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:04:21.393964 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:04:21.394394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:21.394476 systemd[1]: kubelet.service: Consumed 1.117s CPU time, 132.4M memory peak. Jul 7 06:04:21.397055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 06:04:21.613928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:21.624671 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:04:21.675178 kubelet[2725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:04:21.675178 kubelet[2725]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:04:21.675178 kubelet[2725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:04:21.675690 kubelet[2725]: I0707 06:04:21.675222 2725 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:04:21.681104 kubelet[2725]: I0707 06:04:21.681054 2725 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:04:21.681104 kubelet[2725]: I0707 06:04:21.681092 2725 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:04:21.681307 kubelet[2725]: I0707 06:04:21.681285 2725 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:04:21.682452 kubelet[2725]: I0707 06:04:21.682426 2725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 7 06:04:21.685323 kubelet[2725]: I0707 06:04:21.685300 2725 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:04:21.688809 kubelet[2725]: I0707 06:04:21.688771 2725 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:04:21.693825 kubelet[2725]: I0707 06:04:21.693789 2725 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 06:04:21.693926 kubelet[2725]: I0707 06:04:21.693911 2725 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:04:21.694107 kubelet[2725]: I0707 06:04:21.694051 2725 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:04:21.694274 kubelet[2725]: I0707 06:04:21.694104 2725 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inod
esFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:04:21.694401 kubelet[2725]: I0707 06:04:21.694284 2725 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:04:21.694401 kubelet[2725]: I0707 06:04:21.694295 2725 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:04:21.694401 kubelet[2725]: I0707 06:04:21.694333 2725 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:04:21.694536 kubelet[2725]: I0707 06:04:21.694461 2725 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:04:21.694536 kubelet[2725]: I0707 06:04:21.694486 2725 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:04:21.694536 kubelet[2725]: I0707 06:04:21.694517 2725 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:04:21.694536 kubelet[2725]: I0707 06:04:21.694527 2725 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:04:21.696103 kubelet[2725]: I0707 06:04:21.695779 2725 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:04:21.696272 kubelet[2725]: I0707 06:04:21.696244 2725 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:04:21.701814 kubelet[2725]: I0707 06:04:21.701771 2725 server.go:1274] "Started kubelet" Jul 7 06:04:21.703564 kubelet[2725]: I0707 06:04:21.703522 2725 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:04:21.705102 
kubelet[2725]: I0707 06:04:21.704549 2725 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:04:21.705303 kubelet[2725]: I0707 06:04:21.705226 2725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:04:21.705575 kubelet[2725]: I0707 06:04:21.705545 2725 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:04:21.707049 kubelet[2725]: I0707 06:04:21.707022 2725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:04:21.707388 kubelet[2725]: I0707 06:04:21.707264 2725 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:04:21.709096 kubelet[2725]: I0707 06:04:21.709024 2725 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:04:21.709423 kubelet[2725]: I0707 06:04:21.709177 2725 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:04:21.709423 kubelet[2725]: I0707 06:04:21.709310 2725 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:04:21.709891 kubelet[2725]: I0707 06:04:21.709690 2725 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:04:21.709891 kubelet[2725]: I0707 06:04:21.709792 2725 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:04:21.712236 kubelet[2725]: I0707 06:04:21.712206 2725 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:04:21.716814 kubelet[2725]: E0707 06:04:21.716740 2725 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:04:21.727277 kubelet[2725]: I0707 06:04:21.726435 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:04:21.729194 kubelet[2725]: I0707 06:04:21.728951 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:04:21.729194 kubelet[2725]: I0707 06:04:21.728980 2725 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:04:21.729194 kubelet[2725]: I0707 06:04:21.729005 2725 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:04:21.729194 kubelet[2725]: E0707 06:04:21.729050 2725 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:04:21.755700 kubelet[2725]: I0707 06:04:21.755668 2725 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:04:21.755829 kubelet[2725]: I0707 06:04:21.755685 2725 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:04:21.755829 kubelet[2725]: I0707 06:04:21.755736 2725 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:04:21.755943 kubelet[2725]: I0707 06:04:21.755927 2725 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:04:21.755986 kubelet[2725]: I0707 06:04:21.755965 2725 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:04:21.755986 kubelet[2725]: I0707 06:04:21.755985 2725 policy_none.go:49] "None policy: Start" Jul 7 06:04:21.756614 kubelet[2725]: I0707 06:04:21.756593 2725 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:04:21.756614 kubelet[2725]: I0707 06:04:21.756616 2725 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:04:21.756755 kubelet[2725]: I0707 06:04:21.756739 2725 state_mem.go:75] "Updated machine memory state" Jul 7 06:04:21.761025 kubelet[2725]: I0707 06:04:21.760986 2725 manager.go:513] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:04:21.761177 kubelet[2725]: I0707 06:04:21.761163 2725 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:04:21.761213 kubelet[2725]: I0707 06:04:21.761175 2725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:04:21.761415 kubelet[2725]: I0707 06:04:21.761318 2725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:04:21.838266 kubelet[2725]: E0707 06:04:21.838215 2725 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 06:04:21.869294 kubelet[2725]: I0707 06:04:21.868588 2725 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:04:21.876189 kubelet[2725]: I0707 06:04:21.876143 2725 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 7 06:04:21.876349 kubelet[2725]: I0707 06:04:21.876239 2725 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 06:04:21.911067 kubelet[2725]: I0707 06:04:21.911014 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:21.911067 kubelet[2725]: I0707 06:04:21.911069 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" 
Jul 7 06:04:21.911271 kubelet[2725]: I0707 06:04:21.911119 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:04:21.911271 kubelet[2725]: I0707 06:04:21.911149 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89ee74275af4e79e5a42bbbdcb166ad6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89ee74275af4e79e5a42bbbdcb166ad6\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:21.911271 kubelet[2725]: I0707 06:04:21.911170 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89ee74275af4e79e5a42bbbdcb166ad6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89ee74275af4e79e5a42bbbdcb166ad6\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:21.911271 kubelet[2725]: I0707 06:04:21.911188 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:21.911271 kubelet[2725]: I0707 06:04:21.911204 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:21.911383 
kubelet[2725]: I0707 06:04:21.911225 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:21.911383 kubelet[2725]: I0707 06:04:21.911244 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89ee74275af4e79e5a42bbbdcb166ad6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89ee74275af4e79e5a42bbbdcb166ad6\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:21.988293 sudo[2763]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 06:04:21.988859 sudo[2763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 06:04:22.138839 kubelet[2725]: E0707 06:04:22.137490 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:22.139009 kubelet[2725]: E0707 06:04:22.138917 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:22.140226 kubelet[2725]: E0707 06:04:22.140190 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:22.695228 kubelet[2725]: I0707 06:04:22.695167 2725 apiserver.go:52] "Watching apiserver" Jul 7 06:04:22.709381 kubelet[2725]: I0707 06:04:22.709340 2725 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:04:22.744731 
kubelet[2725]: E0707 06:04:22.744601 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:22.745588 kubelet[2725]: E0707 06:04:22.745005 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:22.751416 kubelet[2725]: E0707 06:04:22.751069 2725 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:22.751618 kubelet[2725]: E0707 06:04:22.751589 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:22.770456 kubelet[2725]: I0707 06:04:22.770362 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7703165570000001 podStartE2EDuration="1.770316557s" podCreationTimestamp="2025-07-07 06:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:22.770205935 +0000 UTC m=+1.139551156" watchObservedRunningTime="2025-07-07 06:04:22.770316557 +0000 UTC m=+1.139661758" Jul 7 06:04:22.791409 kubelet[2725]: I0707 06:04:22.791326 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.791304655 podStartE2EDuration="1.791304655s" podCreationTimestamp="2025-07-07 06:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:22.778648331 +0000 UTC m=+1.147993532" watchObservedRunningTime="2025-07-07 
06:04:22.791304655 +0000 UTC m=+1.160649856" Jul 7 06:04:22.791409 kubelet[2725]: I0707 06:04:22.791416 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.791410597 podStartE2EDuration="2.791410597s" podCreationTimestamp="2025-07-07 06:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:22.790571345 +0000 UTC m=+1.159916546" watchObservedRunningTime="2025-07-07 06:04:22.791410597 +0000 UTC m=+1.160755808" Jul 7 06:04:23.080706 sudo[2763]: pam_unix(sudo:session): session closed for user root Jul 7 06:04:23.746383 kubelet[2725]: E0707 06:04:23.746330 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:24.511630 sudo[1791]: pam_unix(sudo:session): session closed for user root Jul 7 06:04:24.513504 sshd[1790]: Connection closed by 10.0.0.1 port 34876 Jul 7 06:04:24.515479 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Jul 7 06:04:24.521344 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:34876.service: Deactivated successfully. Jul 7 06:04:24.524300 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:04:24.524598 systemd[1]: session-7.scope: Consumed 5.712s CPU time, 261.2M memory peak. Jul 7 06:04:24.526335 systemd-logind[1567]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:04:24.527859 systemd-logind[1567]: Removed session 7. Jul 7 06:04:26.409458 kubelet[2725]: I0707 06:04:26.409410 2725 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:04:26.410147 containerd[1583]: time="2025-07-07T06:04:26.409837488Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 06:04:26.410460 kubelet[2725]: I0707 06:04:26.410215 2725 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 06:04:26.967575 systemd[1]: Created slice kubepods-besteffort-poddbf6b01e_2da7_4060_8301_4a57298446a5.slice - libcontainer container kubepods-besteffort-poddbf6b01e_2da7_4060_8301_4a57298446a5.slice.
Jul 7 06:04:26.991861 systemd[1]: Created slice kubepods-burstable-pod4d7e9bc8_0e9f_498f_ab5f_8f2789eb8343.slice - libcontainer container kubepods-burstable-pod4d7e9bc8_0e9f_498f_ab5f_8f2789eb8343.slice.
Jul 7 06:04:27.039432 kubelet[2725]: I0707 06:04:27.039365 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbf6b01e-2da7-4060-8301-4a57298446a5-lib-modules\") pod \"kube-proxy-kpnmf\" (UID: \"dbf6b01e-2da7-4060-8301-4a57298446a5\") " pod="kube-system/kube-proxy-kpnmf"
Jul 7 06:04:27.039432 kubelet[2725]: I0707 06:04:27.039421 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-hostproc\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039432 kubelet[2725]: I0707 06:04:27.039446 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-lib-modules\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039432 kubelet[2725]: I0707 06:04:27.039466 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-host-proc-sys-net\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039791 kubelet[2725]: I0707 06:04:27.039488 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-xtables-lock\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039791 kubelet[2725]: I0707 06:04:27.039510 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-config-path\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039791 kubelet[2725]: I0707 06:04:27.039587 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbf6b01e-2da7-4060-8301-4a57298446a5-xtables-lock\") pod \"kube-proxy-kpnmf\" (UID: \"dbf6b01e-2da7-4060-8301-4a57298446a5\") " pod="kube-system/kube-proxy-kpnmf"
Jul 7 06:04:27.039791 kubelet[2725]: I0707 06:04:27.039649 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-cgroup\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039791 kubelet[2725]: I0707 06:04:27.039670 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-bpf-maps\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039791 kubelet[2725]: I0707 06:04:27.039687 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-etc-cni-netd\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039994 kubelet[2725]: I0707 06:04:27.039718 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92v2\" (UniqueName: \"kubernetes.io/projected/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-kube-api-access-h92v2\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039994 kubelet[2725]: I0707 06:04:27.039735 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-run\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039994 kubelet[2725]: I0707 06:04:27.039753 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-clustermesh-secrets\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.039994 kubelet[2725]: I0707 06:04:27.039770 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dbf6b01e-2da7-4060-8301-4a57298446a5-kube-proxy\") pod \"kube-proxy-kpnmf\" (UID: \"dbf6b01e-2da7-4060-8301-4a57298446a5\") " pod="kube-system/kube-proxy-kpnmf"
Jul 7 06:04:27.039994 kubelet[2725]: I0707 06:04:27.039786 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-host-proc-sys-kernel\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.040188 kubelet[2725]: I0707 06:04:27.039804 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqr6l\" (UniqueName: \"kubernetes.io/projected/dbf6b01e-2da7-4060-8301-4a57298446a5-kube-api-access-tqr6l\") pod \"kube-proxy-kpnmf\" (UID: \"dbf6b01e-2da7-4060-8301-4a57298446a5\") " pod="kube-system/kube-proxy-kpnmf"
Jul 7 06:04:27.040188 kubelet[2725]: I0707 06:04:27.039821 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cni-path\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.040188 kubelet[2725]: I0707 06:04:27.039836 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-hubble-tls\") pod \"cilium-khrbp\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") " pod="kube-system/cilium-khrbp"
Jul 7 06:04:27.289016 kubelet[2725]: E0707 06:04:27.288983 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:27.290033 containerd[1583]: time="2025-07-07T06:04:27.289868828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpnmf,Uid:dbf6b01e-2da7-4060-8301-4a57298446a5,Namespace:kube-system,Attempt:0,}"
Jul 7 06:04:27.297281 kubelet[2725]: E0707 06:04:27.297233 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:27.297721 containerd[1583]: time="2025-07-07T06:04:27.297675864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khrbp,Uid:4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343,Namespace:kube-system,Attempt:0,}"
Jul 7 06:04:27.478608 systemd[1]: Created slice kubepods-besteffort-podf7c2a5bc_8d70_44fa_809e_a4320d882493.slice - libcontainer container kubepods-besteffort-podf7c2a5bc_8d70_44fa_809e_a4320d882493.slice.
Jul 7 06:04:27.483331 containerd[1583]: time="2025-07-07T06:04:27.482271033Z" level=info msg="connecting to shim 092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8" address="unix:///run/containerd/s/41367d4b69e6e1fde62ede55a46466145b81c4d5f6f46be655de07c8a035fd60" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:04:27.483331 containerd[1583]: time="2025-07-07T06:04:27.483284438Z" level=info msg="connecting to shim 049a8e8564939f9494742255b0267e9e5cd31266ea31cf5437df7240b291b6a4" address="unix:///run/containerd/s/748bd998d63eb0194efe5e419492cfe71cb4f3e3c56af3fdb2b76ded722bd335" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:04:27.520517 systemd[1]: Started cri-containerd-092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8.scope - libcontainer container 092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8.
Jul 7 06:04:27.526570 systemd[1]: Started cri-containerd-049a8e8564939f9494742255b0267e9e5cd31266ea31cf5437df7240b291b6a4.scope - libcontainer container 049a8e8564939f9494742255b0267e9e5cd31266ea31cf5437df7240b291b6a4.
Jul 7 06:04:27.545635 kubelet[2725]: I0707 06:04:27.543564 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmg6\" (UniqueName: \"kubernetes.io/projected/f7c2a5bc-8d70-44fa-809e-a4320d882493-kube-api-access-wtmg6\") pod \"cilium-operator-5d85765b45-mhgp7\" (UID: \"f7c2a5bc-8d70-44fa-809e-a4320d882493\") " pod="kube-system/cilium-operator-5d85765b45-mhgp7"
Jul 7 06:04:27.545635 kubelet[2725]: I0707 06:04:27.543609 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7c2a5bc-8d70-44fa-809e-a4320d882493-cilium-config-path\") pod \"cilium-operator-5d85765b45-mhgp7\" (UID: \"f7c2a5bc-8d70-44fa-809e-a4320d882493\") " pod="kube-system/cilium-operator-5d85765b45-mhgp7"
Jul 7 06:04:27.561401 containerd[1583]: time="2025-07-07T06:04:27.561345696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khrbp,Uid:4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343,Namespace:kube-system,Attempt:0,} returns sandbox id \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\""
Jul 7 06:04:27.562344 kubelet[2725]: E0707 06:04:27.562304 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:27.565744 containerd[1583]: time="2025-07-07T06:04:27.565365852Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 7 06:04:27.567338 containerd[1583]: time="2025-07-07T06:04:27.567305215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpnmf,Uid:dbf6b01e-2da7-4060-8301-4a57298446a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"049a8e8564939f9494742255b0267e9e5cd31266ea31cf5437df7240b291b6a4\""
Jul 7 06:04:27.568430 kubelet[2725]: E0707 06:04:27.568374 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:27.570491 containerd[1583]: time="2025-07-07T06:04:27.570458295Z" level=info msg="CreateContainer within sandbox \"049a8e8564939f9494742255b0267e9e5cd31266ea31cf5437df7240b291b6a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 06:04:27.584750 containerd[1583]: time="2025-07-07T06:04:27.584689767Z" level=info msg="Container 17d1ea469eeb876ed199fecb45556f02cd5117d7249299017ec5cecd4e35d5ab: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:27.594938 containerd[1583]: time="2025-07-07T06:04:27.594881615Z" level=info msg="CreateContainer within sandbox \"049a8e8564939f9494742255b0267e9e5cd31266ea31cf5437df7240b291b6a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17d1ea469eeb876ed199fecb45556f02cd5117d7249299017ec5cecd4e35d5ab\""
Jul 7 06:04:27.595480 containerd[1583]: time="2025-07-07T06:04:27.595445672Z" level=info msg="StartContainer for \"17d1ea469eeb876ed199fecb45556f02cd5117d7249299017ec5cecd4e35d5ab\""
Jul 7 06:04:27.596968 containerd[1583]: time="2025-07-07T06:04:27.596923433Z" level=info msg="connecting to shim 17d1ea469eeb876ed199fecb45556f02cd5117d7249299017ec5cecd4e35d5ab" address="unix:///run/containerd/s/748bd998d63eb0194efe5e419492cfe71cb4f3e3c56af3fdb2b76ded722bd335" protocol=ttrpc version=3
Jul 7 06:04:27.620265 systemd[1]: Started cri-containerd-17d1ea469eeb876ed199fecb45556f02cd5117d7249299017ec5cecd4e35d5ab.scope - libcontainer container 17d1ea469eeb876ed199fecb45556f02cd5117d7249299017ec5cecd4e35d5ab.
Jul 7 06:04:27.676474 containerd[1583]: time="2025-07-07T06:04:27.676419993Z" level=info msg="StartContainer for \"17d1ea469eeb876ed199fecb45556f02cd5117d7249299017ec5cecd4e35d5ab\" returns successfully"
Jul 7 06:04:27.760453 kubelet[2725]: E0707 06:04:27.760155 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:27.771291 kubelet[2725]: I0707 06:04:27.771202 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kpnmf" podStartSLOduration=1.771179192 podStartE2EDuration="1.771179192s" podCreationTimestamp="2025-07-07 06:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:27.770864281 +0000 UTC m=+6.140209482" watchObservedRunningTime="2025-07-07 06:04:27.771179192 +0000 UTC m=+6.140524393"
Jul 7 06:04:27.785201 kubelet[2725]: E0707 06:04:27.785166 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:27.785807 containerd[1583]: time="2025-07-07T06:04:27.785756494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mhgp7,Uid:f7c2a5bc-8d70-44fa-809e-a4320d882493,Namespace:kube-system,Attempt:0,}"
Jul 7 06:04:27.811707 kubelet[2725]: E0707 06:04:27.811273 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:27.817886 containerd[1583]: time="2025-07-07T06:04:27.817754607Z" level=info msg="connecting to shim 565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d" address="unix:///run/containerd/s/f239be05a455528e281f24b955800377d52c42a2ad85ed0a686494bc6a27c1e4" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:04:27.850702 systemd[1]: Started cri-containerd-565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d.scope - libcontainer container 565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d.
Jul 7 06:04:27.920026 containerd[1583]: time="2025-07-07T06:04:27.919962998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mhgp7,Uid:f7c2a5bc-8d70-44fa-809e-a4320d882493,Namespace:kube-system,Attempt:0,} returns sandbox id \"565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d\""
Jul 7 06:04:27.922352 kubelet[2725]: E0707 06:04:27.922315 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:28.761911 kubelet[2725]: E0707 06:04:28.761876 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:30.286238 kubelet[2725]: E0707 06:04:30.286178 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:30.765780 kubelet[2725]: E0707 06:04:30.765482 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:31.375108 kubelet[2725]: E0707 06:04:31.374300 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:31.381246 update_engine[1569]: I20250707 06:04:31.381175 1569 update_attempter.cc:509] Updating boot flags...
Jul 7 06:04:31.506701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1052669166.mount: Deactivated successfully.
Jul 7 06:04:31.768191 kubelet[2725]: E0707 06:04:31.767725 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:34.736774 containerd[1583]: time="2025-07-07T06:04:34.736690936Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:34.737569 containerd[1583]: time="2025-07-07T06:04:34.737481706Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jul 7 06:04:34.739042 containerd[1583]: time="2025-07-07T06:04:34.738999906Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:34.740420 containerd[1583]: time="2025-07-07T06:04:34.740378252Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.174919171s"
Jul 7 06:04:34.740420 containerd[1583]: time="2025-07-07T06:04:34.740416984Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 7 06:04:34.741260 containerd[1583]: time="2025-07-07T06:04:34.741229176Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 7 06:04:34.743585 containerd[1583]: time="2025-07-07T06:04:34.743435271Z" level=info msg="CreateContainer within sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 06:04:34.750762 containerd[1583]: time="2025-07-07T06:04:34.750730472Z" level=info msg="Container 1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:34.754457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3978441578.mount: Deactivated successfully.
Jul 7 06:04:34.758336 containerd[1583]: time="2025-07-07T06:04:34.758295203Z" level=info msg="CreateContainer within sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\""
Jul 7 06:04:34.759056 containerd[1583]: time="2025-07-07T06:04:34.758823976Z" level=info msg="StartContainer for \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\""
Jul 7 06:04:34.759759 containerd[1583]: time="2025-07-07T06:04:34.759733792Z" level=info msg="connecting to shim 1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87" address="unix:///run/containerd/s/41367d4b69e6e1fde62ede55a46466145b81c4d5f6f46be655de07c8a035fd60" protocol=ttrpc version=3
Jul 7 06:04:34.821251 systemd[1]: Started cri-containerd-1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87.scope - libcontainer container 1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87.
Jul 7 06:04:34.856492 containerd[1583]: time="2025-07-07T06:04:34.856450162Z" level=info msg="StartContainer for \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\" returns successfully"
Jul 7 06:04:34.866497 systemd[1]: cri-containerd-1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87.scope: Deactivated successfully.
Jul 7 06:04:34.869512 containerd[1583]: time="2025-07-07T06:04:34.869467127Z" level=info msg="received exit event container_id:\"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\" id:\"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\" pid:3165 exited_at:{seconds:1751868274 nanos:869096183}"
Jul 7 06:04:34.869708 containerd[1583]: time="2025-07-07T06:04:34.869654262Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\" id:\"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\" pid:3165 exited_at:{seconds:1751868274 nanos:869096183}"
Jul 7 06:04:34.891769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87-rootfs.mount: Deactivated successfully.
Jul 7 06:04:35.786466 kubelet[2725]: E0707 06:04:35.786425 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:35.789631 containerd[1583]: time="2025-07-07T06:04:35.789195402Z" level=info msg="CreateContainer within sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 06:04:35.803100 containerd[1583]: time="2025-07-07T06:04:35.803023989Z" level=info msg="Container f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:35.807159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311155355.mount: Deactivated successfully.
Jul 7 06:04:35.817093 containerd[1583]: time="2025-07-07T06:04:35.817017639Z" level=info msg="CreateContainer within sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\""
Jul 7 06:04:35.817677 containerd[1583]: time="2025-07-07T06:04:35.817621063Z" level=info msg="StartContainer for \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\""
Jul 7 06:04:35.818579 containerd[1583]: time="2025-07-07T06:04:35.818542380Z" level=info msg="connecting to shim f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23" address="unix:///run/containerd/s/41367d4b69e6e1fde62ede55a46466145b81c4d5f6f46be655de07c8a035fd60" protocol=ttrpc version=3
Jul 7 06:04:35.845269 systemd[1]: Started cri-containerd-f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23.scope - libcontainer container f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23.
Jul 7 06:04:35.881284 containerd[1583]: time="2025-07-07T06:04:35.881228344Z" level=info msg="StartContainer for \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\" returns successfully"
Jul 7 06:04:35.897420 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:04:35.897887 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:04:35.898221 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:04:35.900659 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:04:35.903834 containerd[1583]: time="2025-07-07T06:04:35.903796280Z" level=info msg="received exit event container_id:\"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\" id:\"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\" pid:3210 exited_at:{seconds:1751868275 nanos:903560262}"
Jul 7 06:04:35.904248 containerd[1583]: time="2025-07-07T06:04:35.904228309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\" id:\"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\" pid:3210 exited_at:{seconds:1751868275 nanos:903560262}"
Jul 7 06:04:35.904249 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:04:35.905074 systemd[1]: cri-containerd-f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23.scope: Deactivated successfully.
Jul 7 06:04:35.932010 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:04:36.725272 containerd[1583]: time="2025-07-07T06:04:36.725193289Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:36.726144 containerd[1583]: time="2025-07-07T06:04:36.726110597Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 7 06:04:36.727383 containerd[1583]: time="2025-07-07T06:04:36.727320439Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:36.728426 containerd[1583]: time="2025-07-07T06:04:36.728375368Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.987114673s"
Jul 7 06:04:36.728426 containerd[1583]: time="2025-07-07T06:04:36.728422177Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 7 06:04:36.730549 containerd[1583]: time="2025-07-07T06:04:36.730498231Z" level=info msg="CreateContainer within sandbox \"565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 7 06:04:36.739321 containerd[1583]: time="2025-07-07T06:04:36.739294235Z" level=info msg="Container cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:36.746054 containerd[1583]: time="2025-07-07T06:04:36.746015820Z" level=info msg="CreateContainer within sandbox \"565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\""
Jul 7 06:04:36.746791 containerd[1583]: time="2025-07-07T06:04:36.746525587Z" level=info msg="StartContainer for \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\""
Jul 7 06:04:36.747346 containerd[1583]: time="2025-07-07T06:04:36.747305183Z" level=info msg="connecting to shim cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793" address="unix:///run/containerd/s/f239be05a455528e281f24b955800377d52c42a2ad85ed0a686494bc6a27c1e4" protocol=ttrpc version=3
Jul 7 06:04:36.774298 systemd[1]: Started cri-containerd-cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793.scope - libcontainer container cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793.
Jul 7 06:04:36.793363 kubelet[2725]: E0707 06:04:36.793315 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:36.803781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23-rootfs.mount: Deactivated successfully.
Jul 7 06:04:36.811865 containerd[1583]: time="2025-07-07T06:04:36.811755163Z" level=info msg="CreateContainer within sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 06:04:36.961349 containerd[1583]: time="2025-07-07T06:04:36.961303362Z" level=info msg="StartContainer for \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" returns successfully"
Jul 7 06:04:36.978413 containerd[1583]: time="2025-07-07T06:04:36.978250340Z" level=info msg="Container db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:36.986126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3651301239.mount: Deactivated successfully.
Jul 7 06:04:36.992264 containerd[1583]: time="2025-07-07T06:04:36.991870476Z" level=info msg="CreateContainer within sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\""
Jul 7 06:04:36.992465 containerd[1583]: time="2025-07-07T06:04:36.992446306Z" level=info msg="StartContainer for \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\""
Jul 7 06:04:36.995176 containerd[1583]: time="2025-07-07T06:04:36.995118018Z" level=info msg="connecting to shim db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a" address="unix:///run/containerd/s/41367d4b69e6e1fde62ede55a46466145b81c4d5f6f46be655de07c8a035fd60" protocol=ttrpc version=3
Jul 7 06:04:37.018222 systemd[1]: Started cri-containerd-db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a.scope - libcontainer container db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a.
Jul 7 06:04:37.076302 systemd[1]: cri-containerd-db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a.scope: Deactivated successfully.
Jul 7 06:04:37.077315 containerd[1583]: time="2025-07-07T06:04:37.077265659Z" level=info msg="received exit event container_id:\"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\" id:\"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\" pid:3312 exited_at:{seconds:1751868277 nanos:76978966}"
Jul 7 06:04:37.077503 containerd[1583]: time="2025-07-07T06:04:37.077450821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\" id:\"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\" pid:3312 exited_at:{seconds:1751868277 nanos:76978966}"
Jul 7 06:04:37.078455 containerd[1583]: time="2025-07-07T06:04:37.078421738Z" level=info msg="StartContainer for \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\" returns successfully"
Jul 7 06:04:37.101506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a-rootfs.mount: Deactivated successfully.
Jul 7 06:04:37.812058 kubelet[2725]: E0707 06:04:37.811170 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:37.815572 kubelet[2725]: E0707 06:04:37.815528 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:37.816389 containerd[1583]: time="2025-07-07T06:04:37.816349446Z" level=info msg="CreateContainer within sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 06:04:37.828304 containerd[1583]: time="2025-07-07T06:04:37.828249168Z" level=info msg="Container 469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:37.833538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795833715.mount: Deactivated successfully.
Jul 7 06:04:37.837561 containerd[1583]: time="2025-07-07T06:04:37.837519100Z" level=info msg="CreateContainer within sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\""
Jul 7 06:04:37.838114 containerd[1583]: time="2025-07-07T06:04:37.838061598Z" level=info msg="StartContainer for \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\""
Jul 7 06:04:37.839060 containerd[1583]: time="2025-07-07T06:04:37.839036534Z" level=info msg="connecting to shim 469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8" address="unix:///run/containerd/s/41367d4b69e6e1fde62ede55a46466145b81c4d5f6f46be655de07c8a035fd60" protocol=ttrpc version=3
Jul 7 06:04:37.878227 systemd[1]: Started cri-containerd-469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8.scope - libcontainer container 469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8.
Jul 7 06:04:37.912330 systemd[1]: cri-containerd-469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8.scope: Deactivated successfully.
Jul 7 06:04:37.914779 containerd[1583]: time="2025-07-07T06:04:37.914687487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\" id:\"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\" pid:3352 exited_at:{seconds:1751868277 nanos:912195939}"
Jul 7 06:04:37.915165 containerd[1583]: time="2025-07-07T06:04:37.915141256Z" level=info msg="received exit event container_id:\"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\" id:\"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\" pid:3352 exited_at:{seconds:1751868277 nanos:912195939}"
Jul 7 06:04:37.924179 containerd[1583]: time="2025-07-07T06:04:37.924139885Z" level=info msg="StartContainer for \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\" returns successfully"
Jul 7 06:04:37.938873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8-rootfs.mount: Deactivated successfully.
Jul 7 06:04:38.820715 kubelet[2725]: E0707 06:04:38.820663 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:38.821308 kubelet[2725]: E0707 06:04:38.820663 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:38.823129 containerd[1583]: time="2025-07-07T06:04:38.822992904Z" level=info msg="CreateContainer within sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 06:04:38.900745 kubelet[2725]: I0707 06:04:38.900656 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-mhgp7" podStartSLOduration=3.094939262 podStartE2EDuration="11.900630514s" podCreationTimestamp="2025-07-07 06:04:27 +0000 UTC" firstStartedPulling="2025-07-07 06:04:27.923586516 +0000 UTC m=+6.292931717" lastFinishedPulling="2025-07-07 06:04:36.729277768 +0000 UTC m=+15.098622969" observedRunningTime="2025-07-07 06:04:37.847287978 +0000 UTC m=+16.216633179" watchObservedRunningTime="2025-07-07 06:04:38.900630514 +0000 UTC m=+17.269975715"
Jul 7 06:04:39.084368 containerd[1583]: time="2025-07-07T06:04:39.084198437Z" level=info msg="Container d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:39.088050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139928884.mount: Deactivated successfully.
Jul 7 06:04:39.170175 containerd[1583]: time="2025-07-07T06:04:39.170040807Z" level=info msg="CreateContainer within sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\""
Jul 7 06:04:39.170673 containerd[1583]: time="2025-07-07T06:04:39.170616505Z" level=info msg="StartContainer for \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\""
Jul 7 06:04:39.171791 containerd[1583]: time="2025-07-07T06:04:39.171758626Z" level=info msg="connecting to shim d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4" address="unix:///run/containerd/s/41367d4b69e6e1fde62ede55a46466145b81c4d5f6f46be655de07c8a035fd60" protocol=ttrpc version=3
Jul 7 06:04:39.203243 systemd[1]: Started cri-containerd-d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4.scope - libcontainer container d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4.
Jul 7 06:04:39.296802 containerd[1583]: time="2025-07-07T06:04:39.296742907Z" level=info msg="StartContainer for \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" returns successfully"
Jul 7 06:04:39.376130 containerd[1583]: time="2025-07-07T06:04:39.375973922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" id:\"67ce644f72b82dc18e3b575c3951d92faa874bbb75cbe0fc013c10ba21834833\" pid:3422 exited_at:{seconds:1751868279 nanos:375602459}"
Jul 7 06:04:39.454166 kubelet[2725]: I0707 06:04:39.454100    2725 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 7 06:04:39.517223 systemd[1]: Created slice kubepods-burstable-pod5f08cca6_aba4_4f09_b898_51423e07649f.slice - libcontainer container kubepods-burstable-pod5f08cca6_aba4_4f09_b898_51423e07649f.slice.
Jul 7 06:04:39.528184 systemd[1]: Created slice kubepods-burstable-pod98a480d6_6c8e_480b_86d4_a56848c50d78.slice - libcontainer container kubepods-burstable-pod98a480d6_6c8e_480b_86d4_a56848c50d78.slice.
Jul 7 06:04:39.622915 kubelet[2725]: I0707 06:04:39.622827    2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmg8w\" (UniqueName: \"kubernetes.io/projected/5f08cca6-aba4-4f09-b898-51423e07649f-kube-api-access-vmg8w\") pod \"coredns-7c65d6cfc9-vwjrq\" (UID: \"5f08cca6-aba4-4f09-b898-51423e07649f\") " pod="kube-system/coredns-7c65d6cfc9-vwjrq"
Jul 7 06:04:39.622915 kubelet[2725]: I0707 06:04:39.622894    2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98a480d6-6c8e-480b-86d4-a56848c50d78-config-volume\") pod \"coredns-7c65d6cfc9-cstpq\" (UID: \"98a480d6-6c8e-480b-86d4-a56848c50d78\") " pod="kube-system/coredns-7c65d6cfc9-cstpq"
Jul 7 06:04:39.622915 kubelet[2725]: I0707 06:04:39.622932    2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f08cca6-aba4-4f09-b898-51423e07649f-config-volume\") pod \"coredns-7c65d6cfc9-vwjrq\" (UID: \"5f08cca6-aba4-4f09-b898-51423e07649f\") " pod="kube-system/coredns-7c65d6cfc9-vwjrq"
Jul 7 06:04:39.623196 kubelet[2725]: I0707 06:04:39.622955    2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c66zk\" (UniqueName: \"kubernetes.io/projected/98a480d6-6c8e-480b-86d4-a56848c50d78-kube-api-access-c66zk\") pod \"coredns-7c65d6cfc9-cstpq\" (UID: \"98a480d6-6c8e-480b-86d4-a56848c50d78\") " pod="kube-system/coredns-7c65d6cfc9-cstpq"
Jul 7 06:04:39.823113 kubelet[2725]: E0707 06:04:39.823043    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:39.825287 containerd[1583]: time="2025-07-07T06:04:39.825223638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vwjrq,Uid:5f08cca6-aba4-4f09-b898-51423e07649f,Namespace:kube-system,Attempt:0,}"
Jul 7 06:04:39.831127 kubelet[2725]: E0707 06:04:39.831027    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:39.832473 containerd[1583]: time="2025-07-07T06:04:39.832325440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cstpq,Uid:98a480d6-6c8e-480b-86d4-a56848c50d78,Namespace:kube-system,Attempt:0,}"
Jul 7 06:04:39.834699 kubelet[2725]: E0707 06:04:39.834658    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:39.856161 kubelet[2725]: I0707 06:04:39.854285    2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-khrbp" podStartSLOduration=6.676193835 podStartE2EDuration="13.854268613s" podCreationTimestamp="2025-07-07 06:04:26 +0000 UTC" firstStartedPulling="2025-07-07 06:04:27.563015204 +0000 UTC m=+5.932360405" lastFinishedPulling="2025-07-07 06:04:34.741089982 +0000 UTC m=+13.110435183" observedRunningTime="2025-07-07 06:04:39.852410749 +0000 UTC m=+18.221755950" watchObservedRunningTime="2025-07-07 06:04:39.854268613 +0000 UTC m=+18.223613814"
Jul 7 06:04:40.835854 kubelet[2725]: E0707 06:04:40.835796    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:41.214383 systemd-networkd[1500]: cilium_host: Link UP
Jul 7 06:04:41.214627 systemd-networkd[1500]: cilium_net: Link UP
Jul 7 06:04:41.214853 systemd-networkd[1500]: cilium_net: Gained carrier
Jul 7 06:04:41.215067 systemd-networkd[1500]: cilium_host: Gained carrier
Jul 7 06:04:41.219283 systemd-networkd[1500]: cilium_host: Gained IPv6LL
Jul 7 06:04:41.328498 systemd-networkd[1500]: cilium_vxlan: Link UP
Jul 7 06:04:41.328510 systemd-networkd[1500]: cilium_vxlan: Gained carrier
Jul 7 06:04:41.482323 systemd-networkd[1500]: cilium_net: Gained IPv6LL
Jul 7 06:04:41.559129 kernel: NET: Registered PF_ALG protocol family
Jul 7 06:04:41.837905 kubelet[2725]: E0707 06:04:41.837865    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:42.292787 systemd-networkd[1500]: lxc_health: Link UP
Jul 7 06:04:42.304871 systemd-networkd[1500]: lxc_health: Gained carrier
Jul 7 06:04:42.395521 systemd-networkd[1500]: lxce51a05898c33: Link UP
Jul 7 06:04:42.402112 kernel: eth0: renamed from tmp58870
Jul 7 06:04:42.403474 systemd-networkd[1500]: lxce51a05898c33: Gained carrier
Jul 7 06:04:42.881329 systemd-networkd[1500]: lxc2ab3f32b3305: Link UP
Jul 7 06:04:42.893151 kernel: eth0: renamed from tmp18b62
Jul 7 06:04:42.894843 systemd-networkd[1500]: lxc2ab3f32b3305: Gained carrier
Jul 7 06:04:43.266332 systemd-networkd[1500]: cilium_vxlan: Gained IPv6LL
Jul 7 06:04:43.299527 kubelet[2725]: E0707 06:04:43.299003    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:43.587378 systemd-networkd[1500]: lxce51a05898c33: Gained IPv6LL
Jul 7 06:04:43.778340 systemd-networkd[1500]: lxc_health: Gained IPv6LL
Jul 7 06:04:43.842748 kubelet[2725]: E0707 06:04:43.842565    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:44.738446 systemd-networkd[1500]: lxc2ab3f32b3305: Gained IPv6LL
Jul 7 06:04:44.844912 kubelet[2725]: E0707 06:04:44.844858    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:46.139113 containerd[1583]: time="2025-07-07T06:04:46.138988069Z" level=info msg="connecting to shim 588709122335f65a67b2baf4e7b476481c069aabfcbd478dc6cd14630dd0a4b0" address="unix:///run/containerd/s/376ac26b96936ae3b060163f157e0709c0da12137a790748f56fde9bb9ad14b5" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:04:46.142099 containerd[1583]: time="2025-07-07T06:04:46.141918175Z" level=info msg="connecting to shim 18b62fce1da05e6b95b111cac0761f554721b0a52dc054203fbdf94e8eac9da7" address="unix:///run/containerd/s/680a30836abfaf7344789c910f306359d0e80273f5a1b35525e0a671605bd321" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:04:46.175276 systemd[1]: Started cri-containerd-18b62fce1da05e6b95b111cac0761f554721b0a52dc054203fbdf94e8eac9da7.scope - libcontainer container 18b62fce1da05e6b95b111cac0761f554721b0a52dc054203fbdf94e8eac9da7.
Jul 7 06:04:46.179277 systemd[1]: Started cri-containerd-588709122335f65a67b2baf4e7b476481c069aabfcbd478dc6cd14630dd0a4b0.scope - libcontainer container 588709122335f65a67b2baf4e7b476481c069aabfcbd478dc6cd14630dd0a4b0.
Jul 7 06:04:46.189029 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:04:46.196132 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:04:46.232304 containerd[1583]: time="2025-07-07T06:04:46.232256335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cstpq,Uid:98a480d6-6c8e-480b-86d4-a56848c50d78,Namespace:kube-system,Attempt:0,} returns sandbox id \"18b62fce1da05e6b95b111cac0761f554721b0a52dc054203fbdf94e8eac9da7\""
Jul 7 06:04:46.234428 containerd[1583]: time="2025-07-07T06:04:46.234396821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vwjrq,Uid:5f08cca6-aba4-4f09-b898-51423e07649f,Namespace:kube-system,Attempt:0,} returns sandbox id \"588709122335f65a67b2baf4e7b476481c069aabfcbd478dc6cd14630dd0a4b0\""
Jul 7 06:04:46.235842 kubelet[2725]: E0707 06:04:46.235814    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:46.236172 kubelet[2725]: E0707 06:04:46.235825    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:46.238217 containerd[1583]: time="2025-07-07T06:04:46.238142695Z" level=info msg="CreateContainer within sandbox \"588709122335f65a67b2baf4e7b476481c069aabfcbd478dc6cd14630dd0a4b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 06:04:46.238340 containerd[1583]: time="2025-07-07T06:04:46.238149818Z" level=info msg="CreateContainer within sandbox \"18b62fce1da05e6b95b111cac0761f554721b0a52dc054203fbdf94e8eac9da7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 06:04:46.256640 containerd[1583]: time="2025-07-07T06:04:46.256600663Z" level=info msg="Container d70b6a801e599fcbe369e8b7a8731900fd088f3cd7ee66fe8cf9db5280166a88: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:46.263822 containerd[1583]: time="2025-07-07T06:04:46.263757389Z" level=info msg="CreateContainer within sandbox \"18b62fce1da05e6b95b111cac0761f554721b0a52dc054203fbdf94e8eac9da7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d70b6a801e599fcbe369e8b7a8731900fd088f3cd7ee66fe8cf9db5280166a88\""
Jul 7 06:04:46.264428 containerd[1583]: time="2025-07-07T06:04:46.264399861Z" level=info msg="StartContainer for \"d70b6a801e599fcbe369e8b7a8731900fd088f3cd7ee66fe8cf9db5280166a88\""
Jul 7 06:04:46.265218 containerd[1583]: time="2025-07-07T06:04:46.265191073Z" level=info msg="connecting to shim d70b6a801e599fcbe369e8b7a8731900fd088f3cd7ee66fe8cf9db5280166a88" address="unix:///run/containerd/s/680a30836abfaf7344789c910f306359d0e80273f5a1b35525e0a671605bd321" protocol=ttrpc version=3
Jul 7 06:04:46.284297 containerd[1583]: time="2025-07-07T06:04:46.284233113Z" level=info msg="Container f31479275f617e03db6e4425676eecaf0270634f69a98f46fe62ef871469c5ee: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:46.290235 systemd[1]: Started cri-containerd-d70b6a801e599fcbe369e8b7a8731900fd088f3cd7ee66fe8cf9db5280166a88.scope - libcontainer container d70b6a801e599fcbe369e8b7a8731900fd088f3cd7ee66fe8cf9db5280166a88.
Jul 7 06:04:46.303809 containerd[1583]: time="2025-07-07T06:04:46.303748775Z" level=info msg="CreateContainer within sandbox \"588709122335f65a67b2baf4e7b476481c069aabfcbd478dc6cd14630dd0a4b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f31479275f617e03db6e4425676eecaf0270634f69a98f46fe62ef871469c5ee\""
Jul 7 06:04:46.306111 containerd[1583]: time="2025-07-07T06:04:46.305397515Z" level=info msg="StartContainer for \"f31479275f617e03db6e4425676eecaf0270634f69a98f46fe62ef871469c5ee\""
Jul 7 06:04:46.306228 containerd[1583]: time="2025-07-07T06:04:46.306182365Z" level=info msg="connecting to shim f31479275f617e03db6e4425676eecaf0270634f69a98f46fe62ef871469c5ee" address="unix:///run/containerd/s/376ac26b96936ae3b060163f157e0709c0da12137a790748f56fde9bb9ad14b5" protocol=ttrpc version=3
Jul 7 06:04:46.329391 systemd[1]: Started cri-containerd-f31479275f617e03db6e4425676eecaf0270634f69a98f46fe62ef871469c5ee.scope - libcontainer container f31479275f617e03db6e4425676eecaf0270634f69a98f46fe62ef871469c5ee.
Jul 7 06:04:46.343554 containerd[1583]: time="2025-07-07T06:04:46.343498736Z" level=info msg="StartContainer for \"d70b6a801e599fcbe369e8b7a8731900fd088f3cd7ee66fe8cf9db5280166a88\" returns successfully"
Jul 7 06:04:46.367780 containerd[1583]: time="2025-07-07T06:04:46.367726335Z" level=info msg="StartContainer for \"f31479275f617e03db6e4425676eecaf0270634f69a98f46fe62ef871469c5ee\" returns successfully"
Jul 7 06:04:46.864878 kubelet[2725]: E0707 06:04:46.864842    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:46.868127 kubelet[2725]: E0707 06:04:46.868020    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:47.104501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408546069.mount: Deactivated successfully.
Jul 7 06:04:47.187160 kubelet[2725]: I0707 06:04:47.186962    2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vwjrq" podStartSLOduration=20.18693945 podStartE2EDuration="20.18693945s" podCreationTimestamp="2025-07-07 06:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:47.066586962 +0000 UTC m=+25.435932164" watchObservedRunningTime="2025-07-07 06:04:47.18693945 +0000 UTC m=+25.556284651"
Jul 7 06:04:47.237852 kubelet[2725]: I0707 06:04:47.237688    2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cstpq" podStartSLOduration=20.237669206 podStartE2EDuration="20.237669206s" podCreationTimestamp="2025-07-07 06:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:47.236715879 +0000 UTC m=+25.606061080" watchObservedRunningTime="2025-07-07 06:04:47.237669206 +0000 UTC m=+25.607014407"
Jul 7 06:04:47.870041 kubelet[2725]: E0707 06:04:47.870000    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:47.870226 kubelet[2725]: E0707 06:04:47.870000    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:48.871994 kubelet[2725]: E0707 06:04:48.871938    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:48.871994 kubelet[2725]: E0707 06:04:48.871968    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:52.180922 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:40512.service - OpenSSH per-connection server daemon (10.0.0.1:40512).
Jul 7 06:04:52.240015 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 40512 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:04:52.242182 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:52.247573 systemd-logind[1567]: New session 8 of user core.
Jul 7 06:04:52.261251 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 06:04:52.396888 sshd[4067]: Connection closed by 10.0.0.1 port 40512
Jul 7 06:04:52.397244 sshd-session[4065]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:52.402723 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:40512.service: Deactivated successfully.
Jul 7 06:04:52.404975 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 06:04:52.405916 systemd-logind[1567]: Session 8 logged out. Waiting for processes to exit.
Jul 7 06:04:52.407594 systemd-logind[1567]: Removed session 8.
Jul 7 06:04:57.411390 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:38298.service - OpenSSH per-connection server daemon (10.0.0.1:38298).
Jul 7 06:04:57.469550 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 38298 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:04:57.471436 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:57.477221 systemd-logind[1567]: New session 9 of user core.
Jul 7 06:04:57.488286 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 06:04:57.606497 sshd[4085]: Connection closed by 10.0.0.1 port 38298
Jul 7 06:04:57.607052 sshd-session[4082]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:57.613459 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:38298.service: Deactivated successfully.
Jul 7 06:04:57.615896 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 06:04:57.617068 systemd-logind[1567]: Session 9 logged out. Waiting for processes to exit.
Jul 7 06:04:57.618680 systemd-logind[1567]: Removed session 9.
Jul 7 06:05:02.629528 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:38304.service - OpenSSH per-connection server daemon (10.0.0.1:38304).
Jul 7 06:05:02.688004 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 38304 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:02.690517 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:02.697739 systemd-logind[1567]: New session 10 of user core.
Jul 7 06:05:02.715284 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 06:05:02.870604 sshd[4104]: Connection closed by 10.0.0.1 port 38304
Jul 7 06:05:02.871031 sshd-session[4102]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:02.876908 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:38304.service: Deactivated successfully.
Jul 7 06:05:02.880129 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 06:05:02.881485 systemd-logind[1567]: Session 10 logged out. Waiting for processes to exit.
Jul 7 06:05:02.884142 systemd-logind[1567]: Removed session 10.
Jul 7 06:05:07.896248 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:48102.service - OpenSSH per-connection server daemon (10.0.0.1:48102).
Jul 7 06:05:07.946811 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 48102 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:07.948553 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:07.952840 systemd-logind[1567]: New session 11 of user core.
Jul 7 06:05:07.967237 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:05:08.195812 sshd[4121]: Connection closed by 10.0.0.1 port 48102
Jul 7 06:05:08.196107 sshd-session[4119]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:08.207321 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:48102.service: Deactivated successfully.
Jul 7 06:05:08.209490 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 06:05:08.210410 systemd-logind[1567]: Session 11 logged out. Waiting for processes to exit.
Jul 7 06:05:08.213780 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:48110.service - OpenSSH per-connection server daemon (10.0.0.1:48110).
Jul 7 06:05:08.214694 systemd-logind[1567]: Removed session 11.
Jul 7 06:05:08.259256 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 48110 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:08.260641 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:08.265341 systemd-logind[1567]: New session 12 of user core.
Jul 7 06:05:08.276225 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 06:05:08.418341 sshd[4137]: Connection closed by 10.0.0.1 port 48110
Jul 7 06:05:08.418984 sshd-session[4135]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:08.434734 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:48110.service: Deactivated successfully.
Jul 7 06:05:08.437657 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 06:05:08.440674 systemd-logind[1567]: Session 12 logged out. Waiting for processes to exit.
Jul 7 06:05:08.446239 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:48126.service - OpenSSH per-connection server daemon (10.0.0.1:48126).
Jul 7 06:05:08.447121 systemd-logind[1567]: Removed session 12.
Jul 7 06:05:08.505223 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 48126 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:08.506799 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:08.511702 systemd-logind[1567]: New session 13 of user core.
Jul 7 06:05:08.521281 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:05:08.723652 sshd[4150]: Connection closed by 10.0.0.1 port 48126
Jul 7 06:05:08.723905 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:08.729066 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:48126.service: Deactivated successfully.
Jul 7 06:05:08.731365 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:05:08.732409 systemd-logind[1567]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:05:08.733911 systemd-logind[1567]: Removed session 13.
Jul 7 06:05:13.738307 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:48136.service - OpenSSH per-connection server daemon (10.0.0.1:48136).
Jul 7 06:05:13.791000 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 48136 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:13.792861 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:13.797629 systemd-logind[1567]: New session 14 of user core.
Jul 7 06:05:13.808226 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:05:13.920371 sshd[4167]: Connection closed by 10.0.0.1 port 48136
Jul 7 06:05:13.920690 sshd-session[4165]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:13.925993 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:48136.service: Deactivated successfully.
Jul 7 06:05:13.928533 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:05:13.929416 systemd-logind[1567]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:05:13.931114 systemd-logind[1567]: Removed session 14.
Jul 7 06:05:18.935296 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:59252.service - OpenSSH per-connection server daemon (10.0.0.1:59252).
Jul 7 06:05:18.986866 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 59252 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:18.988357 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:18.993459 systemd-logind[1567]: New session 15 of user core.
Jul 7 06:05:19.006233 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:05:19.122058 sshd[4182]: Connection closed by 10.0.0.1 port 59252
Jul 7 06:05:19.122501 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:19.126907 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:59252.service: Deactivated successfully.
Jul 7 06:05:19.129059 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:05:19.130109 systemd-logind[1567]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:05:19.131505 systemd-logind[1567]: Removed session 15.
Jul 7 06:05:24.135533 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:59254.service - OpenSSH per-connection server daemon (10.0.0.1:59254).
Jul 7 06:05:24.191823 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 59254 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:24.193826 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:24.199254 systemd-logind[1567]: New session 16 of user core.
Jul 7 06:05:24.213255 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:05:24.339957 sshd[4199]: Connection closed by 10.0.0.1 port 59254
Jul 7 06:05:24.340491 sshd-session[4197]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:24.351154 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:59254.service: Deactivated successfully.
Jul 7 06:05:24.353713 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:05:24.354697 systemd-logind[1567]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:05:24.359356 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:59270.service - OpenSSH per-connection server daemon (10.0.0.1:59270).
Jul 7 06:05:24.360201 systemd-logind[1567]: Removed session 16.
Jul 7 06:05:24.418749 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 59270 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:24.420812 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:24.426193 systemd-logind[1567]: New session 17 of user core.
Jul 7 06:05:24.435292 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:05:24.679280 sshd[4215]: Connection closed by 10.0.0.1 port 59270
Jul 7 06:05:24.679822 sshd-session[4213]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:24.695273 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:59270.service: Deactivated successfully.
Jul 7 06:05:24.697723 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:05:24.698890 systemd-logind[1567]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:05:24.702944 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:59278.service - OpenSSH per-connection server daemon (10.0.0.1:59278).
Jul 7 06:05:24.703664 systemd-logind[1567]: Removed session 17.
Jul 7 06:05:24.761619 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 59278 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:24.763445 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:24.768752 systemd-logind[1567]: New session 18 of user core.
Jul 7 06:05:24.782260 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:05:26.489430 sshd[4228]: Connection closed by 10.0.0.1 port 59278
Jul 7 06:05:26.489780 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:26.501939 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:59278.service: Deactivated successfully.
Jul 7 06:05:26.504454 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:05:26.505642 systemd-logind[1567]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:05:26.510338 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:46340.service - OpenSSH per-connection server daemon (10.0.0.1:46340).
Jul 7 06:05:26.512410 systemd-logind[1567]: Removed session 18.
Jul 7 06:05:26.559278 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 46340 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:26.561637 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:26.566855 systemd-logind[1567]: New session 19 of user core.
Jul 7 06:05:26.576249 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:05:27.031189 sshd[4249]: Connection closed by 10.0.0.1 port 46340
Jul 7 06:05:27.031604 sshd-session[4247]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:27.043380 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:46340.service: Deactivated successfully.
Jul 7 06:05:27.045729 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:05:27.048361 systemd-logind[1567]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:05:27.049768 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:46356.service - OpenSSH per-connection server daemon (10.0.0.1:46356).
Jul 7 06:05:27.051522 systemd-logind[1567]: Removed session 19.
Jul 7 06:05:27.109923 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 46356 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:27.111923 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:27.117000 systemd-logind[1567]: New session 20 of user core.
Jul 7 06:05:27.124227 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:05:27.247727 sshd[4262]: Connection closed by 10.0.0.1 port 46356
Jul 7 06:05:27.248127 sshd-session[4260]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:27.253716 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:46356.service: Deactivated successfully.
Jul 7 06:05:27.255979 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:05:27.256832 systemd-logind[1567]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:05:27.258111 systemd-logind[1567]: Removed session 20.
Jul 7 06:05:31.730458 kubelet[2725]: E0707 06:05:31.730346    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:32.261334 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:46360.service - OpenSSH per-connection server daemon (10.0.0.1:46360).
Jul 7 06:05:32.306502 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 46360 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:32.307882 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:32.312401 systemd-logind[1567]: New session 21 of user core.
Jul 7 06:05:32.322237 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:05:32.437913 sshd[4280]: Connection closed by 10.0.0.1 port 46360
Jul 7 06:05:32.438277 sshd-session[4278]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:32.443236 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:46360.service: Deactivated successfully.
Jul 7 06:05:32.445441 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:05:32.446520 systemd-logind[1567]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:05:32.448149 systemd-logind[1567]: Removed session 21.
Jul 7 06:05:35.730388 kubelet[2725]: E0707 06:05:35.730335    2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:37.451490 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:57720.service - OpenSSH per-connection server daemon (10.0.0.1:57720).
Jul 7 06:05:37.502551 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 57720 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:37.504065 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:37.508916 systemd-logind[1567]: New session 22 of user core.
Jul 7 06:05:37.518222 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:05:37.635153 sshd[4299]: Connection closed by 10.0.0.1 port 57720
Jul 7 06:05:37.635476 sshd-session[4297]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:37.638772 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:57720.service: Deactivated successfully.
Jul 7 06:05:37.640921 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:05:37.643557 systemd-logind[1567]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:05:37.644651 systemd-logind[1567]: Removed session 22.
Jul 7 06:05:40.730292 kubelet[2725]: E0707 06:05:40.730221 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:42.654747 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:57726.service - OpenSSH per-connection server daemon (10.0.0.1:57726).
Jul 7 06:05:42.704237 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 57726 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:42.706406 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:42.711151 systemd-logind[1567]: New session 23 of user core.
Jul 7 06:05:42.722235 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:05:42.842013 sshd[4314]: Connection closed by 10.0.0.1 port 57726
Jul 7 06:05:42.842382 sshd-session[4312]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:42.847233 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:57726.service: Deactivated successfully.
Jul 7 06:05:42.850013 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:05:42.851207 systemd-logind[1567]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:05:42.853106 systemd-logind[1567]: Removed session 23.
Jul 7 06:05:47.856097 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:58562.service - OpenSSH per-connection server daemon (10.0.0.1:58562).
Jul 7 06:05:47.907988 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 58562 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:47.909732 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:47.914867 systemd-logind[1567]: New session 24 of user core.
Jul 7 06:05:47.923332 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 06:05:48.041945 sshd[4330]: Connection closed by 10.0.0.1 port 58562
Jul 7 06:05:48.042435 sshd-session[4328]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:48.057371 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:58562.service: Deactivated successfully.
Jul 7 06:05:48.059609 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 06:05:48.060877 systemd-logind[1567]: Session 24 logged out. Waiting for processes to exit.
Jul 7 06:05:48.064604 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:58568.service - OpenSSH per-connection server daemon (10.0.0.1:58568).
Jul 7 06:05:48.065637 systemd-logind[1567]: Removed session 24.
Jul 7 06:05:48.119047 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 58568 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:05:48.120650 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:48.125980 systemd-logind[1567]: New session 25 of user core.
Jul 7 06:05:48.137289 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 06:05:49.561877 containerd[1583]: time="2025-07-07T06:05:49.561341253Z" level=info msg="StopContainer for \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" with timeout 30 (s)"
Jul 7 06:05:49.562524 containerd[1583]: time="2025-07-07T06:05:49.562482981Z" level=info msg="Stop container \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" with signal terminated"
Jul 7 06:05:49.579407 systemd[1]: cri-containerd-cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793.scope: Deactivated successfully.
Jul 7 06:05:49.581243 containerd[1583]: time="2025-07-07T06:05:49.581184335Z" level=info msg="received exit event container_id:\"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" id:\"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" pid:3276 exited_at:{seconds:1751868349 nanos:580790915}"
Jul 7 06:05:49.581482 containerd[1583]: time="2025-07-07T06:05:49.581452196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" id:\"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" pid:3276 exited_at:{seconds:1751868349 nanos:580790915}"
Jul 7 06:05:49.598398 containerd[1583]: time="2025-07-07T06:05:49.598349227Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 06:05:49.600429 containerd[1583]: time="2025-07-07T06:05:49.600398368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" id:\"2773b564fbdf49a5c9b29a6cb28bb6099f4c2786b3f14fd760e68d96e4100620\" pid:4374 exited_at:{seconds:1751868349 nanos:600020837}"
Jul 7 06:05:49.603588 containerd[1583]: time="2025-07-07T06:05:49.603552756Z" level=info msg="StopContainer for \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" with timeout 2 (s)"
Jul 7 06:05:49.603986 containerd[1583]: time="2025-07-07T06:05:49.603884971Z" level=info msg="Stop container \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" with signal terminated"
Jul 7 06:05:49.608577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793-rootfs.mount: Deactivated successfully.
Jul 7 06:05:49.613329 systemd-networkd[1500]: lxc_health: Link DOWN
Jul 7 06:05:49.613341 systemd-networkd[1500]: lxc_health: Lost carrier
Jul 7 06:05:49.622307 containerd[1583]: time="2025-07-07T06:05:49.622255012Z" level=info msg="StopContainer for \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" returns successfully"
Jul 7 06:05:49.623613 containerd[1583]: time="2025-07-07T06:05:49.623574329Z" level=info msg="StopPodSandbox for \"565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d\""
Jul 7 06:05:49.623684 containerd[1583]: time="2025-07-07T06:05:49.623661255Z" level=info msg="Container to stop \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:05:49.631286 systemd[1]: cri-containerd-565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d.scope: Deactivated successfully.
Jul 7 06:05:49.632876 containerd[1583]: time="2025-07-07T06:05:49.632718407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d\" id:\"565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d\" pid:3006 exit_status:137 exited_at:{seconds:1751868349 nanos:632164801}"
Jul 7 06:05:49.633802 systemd[1]: cri-containerd-d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4.scope: Deactivated successfully.
Jul 7 06:05:49.634245 systemd[1]: cri-containerd-d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4.scope: Consumed 7.103s CPU time, 123.3M memory peak, 236K read from disk, 13.3M written to disk.
Jul 7 06:05:49.636986 containerd[1583]: time="2025-07-07T06:05:49.636890869Z" level=info msg="received exit event container_id:\"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" id:\"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" pid:3387 exited_at:{seconds:1751868349 nanos:636379893}"
Jul 7 06:05:49.664501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d-rootfs.mount: Deactivated successfully.
Jul 7 06:05:49.666970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4-rootfs.mount: Deactivated successfully.
Jul 7 06:05:49.672294 containerd[1583]: time="2025-07-07T06:05:49.672254786Z" level=info msg="shim disconnected" id=565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d namespace=k8s.io
Jul 7 06:05:49.672444 containerd[1583]: time="2025-07-07T06:05:49.672396978Z" level=warning msg="cleaning up after shim disconnected" id=565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d namespace=k8s.io
Jul 7 06:05:49.697094 containerd[1583]: time="2025-07-07T06:05:49.672411605Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:05:49.719240 containerd[1583]: time="2025-07-07T06:05:49.719190484Z" level=info msg="StopContainer for \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" returns successfully"
Jul 7 06:05:49.720383 containerd[1583]: time="2025-07-07T06:05:49.720326873Z" level=info msg="StopPodSandbox for \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\""
Jul 7 06:05:49.720512 containerd[1583]: time="2025-07-07T06:05:49.720398078Z" level=info msg="Container to stop \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:05:49.720512 containerd[1583]: time="2025-07-07T06:05:49.720410843Z" level=info msg="Container to stop \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:05:49.720512 containerd[1583]: time="2025-07-07T06:05:49.720419750Z" level=info msg="Container to stop \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:05:49.720512 containerd[1583]: time="2025-07-07T06:05:49.720427575Z" level=info msg="Container to stop \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:05:49.720512 containerd[1583]: time="2025-07-07T06:05:49.720435440Z" level=info msg="Container to stop \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:05:49.727195 systemd[1]: cri-containerd-092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8.scope: Deactivated successfully.
Jul 7 06:05:49.729670 kubelet[2725]: E0707 06:05:49.729630 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:49.730389 kubelet[2725]: E0707 06:05:49.729812 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:49.730389 kubelet[2725]: E0707 06:05:49.730041 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:49.744667 containerd[1583]: time="2025-07-07T06:05:49.744464509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" id:\"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" pid:3387 exited_at:{seconds:1751868349 nanos:636379893}"
Jul 7 06:05:49.744667 containerd[1583]: time="2025-07-07T06:05:49.744507642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" id:\"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" pid:2873 exit_status:137 exited_at:{seconds:1751868349 nanos:732819260}"
Jul 7 06:05:49.748858 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d-shm.mount: Deactivated successfully.
Jul 7 06:05:49.755525 containerd[1583]: time="2025-07-07T06:05:49.755474388Z" level=info msg="received exit event sandbox_id:\"565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d\" exit_status:137 exited_at:{seconds:1751868349 nanos:632164801}"
Jul 7 06:05:49.762050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8-rootfs.mount: Deactivated successfully.
Jul 7 06:05:49.765120 containerd[1583]: time="2025-07-07T06:05:49.765050209Z" level=info msg="TearDown network for sandbox \"565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d\" successfully"
Jul 7 06:05:49.765120 containerd[1583]: time="2025-07-07T06:05:49.765116836Z" level=info msg="StopPodSandbox for \"565517c61efc4671fe36480685e1bae07fb57fdeeccabc26a682a6516a36fc7d\" returns successfully"
Jul 7 06:05:49.765792 containerd[1583]: time="2025-07-07T06:05:49.765753671Z" level=info msg="received exit event sandbox_id:\"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" exit_status:137 exited_at:{seconds:1751868349 nanos:732819260}"
Jul 7 06:05:49.766105 containerd[1583]: time="2025-07-07T06:05:49.766031672Z" level=info msg="TearDown network for sandbox \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" successfully"
Jul 7 06:05:49.766105 containerd[1583]: time="2025-07-07T06:05:49.766058242Z" level=info msg="StopPodSandbox for \"092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8\" returns successfully"
Jul 7 06:05:49.766576 containerd[1583]: time="2025-07-07T06:05:49.766544491Z" level=info msg="shim disconnected" id=092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8 namespace=k8s.io
Jul 7 06:05:49.766576 containerd[1583]: time="2025-07-07T06:05:49.766568136Z" level=warning msg="cleaning up after shim disconnected" id=092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8 namespace=k8s.io
Jul 7 06:05:49.766638 containerd[1583]: time="2025-07-07T06:05:49.766575309Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:05:49.947346 kubelet[2725]: I0707 06:05:49.947139 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-host-proc-sys-net\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947346 kubelet[2725]: I0707 06:05:49.947203 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-hubble-tls\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947346 kubelet[2725]: I0707 06:05:49.947221 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-hostproc\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947346 kubelet[2725]: I0707 06:05:49.947234 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-etc-cni-netd\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947346 kubelet[2725]: I0707 06:05:49.947256 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-clustermesh-secrets\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947346 kubelet[2725]: I0707 06:05:49.947272 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtmg6\" (UniqueName: \"kubernetes.io/projected/f7c2a5bc-8d70-44fa-809e-a4320d882493-kube-api-access-wtmg6\") pod \"f7c2a5bc-8d70-44fa-809e-a4320d882493\" (UID: \"f7c2a5bc-8d70-44fa-809e-a4320d882493\") "
Jul 7 06:05:49.947653 kubelet[2725]: I0707 06:05:49.947289 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-xtables-lock\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947653 kubelet[2725]: I0707 06:05:49.947303 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-bpf-maps\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947653 kubelet[2725]: I0707 06:05:49.947326 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-host-proc-sys-kernel\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947653 kubelet[2725]: I0707 06:05:49.947340 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-cgroup\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947653 kubelet[2725]: I0707 06:05:49.947355 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7c2a5bc-8d70-44fa-809e-a4320d882493-cilium-config-path\") pod \"f7c2a5bc-8d70-44fa-809e-a4320d882493\" (UID: \"f7c2a5bc-8d70-44fa-809e-a4320d882493\") "
Jul 7 06:05:49.947653 kubelet[2725]: I0707 06:05:49.947369 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-lib-modules\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947810 kubelet[2725]: I0707 06:05:49.947384 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92v2\" (UniqueName: \"kubernetes.io/projected/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-kube-api-access-h92v2\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947810 kubelet[2725]: I0707 06:05:49.947396 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-run\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947810 kubelet[2725]: I0707 06:05:49.947409 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cni-path\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947810 kubelet[2725]: I0707 06:05:49.947390 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:05:49.947810 kubelet[2725]: I0707 06:05:49.947429 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-config-path\") pod \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\" (UID: \"4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343\") "
Jul 7 06:05:49.947810 kubelet[2725]: I0707 06:05:49.947553 2725 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:49.948042 kubelet[2725]: I0707 06:05:49.947582 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:05:49.948042 kubelet[2725]: I0707 06:05:49.947602 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-hostproc" (OuterVolumeSpecName: "hostproc") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:05:49.948042 kubelet[2725]: I0707 06:05:49.947641 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:05:49.948704 kubelet[2725]: I0707 06:05:49.948359 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:05:49.948704 kubelet[2725]: I0707 06:05:49.948529 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:05:49.952530 kubelet[2725]: I0707 06:05:49.952457 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 7 06:05:49.952683 kubelet[2725]: I0707 06:05:49.952586 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c2a5bc-8d70-44fa-809e-a4320d882493-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f7c2a5bc-8d70-44fa-809e-a4320d882493" (UID: "f7c2a5bc-8d70-44fa-809e-a4320d882493"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 7 06:05:49.952683 kubelet[2725]: I0707 06:05:49.952625 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:05:49.952973 kubelet[2725]: I0707 06:05:49.952856 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:05:49.952973 kubelet[2725]: I0707 06:05:49.952909 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cni-path" (OuterVolumeSpecName: "cni-path") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:05:49.952973 kubelet[2725]: I0707 06:05:49.952941 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:05:49.953499 kubelet[2725]: I0707 06:05:49.953448 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 06:05:49.953563 kubelet[2725]: I0707 06:05:49.953530 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 7 06:05:49.956407 kubelet[2725]: I0707 06:05:49.956374 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-kube-api-access-h92v2" (OuterVolumeSpecName: "kube-api-access-h92v2") pod "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" (UID: "4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343"). InnerVolumeSpecName "kube-api-access-h92v2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 06:05:49.956527 kubelet[2725]: I0707 06:05:49.956485 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c2a5bc-8d70-44fa-809e-a4320d882493-kube-api-access-wtmg6" (OuterVolumeSpecName: "kube-api-access-wtmg6") pod "f7c2a5bc-8d70-44fa-809e-a4320d882493" (UID: "f7c2a5bc-8d70-44fa-809e-a4320d882493"). InnerVolumeSpecName "kube-api-access-wtmg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 06:05:49.999348 kubelet[2725]: I0707 06:05:49.999305 2725 scope.go:117] "RemoveContainer" containerID="d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4"
Jul 7 06:05:50.003293 containerd[1583]: time="2025-07-07T06:05:50.002778477Z" level=info msg="RemoveContainer for \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\""
Jul 7 06:05:50.017697 containerd[1583]: time="2025-07-07T06:05:50.017635253Z" level=info msg="RemoveContainer for \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" returns successfully"
Jul 7 06:05:50.017795 systemd[1]: Removed slice kubepods-besteffort-podf7c2a5bc_8d70_44fa_809e_a4320d882493.slice - libcontainer container kubepods-besteffort-podf7c2a5bc_8d70_44fa_809e_a4320d882493.slice.
Jul 7 06:05:50.018426 kubelet[2725]: I0707 06:05:50.018296 2725 scope.go:117] "RemoveContainer" containerID="469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8"
Jul 7 06:05:50.019945 systemd[1]: Removed slice kubepods-burstable-pod4d7e9bc8_0e9f_498f_ab5f_8f2789eb8343.slice - libcontainer container kubepods-burstable-pod4d7e9bc8_0e9f_498f_ab5f_8f2789eb8343.slice.
Jul 7 06:05:50.020034 systemd[1]: kubepods-burstable-pod4d7e9bc8_0e9f_498f_ab5f_8f2789eb8343.slice: Consumed 7.226s CPU time, 123.6M memory peak, 248K read from disk, 13.3M written to disk.
Jul 7 06:05:50.029167 containerd[1583]: time="2025-07-07T06:05:50.029103486Z" level=info msg="RemoveContainer for \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\""
Jul 7 06:05:50.048046 kubelet[2725]: I0707 06:05:50.048000 2725 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048046 kubelet[2725]: I0707 06:05:50.048042 2725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtmg6\" (UniqueName: \"kubernetes.io/projected/f7c2a5bc-8d70-44fa-809e-a4320d882493-kube-api-access-wtmg6\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048046 kubelet[2725]: I0707 06:05:50.048052 2725 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048188 kubelet[2725]: I0707 06:05:50.048061 2725 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048188 kubelet[2725]: I0707 06:05:50.048070 2725 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048188 kubelet[2725]: I0707 06:05:50.048104 2725 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048188 kubelet[2725]: I0707 06:05:50.048115 2725 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048188 kubelet[2725]: I0707 06:05:50.048122 2725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92v2\" (UniqueName: \"kubernetes.io/projected/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-kube-api-access-h92v2\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048188 kubelet[2725]: I0707 06:05:50.048129 2725 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048188 kubelet[2725]: I0707 06:05:50.048138 2725 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048188 kubelet[2725]: I0707 06:05:50.048146 2725 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7c2a5bc-8d70-44fa-809e-a4320d882493-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048357 kubelet[2725]: I0707 06:05:50.048153 2725 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048357 kubelet[2725]: I0707 06:05:50.048161 2725 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048357 kubelet[2725]: I0707 06:05:50.048170 2725 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.048357 kubelet[2725]: I0707 06:05:50.048179 2725 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 7 06:05:50.085932 containerd[1583]: time="2025-07-07T06:05:50.085870249Z" level=info msg="RemoveContainer for \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\" returns successfully"
Jul 7 06:05:50.086378 kubelet[2725]: I0707 06:05:50.086346 2725 scope.go:117] "RemoveContainer" containerID="db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a"
Jul 7 06:05:50.089118 containerd[1583]: time="2025-07-07T06:05:50.089060634Z" level=info msg="RemoveContainer for \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\""
Jul 7 06:05:50.174707 containerd[1583]: time="2025-07-07T06:05:50.174645963Z" level=info msg="RemoveContainer for \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\" returns successfully"
Jul 7 06:05:50.174914 kubelet[2725]: I0707 06:05:50.174882 2725 scope.go:117] "RemoveContainer" containerID="f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23"
Jul 7 06:05:50.176537 containerd[1583]: time="2025-07-07T06:05:50.176490320Z" level=info msg="RemoveContainer for \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\""
Jul 7 06:05:50.243216 containerd[1583]: time="2025-07-07T06:05:50.243017138Z" level=info msg="RemoveContainer for \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\" returns successfully"
Jul 7 06:05:50.243619 kubelet[2725]: I0707 06:05:50.243595 2725 scope.go:117] "RemoveContainer" containerID="1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87"
Jul 7 06:05:50.245589 containerd[1583]: time="2025-07-07T06:05:50.245270475Z" level=info msg="RemoveContainer for \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\""
Jul 7 06:05:50.338465 containerd[1583]: time="2025-07-07T06:05:50.338414011Z" level=info msg="RemoveContainer for \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\" returns successfully"
Jul 7 06:05:50.338713 kubelet[2725]: I0707 06:05:50.338673 2725 scope.go:117] "RemoveContainer" containerID="d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4"
Jul 7 06:05:50.338985 containerd[1583]: time="2025-07-07T06:05:50.338911459Z" level=error msg="ContainerStatus for \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\": not found"
Jul 7 06:05:50.342753 kubelet[2725]: E0707 06:05:50.342709 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\": not found" containerID="d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4"
Jul 7 06:05:50.344890 kubelet[2725]: I0707 06:05:50.344774 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4"} err="failed to get container status \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2003e21bce020c84252bb809966b2deca81c094c6cdbd1b0aa187cfebb37cf4\": not found"
Jul 7 06:05:50.344890 kubelet[2725]: I0707 06:05:50.344873 2725 scope.go:117] "RemoveContainer" containerID="469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8"
Jul 7 06:05:50.345134 containerd[1583]: time="2025-07-07T06:05:50.345099171Z" level=error msg="ContainerStatus for \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to
find container \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\": not found" Jul 7 06:05:50.345257 kubelet[2725]: E0707 06:05:50.345222 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\": not found" containerID="469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8" Jul 7 06:05:50.345316 kubelet[2725]: I0707 06:05:50.345255 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8"} err="failed to get container status \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"469ad65aa80c405dbe3edb247e602c5312ff6fea57add7c98c5cd0cc1832a7f8\": not found" Jul 7 06:05:50.345316 kubelet[2725]: I0707 06:05:50.345277 2725 scope.go:117] "RemoveContainer" containerID="db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a" Jul 7 06:05:50.345497 containerd[1583]: time="2025-07-07T06:05:50.345455941Z" level=error msg="ContainerStatus for \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\": not found" Jul 7 06:05:50.345616 kubelet[2725]: E0707 06:05:50.345584 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\": not found" containerID="db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a" Jul 7 06:05:50.345670 kubelet[2725]: I0707 06:05:50.345613 2725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a"} err="failed to get container status \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\": rpc error: code = NotFound desc = an error occurred when try to find container \"db70c83995cfc9aac5c2b3fa54f68bb89084682ef4eb0ffe3b344f1160f6e98a\": not found" Jul 7 06:05:50.345670 kubelet[2725]: I0707 06:05:50.345635 2725 scope.go:117] "RemoveContainer" containerID="f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23" Jul 7 06:05:50.345841 containerd[1583]: time="2025-07-07T06:05:50.345802843Z" level=error msg="ContainerStatus for \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\": not found" Jul 7 06:05:50.345977 kubelet[2725]: E0707 06:05:50.345945 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\": not found" containerID="f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23" Jul 7 06:05:50.346028 kubelet[2725]: I0707 06:05:50.345977 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23"} err="failed to get container status \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3a75844e356673094e8a5ca7dac623d5522d5a42c9d5dba3471bf47ccbafe23\": not found" Jul 7 06:05:50.346028 kubelet[2725]: I0707 06:05:50.345994 2725 scope.go:117] "RemoveContainer" containerID="1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87" Jul 7 06:05:50.346244 containerd[1583]: 
time="2025-07-07T06:05:50.346208606Z" level=error msg="ContainerStatus for \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\": not found" Jul 7 06:05:50.346352 kubelet[2725]: E0707 06:05:50.346326 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\": not found" containerID="1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87" Jul 7 06:05:50.346426 kubelet[2725]: I0707 06:05:50.346347 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87"} err="failed to get container status \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\": rpc error: code = NotFound desc = an error occurred when try to find container \"1914fb6f4d294ea38c39dafdf976a5da197383226d520fd7b8506eb091f26b87\": not found" Jul 7 06:05:50.346426 kubelet[2725]: I0707 06:05:50.346363 2725 scope.go:117] "RemoveContainer" containerID="cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793" Jul 7 06:05:50.347819 containerd[1583]: time="2025-07-07T06:05:50.347792628Z" level=info msg="RemoveContainer for \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\"" Jul 7 06:05:50.388350 containerd[1583]: time="2025-07-07T06:05:50.388303485Z" level=info msg="RemoveContainer for \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" returns successfully" Jul 7 06:05:50.388509 kubelet[2725]: I0707 06:05:50.388472 2725 scope.go:117] "RemoveContainer" containerID="cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793" Jul 7 06:05:50.388708 containerd[1583]: time="2025-07-07T06:05:50.388675425Z" 
level=error msg="ContainerStatus for \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\": not found" Jul 7 06:05:50.388821 kubelet[2725]: E0707 06:05:50.388793 2725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\": not found" containerID="cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793" Jul 7 06:05:50.388939 kubelet[2725]: I0707 06:05:50.388889 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793"} err="failed to get container status \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\": rpc error: code = NotFound desc = an error occurred when try to find container \"cacb5cf651d4875bea7accf76e4eb543ffc1b658c64ba65bab7e8ff23daf0793\": not found" Jul 7 06:05:50.608441 systemd[1]: var-lib-kubelet-pods-f7c2a5bc\x2d8d70\x2d44fa\x2d809e\x2da4320d882493-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwtmg6.mount: Deactivated successfully. Jul 7 06:05:50.608582 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-092ab031372830ff2adbec3549a87572ff0001f7c2a7713ce44d3c6205b87bb8-shm.mount: Deactivated successfully. Jul 7 06:05:50.608681 systemd[1]: var-lib-kubelet-pods-4d7e9bc8\x2d0e9f\x2d498f\x2dab5f\x2d8f2789eb8343-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh92v2.mount: Deactivated successfully. Jul 7 06:05:50.608781 systemd[1]: var-lib-kubelet-pods-4d7e9bc8\x2d0e9f\x2d498f\x2dab5f\x2d8f2789eb8343-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 7 06:05:50.608878 systemd[1]: var-lib-kubelet-pods-4d7e9bc8\x2d0e9f\x2d498f\x2dab5f\x2d8f2789eb8343-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 06:05:51.446858 sshd[4345]: Connection closed by 10.0.0.1 port 58568 Jul 7 06:05:51.447379 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:51.457379 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:58568.service: Deactivated successfully. Jul 7 06:05:51.459431 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 06:05:51.460375 systemd-logind[1567]: Session 25 logged out. Waiting for processes to exit. Jul 7 06:05:51.463850 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:58578.service - OpenSSH per-connection server daemon (10.0.0.1:58578). Jul 7 06:05:51.464662 systemd-logind[1567]: Removed session 25. Jul 7 06:05:51.523287 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 58578 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:05:51.525111 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:51.529897 systemd-logind[1567]: New session 26 of user core. Jul 7 06:05:51.539258 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 7 06:05:51.733192 kubelet[2725]: I0707 06:05:51.733020 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" path="/var/lib/kubelet/pods/4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343/volumes" Jul 7 06:05:51.733875 kubelet[2725]: I0707 06:05:51.733852 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c2a5bc-8d70-44fa-809e-a4320d882493" path="/var/lib/kubelet/pods/f7c2a5bc-8d70-44fa-809e-a4320d882493/volumes" Jul 7 06:05:51.786385 kubelet[2725]: E0707 06:05:51.786295 2725 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 06:05:52.417906 sshd[4501]: Connection closed by 10.0.0.1 port 58578 Jul 7 06:05:52.418381 sshd-session[4499]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:52.431250 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:58578.service: Deactivated successfully. 
Jul 7 06:05:52.435203 kubelet[2725]: E0707 06:05:52.434553 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" containerName="mount-bpf-fs" Jul 7 06:05:52.435203 kubelet[2725]: E0707 06:05:52.434592 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" containerName="mount-cgroup" Jul 7 06:05:52.435203 kubelet[2725]: E0707 06:05:52.434599 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" containerName="apply-sysctl-overwrites" Jul 7 06:05:52.435203 kubelet[2725]: E0707 06:05:52.434607 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7c2a5bc-8d70-44fa-809e-a4320d882493" containerName="cilium-operator" Jul 7 06:05:52.435203 kubelet[2725]: E0707 06:05:52.434614 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" containerName="clean-cilium-state" Jul 7 06:05:52.435203 kubelet[2725]: E0707 06:05:52.434621 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" containerName="cilium-agent" Jul 7 06:05:52.435203 kubelet[2725]: I0707 06:05:52.434661 2725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c2a5bc-8d70-44fa-809e-a4320d882493" containerName="cilium-operator" Jul 7 06:05:52.435203 kubelet[2725]: I0707 06:05:52.434671 2725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d7e9bc8-0e9f-498f-ab5f-8f2789eb8343" containerName="cilium-agent" Jul 7 06:05:52.436256 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 06:05:52.437862 systemd-logind[1567]: Session 26 logged out. Waiting for processes to exit. Jul 7 06:05:52.442817 systemd[1]: Started sshd@26-10.0.0.55:22-10.0.0.1:58594.service - OpenSSH per-connection server daemon (10.0.0.1:58594). Jul 7 06:05:52.445141 systemd-logind[1567]: Removed session 26. 
Jul 7 06:05:52.456309 systemd[1]: Created slice kubepods-burstable-pod7114ab1b_722c_425c_a3c2_6238348f50cb.slice - libcontainer container kubepods-burstable-pod7114ab1b_722c_425c_a3c2_6238348f50cb.slice. Jul 7 06:05:52.463432 kubelet[2725]: I0707 06:05:52.463110 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7114ab1b-722c-425c-a3c2-6238348f50cb-etc-cni-netd\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463432 kubelet[2725]: I0707 06:05:52.463141 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7114ab1b-722c-425c-a3c2-6238348f50cb-hubble-tls\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463432 kubelet[2725]: I0707 06:05:52.463157 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7114ab1b-722c-425c-a3c2-6238348f50cb-lib-modules\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463432 kubelet[2725]: I0707 06:05:52.463169 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7114ab1b-722c-425c-a3c2-6238348f50cb-host-proc-sys-net\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463432 kubelet[2725]: I0707 06:05:52.463184 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjw8n\" (UniqueName: \"kubernetes.io/projected/7114ab1b-722c-425c-a3c2-6238348f50cb-kube-api-access-bjw8n\") pod \"cilium-8nqjx\" 
(UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463432 kubelet[2725]: I0707 06:05:52.463198 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7114ab1b-722c-425c-a3c2-6238348f50cb-cilium-cgroup\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463718 kubelet[2725]: I0707 06:05:52.463214 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7114ab1b-722c-425c-a3c2-6238348f50cb-xtables-lock\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463718 kubelet[2725]: I0707 06:05:52.463237 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7114ab1b-722c-425c-a3c2-6238348f50cb-cilium-config-path\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463718 kubelet[2725]: I0707 06:05:52.463252 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7114ab1b-722c-425c-a3c2-6238348f50cb-host-proc-sys-kernel\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463718 kubelet[2725]: I0707 06:05:52.463265 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7114ab1b-722c-425c-a3c2-6238348f50cb-cni-path\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463718 kubelet[2725]: I0707 
06:05:52.463282 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7114ab1b-722c-425c-a3c2-6238348f50cb-clustermesh-secrets\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463718 kubelet[2725]: I0707 06:05:52.463300 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7114ab1b-722c-425c-a3c2-6238348f50cb-cilium-run\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463847 kubelet[2725]: I0707 06:05:52.463312 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7114ab1b-722c-425c-a3c2-6238348f50cb-bpf-maps\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463847 kubelet[2725]: I0707 06:05:52.463326 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7114ab1b-722c-425c-a3c2-6238348f50cb-hostproc\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.463847 kubelet[2725]: I0707 06:05:52.463338 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7114ab1b-722c-425c-a3c2-6238348f50cb-cilium-ipsec-secrets\") pod \"cilium-8nqjx\" (UID: \"7114ab1b-722c-425c-a3c2-6238348f50cb\") " pod="kube-system/cilium-8nqjx" Jul 7 06:05:52.497383 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 58594 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:05:52.499264 
sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:52.503633 systemd-logind[1567]: New session 27 of user core. Jul 7 06:05:52.520238 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 7 06:05:52.575855 sshd[4515]: Connection closed by 10.0.0.1 port 58594 Jul 7 06:05:52.576649 sshd-session[4513]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:52.585943 systemd[1]: sshd@26-10.0.0.55:22-10.0.0.1:58594.service: Deactivated successfully. Jul 7 06:05:52.587914 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 06:05:52.588774 systemd-logind[1567]: Session 27 logged out. Waiting for processes to exit. Jul 7 06:05:52.591936 systemd[1]: Started sshd@27-10.0.0.55:22-10.0.0.1:58598.service - OpenSSH per-connection server daemon (10.0.0.1:58598). Jul 7 06:05:52.593131 systemd-logind[1567]: Removed session 27. Jul 7 06:05:52.639144 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 58598 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:05:52.641273 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:52.648382 systemd-logind[1567]: New session 28 of user core. Jul 7 06:05:52.657211 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 7 06:05:52.760754 kubelet[2725]: E0707 06:05:52.760599 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:52.762470 containerd[1583]: time="2025-07-07T06:05:52.762349613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nqjx,Uid:7114ab1b-722c-425c-a3c2-6238348f50cb,Namespace:kube-system,Attempt:0,}" Jul 7 06:05:52.784958 containerd[1583]: time="2025-07-07T06:05:52.784895062Z" level=info msg="connecting to shim 95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b" address="unix:///run/containerd/s/c51e66fd91e7d41f9bf3e10aff70f853363a5fc62533944cfe411fb8a43040ab" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:05:52.817207 systemd[1]: Started cri-containerd-95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b.scope - libcontainer container 95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b. 
Jul 7 06:05:52.843726 containerd[1583]: time="2025-07-07T06:05:52.843678312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nqjx,Uid:7114ab1b-722c-425c-a3c2-6238348f50cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\"" Jul 7 06:05:52.844374 kubelet[2725]: E0707 06:05:52.844335 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:52.846402 containerd[1583]: time="2025-07-07T06:05:52.846362668Z" level=info msg="CreateContainer within sandbox \"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 06:05:52.865287 containerd[1583]: time="2025-07-07T06:05:52.865244544Z" level=info msg="Container 36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:52.883703 containerd[1583]: time="2025-07-07T06:05:52.883657987Z" level=info msg="CreateContainer within sandbox \"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298\"" Jul 7 06:05:52.884369 containerd[1583]: time="2025-07-07T06:05:52.884301955Z" level=info msg="StartContainer for \"36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298\"" Jul 7 06:05:52.885536 containerd[1583]: time="2025-07-07T06:05:52.885503735Z" level=info msg="connecting to shim 36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298" address="unix:///run/containerd/s/c51e66fd91e7d41f9bf3e10aff70f853363a5fc62533944cfe411fb8a43040ab" protocol=ttrpc version=3 Jul 7 06:05:52.915232 systemd[1]: Started cri-containerd-36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298.scope - libcontainer container 
36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298. Jul 7 06:05:52.946653 containerd[1583]: time="2025-07-07T06:05:52.946605324Z" level=info msg="StartContainer for \"36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298\" returns successfully" Jul 7 06:05:52.956264 systemd[1]: cri-containerd-36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298.scope: Deactivated successfully. Jul 7 06:05:52.958404 containerd[1583]: time="2025-07-07T06:05:52.958366609Z" level=info msg="received exit event container_id:\"36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298\" id:\"36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298\" pid:4595 exited_at:{seconds:1751868352 nanos:958032322}" Jul 7 06:05:52.965626 containerd[1583]: time="2025-07-07T06:05:52.965577682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298\" id:\"36e74d59adbc7c1a674390c169627827b2d26dbf5831fa649851884e76c6e298\" pid:4595 exited_at:{seconds:1751868352 nanos:958032322}" Jul 7 06:05:53.011802 kubelet[2725]: E0707 06:05:53.011660 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:53.014252 containerd[1583]: time="2025-07-07T06:05:53.014199428Z" level=info msg="CreateContainer within sandbox \"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 06:05:53.024545 containerd[1583]: time="2025-07-07T06:05:53.024479323Z" level=info msg="Container 3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:53.034028 containerd[1583]: time="2025-07-07T06:05:53.033975133Z" level=info msg="CreateContainer within sandbox \"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\" 
for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c\"" Jul 7 06:05:53.035402 containerd[1583]: time="2025-07-07T06:05:53.035140784Z" level=info msg="StartContainer for \"3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c\"" Jul 7 06:05:53.036582 containerd[1583]: time="2025-07-07T06:05:53.036150428Z" level=info msg="connecting to shim 3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c" address="unix:///run/containerd/s/c51e66fd91e7d41f9bf3e10aff70f853363a5fc62533944cfe411fb8a43040ab" protocol=ttrpc version=3 Jul 7 06:05:53.069266 systemd[1]: Started cri-containerd-3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c.scope - libcontainer container 3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c. Jul 7 06:05:53.103879 containerd[1583]: time="2025-07-07T06:05:53.103821729Z" level=info msg="StartContainer for \"3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c\" returns successfully" Jul 7 06:05:53.109732 systemd[1]: cri-containerd-3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c.scope: Deactivated successfully. 
Jul 7 06:05:53.110873 containerd[1583]: time="2025-07-07T06:05:53.110837516Z" level=info msg="received exit event container_id:\"3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c\" id:\"3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c\" pid:4638 exited_at:{seconds:1751868353 nanos:110659257}" Jul 7 06:05:53.111185 containerd[1583]: time="2025-07-07T06:05:53.111139882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c\" id:\"3bce744974deaa2ec5552849288a9b327cf52398338a035adba984e958513e8c\" pid:4638 exited_at:{seconds:1751868353 nanos:110659257}" Jul 7 06:05:53.607541 kubelet[2725]: I0707 06:05:53.607474 2725 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T06:05:53Z","lastTransitionTime":"2025-07-07T06:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 7 06:05:54.017496 kubelet[2725]: E0707 06:05:54.017298 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:54.019825 containerd[1583]: time="2025-07-07T06:05:54.019783386Z" level=info msg="CreateContainer within sandbox \"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 06:05:54.032538 containerd[1583]: time="2025-07-07T06:05:54.031700510Z" level=info msg="Container 917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:05:54.042346 containerd[1583]: time="2025-07-07T06:05:54.042285429Z" level=info msg="CreateContainer within sandbox 
\"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69\"" Jul 7 06:05:54.044105 containerd[1583]: time="2025-07-07T06:05:54.042787735Z" level=info msg="StartContainer for \"917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69\"" Jul 7 06:05:54.044105 containerd[1583]: time="2025-07-07T06:05:54.044056773Z" level=info msg="connecting to shim 917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69" address="unix:///run/containerd/s/c51e66fd91e7d41f9bf3e10aff70f853363a5fc62533944cfe411fb8a43040ab" protocol=ttrpc version=3 Jul 7 06:05:54.078259 systemd[1]: Started cri-containerd-917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69.scope - libcontainer container 917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69. Jul 7 06:05:54.128160 containerd[1583]: time="2025-07-07T06:05:54.128110748Z" level=info msg="StartContainer for \"917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69\" returns successfully" Jul 7 06:05:54.128894 systemd[1]: cri-containerd-917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69.scope: Deactivated successfully. 
Jul 7 06:05:54.130596 containerd[1583]: time="2025-07-07T06:05:54.130551036Z" level=info msg="received exit event container_id:\"917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69\" id:\"917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69\" pid:4683 exited_at:{seconds:1751868354 nanos:130342859}"
Jul 7 06:05:54.130736 containerd[1583]: time="2025-07-07T06:05:54.130665554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69\" id:\"917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69\" pid:4683 exited_at:{seconds:1751868354 nanos:130342859}"
Jul 7 06:05:54.156516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-917862f40580c3f78c74aa14014158e8fb83d7639ffc64d90a47d95569f82b69-rootfs.mount: Deactivated successfully.
Jul 7 06:05:55.022846 kubelet[2725]: E0707 06:05:55.022785 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:55.024636 containerd[1583]: time="2025-07-07T06:05:55.024562671Z" level=info msg="CreateContainer within sandbox \"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 06:05:55.038231 containerd[1583]: time="2025-07-07T06:05:55.038158334Z" level=info msg="Container 5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:05:55.052685 containerd[1583]: time="2025-07-07T06:05:55.052614205Z" level=info msg="CreateContainer within sandbox \"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2\""
Jul 7 06:05:55.053233 containerd[1583]: time="2025-07-07T06:05:55.053195271Z" level=info msg="StartContainer for \"5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2\""
Jul 7 06:05:55.054235 containerd[1583]: time="2025-07-07T06:05:55.054201938Z" level=info msg="connecting to shim 5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2" address="unix:///run/containerd/s/c51e66fd91e7d41f9bf3e10aff70f853363a5fc62533944cfe411fb8a43040ab" protocol=ttrpc version=3
Jul 7 06:05:55.081575 systemd[1]: Started cri-containerd-5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2.scope - libcontainer container 5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2.
Jul 7 06:05:55.115413 systemd[1]: cri-containerd-5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2.scope: Deactivated successfully.
Jul 7 06:05:55.116018 containerd[1583]: time="2025-07-07T06:05:55.115958638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2\" id:\"5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2\" pid:4722 exited_at:{seconds:1751868355 nanos:115645953}"
Jul 7 06:05:55.118157 containerd[1583]: time="2025-07-07T06:05:55.118059137Z" level=info msg="received exit event container_id:\"5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2\" id:\"5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2\" pid:4722 exited_at:{seconds:1751868355 nanos:115645953}"
Jul 7 06:05:55.127616 containerd[1583]: time="2025-07-07T06:05:55.127555516Z" level=info msg="StartContainer for \"5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2\" returns successfully"
Jul 7 06:05:55.143037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d74938f517090632abd2df6d9da39b8d51d4365c31d47a7891b2e6024093bc2-rootfs.mount: Deactivated successfully.
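The exited_at:{seconds:… nanos:…} fields in these TaskExit events are a protobuf-style Timestamp: Unix epoch seconds plus a nanosecond remainder. Converting one back to UTC reproduces the wall-clock instant containerd logged. A small stdlib-only sketch (the helper name is ours, not a containerd API):

```python
from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> str:
    """Render a protobuf-style {seconds, nanos} timestamp as an RFC 3339 UTC string."""
    ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
    # datetime only stores microseconds, so append the nanos field verbatim.
    return ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

# exited_at of container 917862f4... from the log above
print(exited_at_to_utc(1751868354, 130342859))
# 2025-07-07T06:05:54.130342859Z
```

The result lines up with the journald prefix of the matching entry (Jul 7 06:05:54.130596, the moment the exit event was received).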
Jul 7 06:05:56.029387 kubelet[2725]: E0707 06:05:56.029329 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:56.031638 containerd[1583]: time="2025-07-07T06:05:56.031566483Z" level=info msg="CreateContainer within sandbox \"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 06:05:56.058966 containerd[1583]: time="2025-07-07T06:05:56.058880305Z" level=info msg="Container 2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:05:56.068731 containerd[1583]: time="2025-07-07T06:05:56.068674355Z" level=info msg="CreateContainer within sandbox \"95f223806c6e738c3a3c1f0ed73c52a9a9c96c6b2ed1979aa796674c746f0b3b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713\""
Jul 7 06:05:56.069391 containerd[1583]: time="2025-07-07T06:05:56.069301498Z" level=info msg="StartContainer for \"2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713\""
Jul 7 06:05:56.070257 containerd[1583]: time="2025-07-07T06:05:56.070232309Z" level=info msg="connecting to shim 2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713" address="unix:///run/containerd/s/c51e66fd91e7d41f9bf3e10aff70f853363a5fc62533944cfe411fb8a43040ab" protocol=ttrpc version=3
Jul 7 06:05:56.092222 systemd[1]: Started cri-containerd-2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713.scope - libcontainer container 2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713.
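The containerd entries above are journald-prefixed logfmt: space-separated key=value pairs, with values containing spaces wrapped in double quotes and inner quotes backslash-escaped. When grepping such logs it helps to pull out the time/level/msg fields programmatically; a simplified extractor sketch (not a complete logfmt parser, and not containerd code):

```python
import re

# One key=value pair: either a quoted value (allowing \" escapes) or a bare token.
PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line: str) -> dict:
    """Extract key=value pairs from a containerd-style logfmt line."""
    fields = {}
    for key, val in PAIR.findall(line):
        if val.startswith('"') and val.endswith('"'):
            val = val[1:-1].replace('\\"', '"')  # strip quotes, unescape \"
        fields[key] = val
    return fields

entry = parse_logfmt('time="2025-07-07T06:05:56.092Z" level=info msg="StartContainer for \\"abc\\""')
print(entry["level"], entry["msg"])
```

Applied to the TaskExit lines, this keeps the escaped container IDs inside msg intact while still splitting out level and time cleanly.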
Jul 7 06:05:56.197707 containerd[1583]: time="2025-07-07T06:05:56.197652467Z" level=info msg="StartContainer for \"2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713\" returns successfully"
Jul 7 06:05:56.272730 containerd[1583]: time="2025-07-07T06:05:56.272678001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713\" id:\"ddccd868151e82a7bf692ecde9a86bce5766887d7fbfb540434ad05be4eb2056\" pid:4796 exited_at:{seconds:1751868356 nanos:272280003}"
Jul 7 06:05:56.612116 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 7 06:05:57.037632 kubelet[2725]: E0707 06:05:57.037575 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:58.762297 kubelet[2725]: E0707 06:05:58.762217 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:59.174341 containerd[1583]: time="2025-07-07T06:05:59.174255462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713\" id:\"71ba5454b586ab3204b220ce372443e23b432b00f4ae9ec4beca919d02c445da\" pid:5091 exit_status:1 exited_at:{seconds:1751868359 nanos:173462194}"
Jul 7 06:05:59.999450 systemd-networkd[1500]: lxc_health: Link UP
Jul 7 06:05:59.999766 systemd-networkd[1500]: lxc_health: Gained carrier
Jul 7 06:06:00.762995 kubelet[2725]: E0707 06:06:00.762942 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:06:00.782699 kubelet[2725]: I0707 06:06:00.782406 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8nqjx" podStartSLOduration=8.782347697 podStartE2EDuration="8.782347697s" podCreationTimestamp="2025-07-07 06:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:05:57.129362685 +0000 UTC m=+95.498707886" watchObservedRunningTime="2025-07-07 06:06:00.782347697 +0000 UTC m=+99.151692898"
Jul 7 06:06:01.026368 systemd-networkd[1500]: lxc_health: Gained IPv6LL
Jul 7 06:06:01.047311 kubelet[2725]: E0707 06:06:01.047261 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:06:01.280752 containerd[1583]: time="2025-07-07T06:06:01.280421458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713\" id:\"12914e0d2c2eb1e5c0d840c24fca66028b5273a406e444d21033438662b9fc23\" pid:5331 exited_at:{seconds:1751868361 nanos:279835725}"
Jul 7 06:06:02.049053 kubelet[2725]: E0707 06:06:02.048997 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:06:03.622100 containerd[1583]: time="2025-07-07T06:06:03.622021964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713\" id:\"e7b907939c37e3c484e8804a6bc5692e4b2d61185b270340d2caf6a0bd1a517e\" pid:5367 exited_at:{seconds:1751868363 nanos:621600484}"
Jul 7 06:06:05.736547 containerd[1583]: time="2025-07-07T06:06:05.736481793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713\" id:\"4b302cd359c783f4726841e46342a3d54a187058eda90160315bd7e94ded11d2\" pid:5391 exited_at:{seconds:1751868365 nanos:735940756}"
Jul 7 06:06:07.841930 containerd[1583]: time="2025-07-07T06:06:07.841862006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2228418d1cdf3874bdfa0aabfdbe90c16645a2b427bd38827d20125ad10c5713\" id:\"975e167cf082634c3d3432aa294640ff705e790053c9defd9d8f57eecf0ee94c\" pid:5416 exited_at:{seconds:1751868367 nanos:841257900}"
Jul 7 06:06:07.849368 sshd[4530]: Connection closed by 10.0.0.1 port 58598
Jul 7 06:06:07.850047 sshd-session[4527]: pam_unix(sshd:session): session closed for user core
Jul 7 06:06:07.854150 systemd[1]: sshd@27-10.0.0.55:22-10.0.0.1:58598.service: Deactivated successfully.
Jul 7 06:06:07.856347 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 06:06:07.857115 systemd-logind[1567]: Session 28 logged out. Waiting for processes to exit.
Jul 7 06:06:07.859015 systemd-logind[1567]: Removed session 28.
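The pod_startup_latency_tracker entry earlier in the log reports podStartSLOduration=8.782347697 for kube-system/cilium-8nqjx. That figure is watchObservedRunningTime minus podCreationTimestamp (06:06:00.782347697 − 06:05:52), with the zero-valued firstStartedPulling/lastFinishedPulling timestamps indicating the images needed no pull. The arithmetic can be checked directly (datetime only carries microseconds, so the check is to µs precision):

```python
from datetime import datetime, timezone

# podCreationTimestamp and watchObservedRunningTime from the kubelet entry;
# 782347697 ns truncated to 782347 µs for datetime.
created = datetime(2025, 7, 7, 6, 5, 52, tzinfo=timezone.utc)
observed = datetime(2025, 7, 7, 6, 6, 0, 782347, tzinfo=timezone.utc)

duration = (observed - created).total_seconds()
print(duration)  # 8.782347, matching podStartSLOduration=8.782347697 to µs precision
```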