Jul 12 00:09:43.904451 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jul 11 22:06:57 -00 2025
Jul 12 00:09:43.904485 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=403b91c9a87828c895f7b7bfd580cc2c7aac71fa87076ee6fb7434b6c136b8f2
Jul 12 00:09:43.904496 kernel: BIOS-provided physical RAM map:
Jul 12 00:09:43.904504 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 12 00:09:43.904511 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 12 00:09:43.904518 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 12 00:09:43.904525 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 12 00:09:43.904535 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 12 00:09:43.904545 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 12 00:09:43.904552 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 12 00:09:43.904559 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 12 00:09:43.904566 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 12 00:09:43.904572 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 12 00:09:43.904579 kernel: NX (Execute Disable) protection: active
Jul 12 00:09:43.904590 kernel: APIC: Static calls initialized
Jul 12 00:09:43.904597 kernel: SMBIOS 2.8 present.
Jul 12 00:09:43.904607 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 12 00:09:43.904615 kernel: DMI: Memory slots populated: 1/1
Jul 12 00:09:43.904622 kernel: Hypervisor detected: KVM
Jul 12 00:09:43.904629 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 12 00:09:43.904636 kernel: kvm-clock: using sched offset of 4365972987 cycles
Jul 12 00:09:43.904644 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 12 00:09:43.904652 kernel: tsc: Detected 2794.746 MHz processor
Jul 12 00:09:43.904682 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 12 00:09:43.904695 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 12 00:09:43.904705 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 12 00:09:43.904715 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 12 00:09:43.904725 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 12 00:09:43.904734 kernel: Using GB pages for direct mapping
Jul 12 00:09:43.904741 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:09:43.904750 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 12 00:09:43.904760 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:09:43.904787 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:09:43.904797 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:09:43.904806 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 12 00:09:43.904816 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:09:43.904826 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:09:43.904836 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:09:43.904848 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:09:43.904857 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 12 00:09:43.904876 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 12 00:09:43.904886 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 12 00:09:43.904896 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 12 00:09:43.904906 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 12 00:09:43.904915 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 12 00:09:43.904926 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 12 00:09:43.904939 kernel: No NUMA configuration found
Jul 12 00:09:43.904949 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 12 00:09:43.904960 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 12 00:09:43.905119 kernel: Zone ranges:
Jul 12 00:09:43.905135 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 12 00:09:43.905145 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 12 00:09:43.905156 kernel: Normal empty
Jul 12 00:09:43.905166 kernel: Device empty
Jul 12 00:09:43.905176 kernel: Movable zone start for each node
Jul 12 00:09:43.905186 kernel: Early memory node ranges
Jul 12 00:09:43.905202 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 12 00:09:43.905213 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 12 00:09:43.905223 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 12 00:09:43.905233 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 12 00:09:43.905243 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 12 00:09:43.905253 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 12 00:09:43.905263 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 12 00:09:43.905278 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 12 00:09:43.905288 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 12 00:09:43.905303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 12 00:09:43.905313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 12 00:09:43.905326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 12 00:09:43.905337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 12 00:09:43.905347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 12 00:09:43.905357 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 12 00:09:43.905367 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 12 00:09:43.905378 kernel: TSC deadline timer available
Jul 12 00:09:43.905388 kernel: CPU topo: Max. logical packages: 1
Jul 12 00:09:43.905402 kernel: CPU topo: Max. logical dies: 1
Jul 12 00:09:43.905412 kernel: CPU topo: Max. dies per package: 1
Jul 12 00:09:43.905422 kernel: CPU topo: Max. threads per core: 1
Jul 12 00:09:43.905432 kernel: CPU topo: Num. cores per package: 4
Jul 12 00:09:43.905443 kernel: CPU topo: Num. threads per package: 4
Jul 12 00:09:43.905453 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 12 00:09:43.905462 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 12 00:09:43.905472 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 12 00:09:43.905482 kernel: kvm-guest: setup PV sched yield
Jul 12 00:09:43.905493 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 12 00:09:43.905507 kernel: Booting paravirtualized kernel on KVM
Jul 12 00:09:43.905518 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 12 00:09:43.905528 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 12 00:09:43.905538 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 12 00:09:43.905548 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 12 00:09:43.905558 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 12 00:09:43.905568 kernel: kvm-guest: PV spinlocks enabled
Jul 12 00:09:43.905578 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 12 00:09:43.905590 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=403b91c9a87828c895f7b7bfd580cc2c7aac71fa87076ee6fb7434b6c136b8f2
Jul 12 00:09:43.905604 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:09:43.905614 kernel: random: crng init done
Jul 12 00:09:43.905624 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:09:43.905635 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:09:43.905645 kernel: Fallback order for Node 0: 0
Jul 12 00:09:43.905655 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 12 00:09:43.905665 kernel: Policy zone: DMA32
Jul 12 00:09:43.905675 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:09:43.905689 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:09:43.905699 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 12 00:09:43.905710 kernel: ftrace: allocated 157 pages with 5 groups
Jul 12 00:09:43.905720 kernel: Dynamic Preempt: voluntary
Jul 12 00:09:43.905730 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:09:43.905741 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:09:43.905752 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:09:43.905763 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:09:43.905790 kernel: Rude variant of Tasks RCU enabled.
Jul 12 00:09:43.905806 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:09:43.905816 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:09:43.905826 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:09:43.905837 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:09:43.905847 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:09:43.905858 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:09:43.905868 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 12 00:09:43.905879 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:09:43.905903 kernel: Console: colour VGA+ 80x25
Jul 12 00:09:43.905914 kernel: printk: legacy console [ttyS0] enabled
Jul 12 00:09:43.905925 kernel: ACPI: Core revision 20240827
Jul 12 00:09:43.905939 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 12 00:09:43.905950 kernel: APIC: Switch to symmetric I/O mode setup
Jul 12 00:09:43.905961 kernel: x2apic enabled
Jul 12 00:09:43.905993 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 12 00:09:43.906004 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 12 00:09:43.906015 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 12 00:09:43.906031 kernel: kvm-guest: setup PV IPIs
Jul 12 00:09:43.906042 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 12 00:09:43.906053 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 12 00:09:43.906064 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 12 00:09:43.906075 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 12 00:09:43.906086 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 12 00:09:43.906111 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 12 00:09:43.906122 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 12 00:09:43.906137 kernel: Spectre V2 : Mitigation: Retpolines
Jul 12 00:09:43.906148 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 12 00:09:43.906158 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 12 00:09:43.906169 kernel: RETBleed: Mitigation: untrained return thunk
Jul 12 00:09:43.906180 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 12 00:09:43.906191 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 12 00:09:43.906201 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 12 00:09:43.906213 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 12 00:09:43.906227 kernel: x86/bugs: return thunk changed
Jul 12 00:09:43.906237 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 12 00:09:43.906248 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 12 00:09:43.906259 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 12 00:09:43.906269 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 12 00:09:43.906280 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 12 00:09:43.906290 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 12 00:09:43.906301 kernel: Freeing SMP alternatives memory: 32K
Jul 12 00:09:43.906309 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:09:43.906320 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 12 00:09:43.906328 kernel: landlock: Up and running.
Jul 12 00:09:43.906336 kernel: SELinux: Initializing.
Jul 12 00:09:43.906344 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:09:43.906356 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:09:43.906364 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 12 00:09:43.906372 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 12 00:09:43.906380 kernel: ... version: 0
Jul 12 00:09:43.906387 kernel: ... bit width: 48
Jul 12 00:09:43.906400 kernel: ... generic registers: 6
Jul 12 00:09:43.906410 kernel: ... value mask: 0000ffffffffffff
Jul 12 00:09:43.906421 kernel: ... max period: 00007fffffffffff
Jul 12 00:09:43.906431 kernel: ... fixed-purpose events: 0
Jul 12 00:09:43.906441 kernel: ... event mask: 000000000000003f
Jul 12 00:09:43.906451 kernel: signal: max sigframe size: 1776
Jul 12 00:09:43.906462 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:09:43.906473 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:09:43.906484 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 12 00:09:43.906494 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:09:43.906509 kernel: smpboot: x86: Booting SMP configuration:
Jul 12 00:09:43.906520 kernel: .... node #0, CPUs: #1 #2 #3
Jul 12 00:09:43.906531 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:09:43.906542 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 12 00:09:43.906552 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 136904K reserved, 0K cma-reserved)
Jul 12 00:09:43.906563 kernel: devtmpfs: initialized
Jul 12 00:09:43.906574 kernel: x86/mm: Memory block size: 128MB
Jul 12 00:09:43.906585 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:09:43.906600 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:09:43.906611 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:09:43.906622 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:09:43.906632 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:09:43.906643 kernel: audit: type=2000 audit(1752278980.627:1): state=initialized audit_enabled=0 res=1
Jul 12 00:09:43.906662 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:09:43.906685 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 12 00:09:43.906706 kernel: cpuidle: using governor menu
Jul 12 00:09:43.906717 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:09:43.906732 kernel: dca service started, version 1.12.1
Jul 12 00:09:43.906743 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 12 00:09:43.906753 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 12 00:09:43.906774 kernel: PCI: Using configuration type 1 for base access
Jul 12 00:09:43.906786 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 12 00:09:43.906804 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:09:43.906815 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:09:43.906827 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:09:43.906838 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:09:43.906855 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:09:43.906865 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:09:43.906876 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:09:43.906887 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:09:43.906898 kernel: ACPI: Interpreter enabled
Jul 12 00:09:43.906909 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 12 00:09:43.906920 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 12 00:09:43.906931 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 12 00:09:43.906942 kernel: PCI: Using E820 reservations for host bridge windows
Jul 12 00:09:43.906957 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 12 00:09:43.907046 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:09:43.907373 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:09:43.907559 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 12 00:09:43.907775 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 12 00:09:43.907795 kernel: PCI host bridge to bus 0000:00
Jul 12 00:09:43.908008 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 12 00:09:43.908159 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 12 00:09:43.908327 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 12 00:09:43.908497 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 12 00:09:43.908623 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 12 00:09:43.908737 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 12 00:09:43.908862 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:09:43.909057 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 12 00:09:43.909242 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 12 00:09:43.909388 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 12 00:09:43.909591 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 12 00:09:43.909743 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 12 00:09:43.909887 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 12 00:09:43.910075 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 12 00:09:43.910237 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 12 00:09:43.910367 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 12 00:09:43.910521 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 12 00:09:43.910670 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 12 00:09:43.910807 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 12 00:09:43.910932 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 12 00:09:43.911148 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 12 00:09:43.911314 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 12 00:09:43.911468 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 12 00:09:43.911599 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 12 00:09:43.911722 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 12 00:09:43.911863 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 12 00:09:43.912029 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 12 00:09:43.912180 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 12 00:09:43.912339 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 12 00:09:43.912492 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 12 00:09:43.912622 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 12 00:09:43.912823 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 12 00:09:43.913040 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 12 00:09:43.913059 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 12 00:09:43.913070 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 12 00:09:43.913087 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 12 00:09:43.913098 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 12 00:09:43.913109 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 12 00:09:43.913120 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 12 00:09:43.913130 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 12 00:09:43.913141 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 12 00:09:43.913152 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 12 00:09:43.913162 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 12 00:09:43.913173 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 12 00:09:43.913187 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 12 00:09:43.913198 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 12 00:09:43.913209 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 12 00:09:43.913220 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 12 00:09:43.913231 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 12 00:09:43.913241 kernel: iommu: Default domain type: Translated
Jul 12 00:09:43.913252 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 12 00:09:43.913263 kernel: PCI: Using ACPI for IRQ routing
Jul 12 00:09:43.913274 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 12 00:09:43.913288 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 12 00:09:43.913299 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 12 00:09:43.913463 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 12 00:09:43.913624 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 12 00:09:43.913797 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 12 00:09:43.913814 kernel: vgaarb: loaded
Jul 12 00:09:43.913825 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 12 00:09:43.913838 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 12 00:09:43.913856 kernel: clocksource: Switched to clocksource kvm-clock
Jul 12 00:09:43.913868 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:09:43.913879 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:09:43.913890 kernel: pnp: PnP ACPI init
Jul 12 00:09:43.914114 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 12 00:09:43.914135 kernel: pnp: PnP ACPI: found 6 devices
Jul 12 00:09:43.914146 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 12 00:09:43.914157 kernel: NET: Registered PF_INET protocol family
Jul 12 00:09:43.914173 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:09:43.914184 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:09:43.914195 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:09:43.914206 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:09:43.914217 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:09:43.914229 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:09:43.914239 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:09:43.914250 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:09:43.914261 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:09:43.914275 kernel: NET: Registered PF_XDP protocol family
Jul 12 00:09:43.914431 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 12 00:09:43.914579 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 12 00:09:43.914726 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 12 00:09:43.914886 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 12 00:09:43.915052 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 12 00:09:43.915203 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 12 00:09:43.915220 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:09:43.915237 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 12 00:09:43.915248 kernel: Initialise system trusted keyrings
Jul 12 00:09:43.915259 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:09:43.915270 kernel: Key type asymmetric registered
Jul 12 00:09:43.915281 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:09:43.915292 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:09:43.915303 kernel: io scheduler mq-deadline registered
Jul 12 00:09:43.915314 kernel: io scheduler kyber registered
Jul 12 00:09:43.915325 kernel: io scheduler bfq registered
Jul 12 00:09:43.915340 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 12 00:09:43.915352 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 12 00:09:43.915363 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 12 00:09:43.915374 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 12 00:09:43.915385 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:09:43.915396 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 12 00:09:43.915407 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 12 00:09:43.915417 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 12 00:09:43.915428 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 12 00:09:43.915626 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 12 00:09:43.915645 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 12 00:09:43.915806 kernel: rtc_cmos 00:04: registered as rtc0
Jul 12 00:09:43.915958 kernel: rtc_cmos 00:04: setting system clock to 2025-07-12T00:09:43 UTC (1752278983)
Jul 12 00:09:43.916178 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 12 00:09:43.916197 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 12 00:09:43.916209 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:09:43.916219 kernel: Segment Routing with IPv6
Jul 12 00:09:43.916236 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:09:43.916248 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:09:43.916258 kernel: Key type dns_resolver registered
Jul 12 00:09:43.916269 kernel: IPI shorthand broadcast: enabled
Jul 12 00:09:43.916280 kernel: sched_clock: Marking stable (3287002150, 111837207)->(3417399646, -18560289)
Jul 12 00:09:43.916291 kernel: registered taskstats version 1
Jul 12 00:09:43.916302 kernel: Loading compiled-in X.509 certificates
Jul 12 00:09:43.916313 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f8f9174ae27e6261b0ae25e5f0210210a376c8b8'
Jul 12 00:09:43.916324 kernel: Demotion targets for Node 0: null
Jul 12 00:09:43.916339 kernel: Key type .fscrypt registered
Jul 12 00:09:43.916350 kernel: Key type fscrypt-provisioning registered
Jul 12 00:09:43.916361 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:09:43.916372 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:09:43.916383 kernel: ima: No architecture policies found
Jul 12 00:09:43.916394 kernel: clk: Disabling unused clocks
Jul 12 00:09:43.916405 kernel: Warning: unable to open an initial console.
Jul 12 00:09:43.916416 kernel: Freeing unused kernel image (initmem) memory: 54420K
Jul 12 00:09:43.916428 kernel: Write protecting the kernel read-only data: 24576k
Jul 12 00:09:43.916442 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 12 00:09:43.916453 kernel: Run /init as init process
Jul 12 00:09:43.916464 kernel: with arguments:
Jul 12 00:09:43.916475 kernel: /init
Jul 12 00:09:43.916485 kernel: with environment:
Jul 12 00:09:43.916496 kernel: HOME=/
Jul 12 00:09:43.916507 kernel: TERM=linux
Jul 12 00:09:43.916518 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:09:43.916530 systemd[1]: Successfully made /usr/ read-only.
Jul 12 00:09:43.916549 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 00:09:43.916578 systemd[1]: Detected virtualization kvm.
Jul 12 00:09:43.916590 systemd[1]: Detected architecture x86-64.
Jul 12 00:09:43.916602 systemd[1]: Running in initrd.
Jul 12 00:09:43.916613 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:09:43.916629 systemd[1]: Hostname set to .
Jul 12 00:09:43.916641 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:09:43.916653 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:09:43.916665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:09:43.916677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:09:43.916691 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:09:43.916703 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:09:43.916715 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:09:43.916731 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:09:43.916745 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:09:43.916758 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:09:43.916781 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:09:43.916794 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:09:43.916806 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:09:43.916818 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:09:43.916834 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:09:43.916846 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:09:43.916859 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:09:43.916871 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:09:43.916883 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:09:43.916895 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 12 00:09:43.916907 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:09:43.916919 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:09:43.916931 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:09:43.916946 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:09:43.916958 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:09:43.917007 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:09:43.917020 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:09:43.917034 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 12 00:09:43.917053 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:09:43.917065 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:09:43.917077 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:09:43.917089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:09:43.917101 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:09:43.917149 systemd-journald[220]: Collecting audit messages is disabled.
Jul 12 00:09:43.917179 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:09:43.917192 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:09:43.917208 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:09:43.917221 systemd-journald[220]: Journal started
Jul 12 00:09:43.917246 systemd-journald[220]: Runtime Journal (/run/log/journal/46b18de90ab34c4f9562d9d494c6838e) is 6M, max 48.6M, 42.5M free.
Jul 12 00:09:43.904730 systemd-modules-load[222]: Inserted module 'overlay'
Jul 12 00:09:43.962434 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:09:43.962467 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:09:43.962497 kernel: Bridge firewalling registered
Jul 12 00:09:43.934773 systemd-modules-load[222]: Inserted module 'br_netfilter'
Jul 12 00:09:43.965222 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:09:43.967524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:09:43.976376 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:09:43.983587 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:09:43.987256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:09:43.988369 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:09:43.998705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:09:44.010335 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:09:44.011185 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:09:44.016391 systemd-tmpfiles[244]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 12 00:09:44.022111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:09:44.024296 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:09:44.024642 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:09:44.027609 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:09:44.075617 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=403b91c9a87828c895f7b7bfd580cc2c7aac71fa87076ee6fb7434b6c136b8f2
Jul 12 00:09:44.094918 systemd-resolved[261]: Positive Trust Anchors:
Jul 12 00:09:44.094935 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:09:44.094990 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:09:44.098174 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jul 12 00:09:44.099603 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:09:44.105406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:09:44.218025 kernel: SCSI subsystem initialized
Jul 12 00:09:44.230020 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:09:44.242041 kernel: iscsi: registered transport (tcp)
Jul 12 00:09:44.265075 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:09:44.265185 kernel: QLogic iSCSI HBA Driver
Jul 12 00:09:44.289303 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:09:44.316768 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:09:44.317386 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:09:44.410078 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:09:44.412571 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:09:44.486049 kernel: raid6: avx2x4 gen() 26764 MB/s
Jul 12 00:09:44.503022 kernel: raid6: avx2x2 gen() 29698 MB/s
Jul 12 00:09:44.520142 kernel: raid6: avx2x1 gen() 24411 MB/s
Jul 12 00:09:44.520184 kernel: raid6: using algorithm avx2x2 gen() 29698 MB/s
Jul 12 00:09:44.538333 kernel: raid6: .... xor() 18919 MB/s, rmw enabled
Jul 12 00:09:44.538419 kernel: raid6: using avx2x2 recovery algorithm
Jul 12 00:09:44.569040 kernel: xor: automatically using best checksumming function avx
Jul 12 00:09:44.775040 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:09:44.785280 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:09:44.787313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:09:44.826060 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 12 00:09:44.832488 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:09:44.833869 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:09:44.863218 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Jul 12 00:09:44.896824 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:09:44.899448 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:09:44.984679 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:09:44.990075 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:09:45.022011 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 12 00:09:45.025049 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 12 00:09:45.040227 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:09:45.040326 kernel: GPT:9289727 != 19775487
Jul 12 00:09:45.040391 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:09:45.040430 kernel: GPT:9289727 != 19775487
Jul 12 00:09:45.040467 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:09:45.040494 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:09:45.059023 kernel: cryptd: max_cpu_qlen set to 1000
Jul 12 00:09:45.059104 kernel: libata version 3.00 loaded.
Jul 12 00:09:45.064008 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 12 00:09:45.073561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:09:45.073660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:09:45.090087 kernel: AES CTR mode by8 optimization enabled
Jul 12 00:09:45.094327 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:09:45.103239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:09:45.108907 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 12 00:09:45.117540 kernel: ahci 0000:00:1f.2: version 3.0
Jul 12 00:09:45.117872 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 12 00:09:45.126225 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 12 00:09:45.126517 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 12 00:09:45.126745 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 12 00:09:45.133656 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 12 00:09:45.134412 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 12 00:09:45.142010 kernel: scsi host0: ahci
Jul 12 00:09:45.143043 kernel: scsi host1: ahci
Jul 12 00:09:45.144032 kernel: scsi host2: ahci
Jul 12 00:09:45.147008 kernel: scsi host3: ahci
Jul 12 00:09:45.151530 kernel: scsi host4: ahci
Jul 12 00:09:45.151825 kernel: scsi host5: ahci
Jul 12 00:09:45.152033 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 12 00:09:45.152050 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 12 00:09:45.152063 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 12 00:09:45.153127 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 12 00:09:45.153151 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 12 00:09:45.153162 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 12 00:09:45.154203 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 12 00:09:45.204073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:09:45.216638 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 12 00:09:45.226834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:09:45.229457 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:09:45.466303 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 12 00:09:45.466389 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 12 00:09:45.466402 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 12 00:09:45.466428 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 12 00:09:45.468050 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 12 00:09:45.468148 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 12 00:09:45.469121 kernel: ata3.00: applying bridge limits
Jul 12 00:09:45.470011 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 12 00:09:45.470037 kernel: ata3.00: configured for UDMA/100
Jul 12 00:09:45.471020 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 12 00:09:45.512402 disk-uuid[635]: Primary Header is updated.
Jul 12 00:09:45.512402 disk-uuid[635]: Secondary Entries is updated.
Jul 12 00:09:45.512402 disk-uuid[635]: Secondary Header is updated.
Jul 12 00:09:45.518021 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:09:45.523007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:09:45.531019 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 12 00:09:45.531351 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 12 00:09:45.549028 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 12 00:09:45.958658 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:09:45.960327 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:09:45.962674 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:09:45.964312 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:09:45.965807 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:09:45.995784 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:09:46.524014 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:09:46.524386 disk-uuid[636]: The operation has completed successfully.
Jul 12 00:09:46.559717 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:09:46.559874 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:09:46.603631 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:09:46.628014 sh[665]: Success
Jul 12 00:09:46.649048 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:09:46.649117 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:09:46.650370 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 12 00:09:46.682008 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 12 00:09:46.718219 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:09:46.720496 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:09:46.743035 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:09:46.750831 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 12 00:09:46.750872 kernel: BTRFS: device fsid bb55a55d-83fd-4659-93e1-1a7697cb01ff devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (677)
Jul 12 00:09:46.753103 kernel: BTRFS info (device dm-0): first mount of filesystem bb55a55d-83fd-4659-93e1-1a7697cb01ff
Jul 12 00:09:46.753134 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 12 00:09:46.753149 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 12 00:09:46.759414 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:09:46.763344 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 12 00:09:46.766191 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:09:46.769640 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:09:46.774159 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:09:46.811011 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709)
Jul 12 00:09:46.811070 kernel: BTRFS info (device vda6): first mount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:09:46.812020 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 00:09:46.813401 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 00:09:46.821019 kernel: BTRFS info (device vda6): last unmount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:09:46.822290 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:09:46.827886 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:09:46.933165 ignition[751]: Ignition 2.21.0
Jul 12 00:09:46.933192 ignition[751]: Stage: fetch-offline
Jul 12 00:09:46.933226 ignition[751]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:09:46.933236 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:09:46.933339 ignition[751]: parsed url from cmdline: ""
Jul 12 00:09:46.933343 ignition[751]: no config URL provided
Jul 12 00:09:46.933348 ignition[751]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:09:46.933356 ignition[751]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:09:46.933379 ignition[751]: op(1): [started] loading QEMU firmware config module
Jul 12 00:09:46.933384 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 12 00:09:46.943096 ignition[751]: op(1): [finished] loading QEMU firmware config module
Jul 12 00:09:46.964238 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:09:46.969215 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:09:46.985534 ignition[751]: parsing config with SHA512: 98bc5d1a13204db75f3b10cdaaa261fdb1ceebc19161ff55cffeebef0ab5a2b1d46a8b8304d64b8c651078d775bfef08878a86b0dae37fda3fbee292fc63d36a
Jul 12 00:09:46.991964 unknown[751]: fetched base config from "system"
Jul 12 00:09:46.992176 unknown[751]: fetched user config from "qemu"
Jul 12 00:09:46.992720 ignition[751]: fetch-offline: fetch-offline passed
Jul 12 00:09:46.992799 ignition[751]: Ignition finished successfully
Jul 12 00:09:46.996270 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:09:47.027947 systemd-networkd[854]: lo: Link UP
Jul 12 00:09:47.027984 systemd-networkd[854]: lo: Gained carrier
Jul 12 00:09:47.030753 systemd-networkd[854]: Enumeration completed
Jul 12 00:09:47.031334 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:09:47.031339 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:09:47.031612 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:09:47.032366 systemd-networkd[854]: eth0: Link UP
Jul 12 00:09:47.032371 systemd-networkd[854]: eth0: Gained carrier
Jul 12 00:09:47.032381 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:09:47.036999 systemd[1]: Reached target network.target - Network.
Jul 12 00:09:47.043137 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 12 00:09:47.045907 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:09:47.053029 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:09:47.376753 systemd-resolved[261]: Detected conflict on linux IN A 10.0.0.57
Jul 12 00:09:47.376773 systemd-resolved[261]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Jul 12 00:09:47.380144 ignition[859]: Ignition 2.21.0
Jul 12 00:09:47.380162 ignition[859]: Stage: kargs
Jul 12 00:09:47.380333 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:09:47.380352 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:09:47.382048 ignition[859]: kargs: kargs passed
Jul 12 00:09:47.382128 ignition[859]: Ignition finished successfully
Jul 12 00:09:47.388058 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:09:47.390679 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:09:47.445846 ignition[868]: Ignition 2.21.0
Jul 12 00:09:47.445869 ignition[868]: Stage: disks
Jul 12 00:09:47.446137 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:09:47.446151 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:09:47.447071 ignition[868]: disks: disks passed
Jul 12 00:09:47.450061 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:09:47.447128 ignition[868]: Ignition finished successfully
Jul 12 00:09:47.451800 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:09:47.453594 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:09:47.455537 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:09:47.457683 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:09:47.458757 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:09:47.462177 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:09:47.501293 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 12 00:09:47.509503 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:09:47.512829 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:09:47.642015 kernel: EXT4-fs (vda9): mounted filesystem 0ad89691-b65b-416c-92a9-d1ab167398bb r/w with ordered data mode. Quota mode: none.
Jul 12 00:09:47.643355 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:09:47.644111 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:09:47.647326 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:09:47.649968 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:09:47.650494 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 00:09:47.650546 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:09:47.650576 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:09:47.681270 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:09:47.685241 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:09:47.691203 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886)
Jul 12 00:09:47.691231 kernel: BTRFS info (device vda6): first mount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:09:47.691246 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 00:09:47.691272 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 00:09:47.694630 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:09:47.735039 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:09:47.739556 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:09:47.745514 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:09:47.749581 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:09:47.855853 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:09:47.858598 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:09:47.860371 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:09:47.889333 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:09:47.890932 kernel: BTRFS info (device vda6): last unmount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:09:47.905435 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:09:47.967528 ignition[1001]: INFO : Ignition 2.21.0
Jul 12 00:09:47.967528 ignition[1001]: INFO : Stage: mount
Jul 12 00:09:47.970350 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:09:47.970350 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:09:47.972619 ignition[1001]: INFO : mount: mount passed
Jul 12 00:09:47.972619 ignition[1001]: INFO : Ignition finished successfully
Jul 12 00:09:47.973745 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:09:47.976848 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:09:48.013802 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:09:48.039752 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Jul 12 00:09:48.039797 kernel: BTRFS info (device vda6): first mount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:09:48.039809 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 00:09:48.041254 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 00:09:48.044950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:09:48.097497 ignition[1031]: INFO : Ignition 2.21.0
Jul 12 00:09:48.097497 ignition[1031]: INFO : Stage: files
Jul 12 00:09:48.097497 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:09:48.097497 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:09:48.101702 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:09:48.103399 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:09:48.103399 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:09:48.108741 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:09:48.110543 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:09:48.112668 unknown[1031]: wrote ssh authorized keys file for user: core
Jul 12 00:09:48.114049 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:09:48.115547 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 12 00:09:48.117531 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 12 00:09:48.164559 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:09:48.290206 systemd-networkd[854]: eth0: Gained IPv6LL
Jul 12 00:09:48.570464 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 12 00:09:48.570464 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:09:48.574265 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 12 00:09:48.924612 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 12 00:09:49.515514 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:09:49.517763 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:09:49.517763 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:09:49.517763 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:09:49.517763 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:09:49.517763 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:09:49.517763 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:09:49.517763 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:09:49.535195 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:09:49.617485 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:09:49.619596 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:09:49.621559 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 12 00:09:49.668378 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 12 00:09:49.668378 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 12 00:09:49.673766 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 12 00:09:50.064137 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 12 00:09:50.651176 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 12 00:09:50.651176 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 12 00:09:50.893173 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:09:51.256750 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:09:51.256750 ignition[1031]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 12 00:09:51.256750 ignition[1031]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 12 00:09:51.263244 ignition[1031]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:09:51.263244 ignition[1031]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:09:51.263244 ignition[1031]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 12 00:09:51.263244 ignition[1031]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:09:51.284264 ignition[1031]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:09:51.290734 ignition[1031]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:09:51.292898 ignition[1031]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:09:51.292898 ignition[1031]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:09:51.292898 ignition[1031]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:09:51.292898 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:09:51.292898 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:09:51.292898 ignition[1031]: INFO : files: files passed
Jul 12 00:09:51.292898 ignition[1031]: INFO : Ignition finished successfully
Jul 12 00:09:51.294110 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:09:51.298571 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:09:51.307325 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:09:51.327518 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:09:51.327670 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:09:51.331631 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 00:09:51.334345 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:09:51.336612 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:09:51.336612 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:09:51.337855 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:09:51.339109 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:09:51.345011 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:09:51.440538 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:09:51.440710 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:09:51.443165 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:09:51.444190 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:09:51.446131 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:09:51.449440 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:09:51.493471 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:09:51.496429 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:09:51.529575 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:09:51.529757 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:09:51.533120 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:09:51.534406 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:09:51.534560 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:09:51.538688 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:09:51.540776 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:09:51.542636 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:09:51.544569 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:09:51.545695 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:09:51.546016 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 12 00:09:51.546487 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:09:51.546814 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:09:51.547357 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:09:51.547682 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:09:51.548066 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:09:51.548547 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:09:51.548741 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:09:51.562784 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:09:51.563281 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:09:51.563604 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:09:51.568160 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:09:51.568447 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:09:51.568573 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:09:51.574047 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:09:51.574324 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:09:51.576452 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:09:51.578329 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:09:51.583116 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:09:51.583344 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:09:51.583741 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:09:51.584073 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:09:51.584174 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:09:51.584569 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:09:51.584658 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:09:51.591170 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:09:51.591327 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:09:51.593121 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:09:51.593262 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:09:51.596107 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:09:51.597799 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:09:51.600790 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:09:51.600960 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:09:51.602436 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:09:51.602546 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:09:51.613330 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:09:51.613455 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:09:51.641406 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:09:51.643139 ignition[1085]: INFO : Ignition 2.21.0
Jul 12 00:09:51.643139 ignition[1085]: INFO : Stage: umount
Jul 12 00:09:51.644854 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:09:51.644854 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:09:51.645911 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:09:51.646164 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:09:51.650356 ignition[1085]: INFO : umount: umount passed
Jul 12 00:09:51.651205 ignition[1085]: INFO : Ignition finished successfully
Jul 12 00:09:51.654899 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:09:51.655052 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:09:51.657910 systemd[1]: Stopped target network.target - Network.
Jul 12 00:09:51.658010 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:09:51.658065 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:09:51.658502 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:09:51.658560 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:09:51.658874 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:09:51.658929 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:09:51.659383 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:09:51.659489 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:09:51.659854 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:09:51.659905 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:09:51.660603 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:09:51.661427 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:09:51.678956 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:09:51.679140 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:09:51.683822 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 12 00:09:51.684240 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:09:51.684297 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:09:51.688184 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 12 00:09:51.688439 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:09:51.688584 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:09:51.691050 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 12 00:09:51.691636 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 12 00:09:51.694261 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:09:51.694366 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:09:51.698599 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:09:51.700359 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:09:51.700427 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:09:51.701435 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:09:51.701505 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:09:51.705606 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:09:51.705668 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:09:51.706279 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:09:51.707813 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 12 00:09:51.731616 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:09:51.731796 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:09:51.734051 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:09:51.734262 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:09:51.736519 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:09:51.736642 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:09:51.737806 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:09:51.737863 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:09:51.740114 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:09:51.740183 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:09:51.744107 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:09:51.744178 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:09:51.746457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:09:51.746524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:09:51.751957 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:09:51.754298 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 12 00:09:51.754366 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:09:51.757718 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:09:51.757789 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:09:51.761240 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 12 00:09:51.761298 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:09:51.764702 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:09:51.764758 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:09:51.766375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:09:51.766429 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:09:51.782469 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:09:51.782659 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:09:51.783877 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:09:51.787923 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:09:51.811650 systemd[1]: Switching root.
Jul 12 00:09:51.853103 systemd-journald[220]: Journal stopped
Jul 12 00:09:53.122008 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:09:53.122091 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:09:53.122116 kernel: SELinux: policy capability open_perms=1
Jul 12 00:09:53.122132 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:09:53.122159 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:09:53.122178 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:09:53.122204 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:09:53.122219 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:09:53.122233 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:09:53.122248 kernel: SELinux: policy capability userspace_initial_context=0
Jul 12 00:09:53.122263 kernel: audit: type=1403 audit(1752278992.233:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:09:53.122279 systemd[1]: Successfully loaded SELinux policy in 51.944ms.
Jul 12 00:09:53.122309 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.714ms.
Jul 12 00:09:53.122330 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 00:09:53.122346 systemd[1]: Detected virtualization kvm.
Jul 12 00:09:53.122362 systemd[1]: Detected architecture x86-64.
Jul 12 00:09:53.122378 systemd[1]: Detected first boot.
Jul 12 00:09:53.122395 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:09:53.122412 zram_generator::config[1130]: No configuration found.
Jul 12 00:09:53.122435 kernel: Guest personality initialized and is inactive
Jul 12 00:09:53.122450 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 12 00:09:53.122465 kernel: Initialized host personality
Jul 12 00:09:53.122483 kernel: NET: Registered PF_VSOCK protocol family
Jul 12 00:09:53.122498 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:09:53.122525 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 12 00:09:53.122551 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 00:09:53.122567 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 00:09:53.122583 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:09:53.122600 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:09:53.122616 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:09:53.122636 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 00:09:53.122651 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 00:09:53.122668 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 00:09:53.122684 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 00:09:53.122700 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 00:09:53.122716 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 00:09:53.122732 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:09:53.122748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:09:53.122764 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 00:09:53.122782 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 00:09:53.122800 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 00:09:53.122820 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:09:53.122837 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 12 00:09:53.122853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:09:53.122868 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:09:53.122884 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 00:09:53.122908 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 12 00:09:53.122927 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:09:53.122943 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 00:09:53.122959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:09:53.123019 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:09:53.123037 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:09:53.123053 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:09:53.123068 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 00:09:53.123084 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 00:09:53.123100 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 12 00:09:53.123119 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:09:53.123135 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:09:53.123151 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:09:53.123167 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 00:09:53.123182 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 00:09:53.123199 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 00:09:53.123215 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 00:09:53.123230 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:09:53.123247 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 00:09:53.123267 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 00:09:53.123283 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 00:09:53.123300 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:09:53.123316 systemd[1]: Reached target machines.target - Containers.
Jul 12 00:09:53.123345 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 00:09:53.123375 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:09:53.123392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:09:53.123408 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 00:09:53.123428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:09:53.123444 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:09:53.123460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:09:53.123476 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 00:09:53.123492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:09:53.123526 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:09:53.123543 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 00:09:53.123558 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 12 00:09:53.123574 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 00:09:53.123593 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 00:09:53.123610 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 00:09:53.123626 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:09:53.123642 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:09:53.123657 kernel: fuse: init (API version 7.41)
Jul 12 00:09:53.123673 kernel: loop: module loaded
Jul 12 00:09:53.123720 systemd-journald[1194]: Collecting audit messages is disabled.
Jul 12 00:09:53.123752 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:09:53.123768 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 00:09:53.123784 systemd-journald[1194]: Journal started
Jul 12 00:09:53.123813 systemd-journald[1194]: Runtime Journal (/run/log/journal/46b18de90ab34c4f9562d9d494c6838e) is 6M, max 48.6M, 42.5M free.
Jul 12 00:09:52.819675 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:09:52.846146 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 12 00:09:52.846668 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 12 00:09:53.129990 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 12 00:09:53.143621 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:09:53.143698 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 12 00:09:53.143715 systemd[1]: Stopped verity-setup.service.
Jul 12 00:09:53.147998 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:09:53.152004 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:09:53.153897 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 00:09:53.155204 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 00:09:53.156426 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 00:09:53.157574 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 00:09:53.159011 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 00:09:53.160428 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 00:09:53.164130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:09:53.168005 kernel: ACPI: bus type drm_connector registered
Jul 12 00:09:53.169141 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 00:09:53.169405 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 00:09:53.170953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:09:53.171194 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:09:53.172775 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:09:53.173071 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:09:53.174556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:09:53.174800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:09:53.176561 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 00:09:53.176835 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 00:09:53.178299 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:09:53.178523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:09:53.179895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:09:53.181329 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 00:09:53.195309 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 00:09:53.197589 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 00:09:53.198752 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:09:53.198791 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:09:53.201200 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 12 00:09:53.209084 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 00:09:53.216719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:09:53.221110 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 00:09:53.224216 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 00:09:53.225552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:09:53.227016 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 00:09:53.228657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:09:53.238223 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:09:53.244127 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 00:09:53.248212 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:09:53.252352 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:09:53.256192 systemd-journald[1194]: Time spent on flushing to /var/log/journal/46b18de90ab34c4f9562d9d494c6838e is 25.378ms for 979 entries.
Jul 12 00:09:53.256192 systemd-journald[1194]: System Journal (/var/log/journal/46b18de90ab34c4f9562d9d494c6838e) is 8M, max 195.6M, 187.6M free.
Jul 12 00:09:53.580441 systemd-journald[1194]: Received client request to flush runtime journal.
Jul 12 00:09:53.580531 kernel: loop0: detected capacity change from 0 to 113872
Jul 12 00:09:53.580577 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:09:53.580605 kernel: loop1: detected capacity change from 0 to 221472
Jul 12 00:09:53.580632 kernel: loop2: detected capacity change from 0 to 146240
Jul 12 00:09:53.255672 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 12 00:09:53.258756 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 00:09:53.260700 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 00:09:53.267129 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:09:53.309201 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:09:53.322826 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jul 12 00:09:53.322845 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jul 12 00:09:53.351635 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:09:53.366331 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:09:53.375848 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 00:09:53.379075 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 00:09:53.385178 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 12 00:09:53.523310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 00:09:53.526144 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 00:09:53.583508 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 00:09:53.692036 kernel: loop3: detected capacity change from 0 to 113872
Jul 12 00:09:53.703011 kernel: loop4: detected capacity change from 0 to 221472
Jul 12 00:09:53.717009 kernel: loop5: detected capacity change from 0 to 146240
Jul 12 00:09:53.726390 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 00:09:53.730192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:09:53.734804 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 12 00:09:53.736686 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 12 00:09:53.737355 (sd-merge)[1270]: Merged extensions into '/usr'.
Jul 12 00:09:53.744044 systemd[1]: Reload requested from client PID 1232 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 00:09:53.744213 systemd[1]: Reloading...
Jul 12 00:09:53.759394 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jul 12 00:09:53.759807 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jul 12 00:09:53.802299 zram_generator::config[1300]: No configuration found.
Jul 12 00:09:53.897804 ldconfig[1227]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:09:53.921379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:09:54.012716 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:09:54.013381 systemd[1]: Reloading finished in 268 ms.
Jul 12 00:09:54.046997 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 00:09:54.048896 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 00:09:54.050824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:09:54.069041 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:09:54.071666 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:09:54.083259 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)...
Jul 12 00:09:54.083281 systemd[1]: Reloading...
Jul 12 00:09:54.126770 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 12 00:09:54.127392 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 12 00:09:54.127775 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:09:54.128146 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 12 00:09:54.129076 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:09:54.129435 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Jul 12 00:09:54.129586 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Jul 12 00:09:54.137283 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:09:54.137308 systemd-tmpfiles[1340]: Skipping /boot Jul 12 00:09:54.140016 zram_generator::config[1367]: No configuration found. Jul 12 00:09:54.153737 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:09:54.153986 systemd-tmpfiles[1340]: Skipping /boot Jul 12 00:09:54.269831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:09:54.362046 systemd[1]: Reloading finished in 278 ms. Jul 12 00:09:54.384290 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:09:54.402126 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:09:54.411821 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 00:09:54.414724 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:09:54.429433 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:09:54.434629 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:09:54.437868 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:09:54.443345 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jul 12 00:09:54.452561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 00:09:54.453039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:09:54.462263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:09:54.465492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:09:54.470424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:09:54.472118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:09:54.472251 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 00:09:54.482169 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:09:54.484851 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 00:09:54.487824 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:09:54.491087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:09:54.491523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:09:54.493917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:09:54.495188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:09:54.497314 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:09:54.497548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 12 00:09:54.502849 systemd-udevd[1412]: Using default interface naming scheme 'v255'. Jul 12 00:09:54.507454 augenrules[1436]: No rules Jul 12 00:09:54.508296 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:09:54.510887 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:09:54.511263 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 00:09:54.514085 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:09:54.523679 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 00:09:54.525803 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 00:09:54.529256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:09:54.533355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:09:54.540362 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:09:54.546385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:09:54.549613 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:09:54.551134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:09:54.551310 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 00:09:54.559946 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 12 00:09:54.561763 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:09:54.561919 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 00:09:54.564080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:09:54.572217 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:09:54.574964 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:09:54.576589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:09:54.579686 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:09:54.581362 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:09:54.583451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:09:54.585048 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:09:54.587446 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:09:54.587725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:09:54.599584 systemd[1]: Finished ensure-sysext.service. Jul 12 00:09:54.608670 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:09:54.617641 augenrules[1447]: /sbin/augenrules: No change Jul 12 00:09:54.630214 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:09:54.631570 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 12 00:09:54.631706 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:09:54.637309 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 00:09:54.645810 augenrules[1509]: No rules Jul 12 00:09:54.649536 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:09:54.649831 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 00:09:54.658112 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 12 00:09:54.724631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:09:54.728853 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:09:54.748007 kernel: mousedev: PS/2 mouse device common for all mice Jul 12 00:09:54.751987 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:09:54.759002 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 12 00:09:54.763996 kernel: ACPI: button: Power Button [PWRF] Jul 12 00:09:54.789322 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 12 00:09:54.789712 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 12 00:09:54.833100 systemd-networkd[1507]: lo: Link UP Jul 12 00:09:54.833110 systemd-networkd[1507]: lo: Gained carrier Jul 12 00:09:54.835007 systemd-networkd[1507]: Enumeration completed Jul 12 00:09:54.835113 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:09:54.835631 systemd-networkd[1507]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:09:54.835636 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
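Above, systemd-networkd matched eth0 against the catch-all `/usr/lib/systemd/network/zz-default.network` and flagged the match as based on a "potentially unpredictable interface name". The conventional remedy is a more specific `.network` unit that matches on a stable property such as the hardware address instead of the name — sketched below with a hypothetical file name and placeholder MAC, not values from this host:

```ini
# /etc/systemd/network/10-wired.network — illustrative example only;
# the MACAddress below is a placeholder, not taken from this log.
[Match]
MACAddress=52:54:00:12:34:56

[Network]
DHCP=ipv4
```

Files in `/etc/systemd/network/` sort before the `zz-` prefixed defaults in `/usr/lib/systemd/network/`, so this match wins without touching the vendor unit.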
Jul 12 00:09:54.836217 systemd-networkd[1507]: eth0: Link UP Jul 12 00:09:54.836413 systemd-networkd[1507]: eth0: Gained carrier Jul 12 00:09:54.836427 systemd-networkd[1507]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:09:54.841270 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 12 00:09:54.847100 systemd-networkd[1507]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:09:54.854743 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:09:54.868599 systemd-resolved[1409]: Positive Trust Anchors: Jul 12 00:09:54.868617 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:09:54.868650 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:09:54.872332 systemd-resolved[1409]: Defaulting to hostname 'linux'. Jul 12 00:09:54.874395 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:09:54.875613 systemd[1]: Reached target network.target - Network. Jul 12 00:09:54.876529 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:09:54.895025 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jul 12 00:09:54.896401 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:09:54.897550 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:09:54.898775 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:09:54.901069 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 12 00:09:54.945648 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:09:55.361407 systemd-timesyncd[1508]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 00:09:55.361459 systemd-timesyncd[1508]: Initial clock synchronization to Sat 2025-07-12 00:09:55.361307 UTC. Jul 12 00:09:55.361497 systemd-resolved[1409]: Clock change detected. Flushing caches. Jul 12 00:09:55.362536 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:09:55.362568 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:09:55.374966 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:09:55.376518 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:09:55.378068 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:09:55.379699 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:09:55.382678 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:09:55.385755 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:09:55.389768 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 12 00:09:55.391573 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jul 12 00:09:55.395202 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 12 00:09:55.402228 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:09:55.404066 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 12 00:09:55.406752 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 12 00:09:55.408592 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:09:55.428405 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:09:55.429691 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:09:55.430931 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:09:55.431067 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:09:55.432800 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:09:55.436116 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:09:55.438473 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:09:55.442976 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:09:55.453275 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:09:55.454655 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:09:55.458042 kernel: kvm_amd: TSC scaling supported Jul 12 00:09:55.458071 kernel: kvm_amd: Nested Virtualization enabled Jul 12 00:09:55.458155 kernel: kvm_amd: Nested Paging enabled Jul 12 00:09:55.458169 kernel: kvm_amd: LBR virtualization supported Jul 12 00:09:55.459221 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Jul 12 00:09:55.462904 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 12 00:09:55.462934 kernel: kvm_amd: Virtual GIF supported Jul 12 00:09:55.464665 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:09:55.466038 jq[1553]: false Jul 12 00:09:55.513521 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:09:55.547510 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:09:55.551325 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:09:55.553492 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing passwd entry cache Jul 12 00:09:55.553514 oslogin_cache_refresh[1555]: Refreshing passwd entry cache Jul 12 00:09:55.558199 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:09:55.561202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:09:55.563655 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:09:55.564470 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:09:55.565047 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting users, quitting Jul 12 00:09:55.565081 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 12 00:09:55.565046 oslogin_cache_refresh[1555]: Failure getting users, quitting Jul 12 00:09:55.565070 oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 12 00:09:55.565162 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing group entry cache Jul 12 00:09:55.565131 oslogin_cache_refresh[1555]: Refreshing group entry cache Jul 12 00:09:55.568706 extend-filesystems[1554]: Found /dev/vda6 Jul 12 00:09:55.571288 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:09:55.575958 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:09:55.604451 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting groups, quitting Jul 12 00:09:55.604451 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 12 00:09:55.604441 oslogin_cache_refresh[1555]: Failure getting groups, quitting Jul 12 00:09:55.604459 oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 12 00:09:55.605316 extend-filesystems[1554]: Found /dev/vda9 Jul 12 00:09:55.610173 kernel: EDAC MC: Ver: 3.0.0 Jul 12 00:09:55.610964 extend-filesystems[1554]: Checking size of /dev/vda9 Jul 12 00:09:55.615431 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:09:55.618688 jq[1574]: true Jul 12 00:09:55.617613 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:09:55.619407 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:09:55.620303 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 12 00:09:55.620695 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 12 00:09:55.624599 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:09:55.624959 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 12 00:09:55.632202 update_engine[1571]: I20250712 00:09:55.632128 1571 main.cc:92] Flatcar Update Engine starting Jul 12 00:09:55.643698 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:09:55.644014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:09:55.658931 jq[1584]: true Jul 12 00:09:55.659119 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:09:55.676114 tar[1583]: linux-amd64/helm Jul 12 00:09:55.677169 systemd-logind[1567]: Watching system buttons on /dev/input/event2 (Power Button) Jul 12 00:09:55.677195 systemd-logind[1567]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 12 00:09:55.677551 systemd-logind[1567]: New seat seat0. Jul 12 00:09:55.678713 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:09:55.721952 extend-filesystems[1554]: Resized partition /dev/vda9 Jul 12 00:09:55.733983 extend-filesystems[1615]: resize2fs 1.47.2 (1-Jan-2025) Jul 12 00:09:55.734933 bash[1611]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:09:55.739687 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:09:55.740465 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 00:09:55.748380 dbus-daemon[1551]: [system] SELinux support is enabled Jul 12 00:09:55.748927 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:09:55.750942 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 12 00:09:55.759370 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 12 00:09:55.759420 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:09:55.759571 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:09:55.759590 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:09:55.775442 update_engine[1571]: I20250712 00:09:55.775383 1571 update_check_scheduler.cc:74] Next update check in 6m35s Jul 12 00:09:55.778805 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:09:55.784970 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 12 00:09:55.784987 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:09:55.822598 extend-filesystems[1615]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 12 00:09:55.822598 extend-filesystems[1615]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:09:55.822598 extend-filesystems[1615]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 12 00:09:55.826010 extend-filesystems[1554]: Resized filesystem in /dev/vda9 Jul 12 00:09:55.828438 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:09:55.830193 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
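The on-line resize above grew /dev/vda9 from 553472 to 1864699 blocks at 4k each. A quick sanity check of what those block counts mean in bytes — plain arithmetic on the numbers reported in the log, no system access needed:

```shell
# Block counts as reported by resize2fs/EXT4 in the journal above.
old_blocks=553472
new_blocks=1864699
block_size=4096   # "(4k) blocks" per the resize message

old_bytes=$(( old_blocks * block_size ))   # roughly 2.1 GiB before the resize
new_bytes=$(( new_blocks * block_size ))   # roughly 7.1 GiB after

echo "$old_bytes -> $new_bytes"
```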
Jul 12 00:09:55.881864 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:09:55.920521 locksmithd[1618]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:09:55.929333 containerd[1585]: time="2025-07-12T00:09:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 12 00:09:55.931109 containerd[1585]: time="2025-07-12T00:09:55.931060667Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 12 00:09:55.939540 containerd[1585]: time="2025-07-12T00:09:55.939500010Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.919µs" Jul 12 00:09:55.939540 containerd[1585]: time="2025-07-12T00:09:55.939531399Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 12 00:09:55.939657 containerd[1585]: time="2025-07-12T00:09:55.939553079Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 12 00:09:55.939751 containerd[1585]: time="2025-07-12T00:09:55.939727356Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 12 00:09:55.939751 containerd[1585]: time="2025-07-12T00:09:55.939747143Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 12 00:09:55.939823 containerd[1585]: time="2025-07-12T00:09:55.939780977Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 00:09:55.939900 containerd[1585]: time="2025-07-12T00:09:55.939852401Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Jul 12 00:09:55.939900 containerd[1585]: time="2025-07-12T00:09:55.939884902Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 00:09:55.940196 containerd[1585]: time="2025-07-12T00:09:55.940169586Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 00:09:55.940196 containerd[1585]: time="2025-07-12T00:09:55.940191878Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 00:09:55.940287 containerd[1585]: time="2025-07-12T00:09:55.940207607Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 00:09:55.940287 containerd[1585]: time="2025-07-12T00:09:55.940218908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 12 00:09:55.940358 containerd[1585]: time="2025-07-12T00:09:55.940326851Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 12 00:09:55.940608 containerd[1585]: time="2025-07-12T00:09:55.940582981Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 00:09:55.940651 containerd[1585]: time="2025-07-12T00:09:55.940621383Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 00:09:55.940651 containerd[1585]: time="2025-07-12T00:09:55.940632584Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange 
type=io.containerd.event.v1 Jul 12 00:09:55.940714 containerd[1585]: time="2025-07-12T00:09:55.940677318Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 12 00:09:55.941271 containerd[1585]: time="2025-07-12T00:09:55.941237018Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 12 00:09:55.941362 containerd[1585]: time="2025-07-12T00:09:55.941329712Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:09:55.952053 containerd[1585]: time="2025-07-12T00:09:55.951981496Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 12 00:09:55.952183 containerd[1585]: time="2025-07-12T00:09:55.952137278Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952299362Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952328056Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952357591Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952387868Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952406674Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952422984Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 12 
00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952438583Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952453331Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952467658Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952495139Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952694193Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952721664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952746501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 12 00:09:55.952905 containerd[1585]: time="2025-07-12T00:09:55.952761479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 12 00:09:55.953314 containerd[1585]: time="2025-07-12T00:09:55.952776157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 12 00:09:55.953314 containerd[1585]: time="2025-07-12T00:09:55.952791736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 12 00:09:55.953314 containerd[1585]: time="2025-07-12T00:09:55.952809389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 12 00:09:55.953314 containerd[1585]: 
time="2025-07-12T00:09:55.952832332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 12 00:09:55.953314 containerd[1585]: time="2025-07-12T00:09:55.952849945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 12 00:09:55.953552 containerd[1585]: time="2025-07-12T00:09:55.952865354Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 12 00:09:55.953655 containerd[1585]: time="2025-07-12T00:09:55.953633195Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 12 00:09:55.953816 containerd[1585]: time="2025-07-12T00:09:55.953795760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 12 00:09:55.953922 containerd[1585]: time="2025-07-12T00:09:55.953903642Z" level=info msg="Start snapshots syncer" Jul 12 00:09:55.954024 containerd[1585]: time="2025-07-12T00:09:55.954003910Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 12 00:09:55.954553 containerd[1585]: time="2025-07-12T00:09:55.954390155Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 12 00:09:55.954553 containerd[1585]: time="2025-07-12T00:09:55.954476136Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 12 00:09:55.956040 containerd[1585]: time="2025-07-12T00:09:55.956010424Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 12 00:09:55.956749 containerd[1585]: time="2025-07-12T00:09:55.956302442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 12 00:09:55.956749 containerd[1585]: time="2025-07-12T00:09:55.956333941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 12 00:09:55.956749 containerd[1585]: time="2025-07-12T00:09:55.956359559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 12 00:09:55.956749 containerd[1585]: time="2025-07-12T00:09:55.956374998Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 12 00:09:55.956749 containerd[1585]: time="2025-07-12T00:09:55.956397561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 12 00:09:55.956749 containerd[1585]: time="2025-07-12T00:09:55.956412799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 12 00:09:55.956749 containerd[1585]: time="2025-07-12T00:09:55.956429250Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 12 00:09:55.956749 containerd[1585]: time="2025-07-12T00:09:55.956460288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 12 00:09:55.956749 containerd[1585]: time="2025-07-12T00:09:55.956475918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 12 00:09:55.956749 containerd[1585]: time="2025-07-12T00:09:55.956492629Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 12 00:09:55.957711 containerd[1585]: time="2025-07-12T00:09:55.957685387Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 00:09:55.957909 containerd[1585]: time="2025-07-12T00:09:55.957884801Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 00:09:55.957983 containerd[1585]: time="2025-07-12T00:09:55.957966123Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 00:09:55.958150 containerd[1585]: time="2025-07-12T00:09:55.958129270Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 00:09:55.958220 containerd[1585]: time="2025-07-12T00:09:55.958203359Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 12 00:09:55.958293 containerd[1585]: time="2025-07-12T00:09:55.958276165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 12 00:09:55.958377 containerd[1585]: time="2025-07-12T00:09:55.958358630Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 12 00:09:55.958458 containerd[1585]: time="2025-07-12T00:09:55.958443469Z" level=info msg="runtime interface created" Jul 12 00:09:55.958518 containerd[1585]: time="2025-07-12T00:09:55.958504353Z" level=info msg="created NRI interface" Jul 12 00:09:55.958583 containerd[1585]: time="2025-07-12T00:09:55.958567682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 12 00:09:55.958661 containerd[1585]: time="2025-07-12T00:09:55.958645338Z" level=info msg="Connect containerd service" Jul 12 00:09:55.958753 containerd[1585]: time="2025-07-12T00:09:55.958735657Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:09:55.962083 
containerd[1585]: time="2025-07-12T00:09:55.962049524Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:09:55.964392 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:09:55.966071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:09:55.972004 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:09:55.986587 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:09:55.986880 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:09:55.990800 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:09:56.148076 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:09:56.155725 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:09:56.161420 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 12 00:09:56.162784 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:09:56.246271 containerd[1585]: time="2025-07-12T00:09:56.246137173Z" level=info msg="Start subscribing containerd event" Jul 12 00:09:56.246271 containerd[1585]: time="2025-07-12T00:09:56.246219207Z" level=info msg="Start recovering state" Jul 12 00:09:56.246271 containerd[1585]: time="2025-07-12T00:09:56.246266305Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:09:56.246491 containerd[1585]: time="2025-07-12T00:09:56.246323893Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 12 00:09:56.246491 containerd[1585]: time="2025-07-12T00:09:56.246344231Z" level=info msg="Start event monitor" Jul 12 00:09:56.246491 containerd[1585]: time="2025-07-12T00:09:56.246365050Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:09:56.246491 containerd[1585]: time="2025-07-12T00:09:56.246373055Z" level=info msg="Start streaming server" Jul 12 00:09:56.246491 containerd[1585]: time="2025-07-12T00:09:56.246382744Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 12 00:09:56.246491 containerd[1585]: time="2025-07-12T00:09:56.246390188Z" level=info msg="runtime interface starting up..." Jul 12 00:09:56.246491 containerd[1585]: time="2025-07-12T00:09:56.246395998Z" level=info msg="starting plugins..." Jul 12 00:09:56.246491 containerd[1585]: time="2025-07-12T00:09:56.246414423Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 12 00:09:56.246705 containerd[1585]: time="2025-07-12T00:09:56.246688477Z" level=info msg="containerd successfully booted in 0.317928s" Jul 12 00:09:56.246831 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:09:56.387113 tar[1583]: linux-amd64/LICENSE Jul 12 00:09:56.387311 tar[1583]: linux-amd64/README.md Jul 12 00:09:56.417956 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:09:56.833150 systemd-networkd[1507]: eth0: Gained IPv6LL Jul 12 00:09:56.837884 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:09:56.839771 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:09:56.842617 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 12 00:09:56.845770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:09:56.848199 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
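The `level=error` entry earlier in this section shows containerd's CRI plugin failing to load CNI config because `/etc/cni/net.d` (the `confDir` from the config dump above) is empty at first boot. A minimal bridge conflist would satisfy that loader; the sketch below builds one. The network name, bridge name, and subnet are illustrative assumptions, not values taken from this log.

```python
import json

# Hypothetical values: "mynet", "cni0" and 10.85.0.0/16 are illustrative only.
conflist = {
    "cniVersion": "1.0.0",
    "name": "mynet",
    "plugins": [
        {
            "type": "bridge",  # standard CNI bridge plugin
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.85.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

# containerd's "cni network conf syncer" watches confDir; dropping a file such
# as 10-mynet.conflist there would clear the "no network config found" error.
text = json.dumps(conflist, indent=2)
print(text.splitlines()[0])
```

In practice a CNI provider (or kubeadm's network add-on) writes this file; the point here is only the shape the syncer expects.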
Jul 12 00:09:56.927926 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 00:09:56.928227 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 12 00:09:56.930531 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:09:56.933207 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:09:58.413539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:09:58.417658 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:09:58.419072 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:09:58.431561 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:09:58.432583 systemd[1]: Started sshd@0-10.0.0.57:22-10.0.0.1:58078.service - OpenSSH per-connection server daemon (10.0.0.1:58078). Jul 12 00:09:58.435316 systemd[1]: Startup finished in 3.349s (kernel) + 8.609s (initrd) + 5.836s (userspace) = 17.795s. Jul 12 00:09:58.598106 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 58078 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:09:58.601135 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:58.612498 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:09:58.614600 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:09:58.626165 systemd-logind[1567]: New session 1 of user core. Jul 12 00:09:58.706224 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:09:58.709701 systemd[1]: Starting user@500.service - User Manager for UID 500... 
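The "Startup finished" entry above breaks boot time into kernel, initrd, and userspace phases. The components are printed rounded to the millisecond, so their sum can differ from the printed total by about 1 ms; a quick check against the logged figures:

```python
# Phase durations exactly as printed by systemd in the log above (seconds).
kernel, initrd, userspace = 3.349, 8.609, 5.836
total_printed = 17.795

total = kernel + initrd + userspace
# Each phase is rounded independently, so allow a couple of ms of slack.
assert abs(total - total_printed) < 0.002
print(f"{total:.3f}s")
```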
Jul 12 00:09:58.726165 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:09:58.729091 systemd-logind[1567]: New session c1 of user core. Jul 12 00:09:58.900275 systemd[1709]: Queued start job for default target default.target. Jul 12 00:09:58.911388 systemd[1709]: Created slice app.slice - User Application Slice. Jul 12 00:09:58.911421 systemd[1709]: Reached target paths.target - Paths. Jul 12 00:09:58.911470 systemd[1709]: Reached target timers.target - Timers. Jul 12 00:09:58.913221 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:09:58.928443 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:09:58.928610 systemd[1709]: Reached target sockets.target - Sockets. Jul 12 00:09:58.928660 systemd[1709]: Reached target basic.target - Basic System. Jul 12 00:09:58.928701 systemd[1709]: Reached target default.target - Main User Target. Jul 12 00:09:58.928739 systemd[1709]: Startup finished in 191ms. Jul 12 00:09:58.929212 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:09:58.931756 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:09:59.058590 systemd[1]: Started sshd@1-10.0.0.57:22-10.0.0.1:58092.service - OpenSSH per-connection server daemon (10.0.0.1:58092). Jul 12 00:09:59.117137 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 58092 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:09:59.119381 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:59.125460 systemd-logind[1567]: New session 2 of user core. Jul 12 00:09:59.132039 systemd[1]: Started session-2.scope - Session 2 of User core. 
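The sshd entries in this section repeat the pattern `Accepted publickey for core from 10.0.0.1 port NNNNN ssh2: RSA SHA256:…` with the same key fingerprint each time. A small regex pulls out the user, source port, and fingerprint, which is handy when auditing the session churn below; the sample line is copied verbatim from the log.

```python
import re

LINE = ("Accepted publickey for core from 10.0.0.1 port 58092 ssh2: "
        "RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA")

PAT = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fp>SHA256:\S+)"
)

m = PAT.search(LINE)
assert m is not None
print(m.group("user"), m.group("port"), m.group("fp"))
```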
Jul 12 00:09:59.159539 kubelet[1693]: E0712 00:09:59.159462 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:09:59.164004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:09:59.164242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:09:59.164720 systemd[1]: kubelet.service: Consumed 2.009s CPU time, 265.8M memory peak. Jul 12 00:09:59.188026 sshd[1722]: Connection closed by 10.0.0.1 port 58092 Jul 12 00:09:59.188418 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:59.202464 systemd[1]: sshd@1-10.0.0.57:22-10.0.0.1:58092.service: Deactivated successfully. Jul 12 00:09:59.204785 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:09:59.205713 systemd-logind[1567]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:09:59.208961 systemd[1]: Started sshd@2-10.0.0.57:22-10.0.0.1:58100.service - OpenSSH per-connection server daemon (10.0.0.1:58100). Jul 12 00:09:59.209620 systemd-logind[1567]: Removed session 2. Jul 12 00:09:59.264833 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 58100 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:09:59.266504 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:59.272092 systemd-logind[1567]: New session 3 of user core. Jul 12 00:09:59.283088 systemd[1]: Started session-3.scope - Session 3 of User core. 
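The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the expected state before `kubeadm init` or `kubeadm join` has run, since kubeadm is what writes that file. The sketch below shows a hypothetical minimal subset of that file: `kind` and `apiVersion` are the real KubeletConfiguration identifiers, and `cgroupDriver: systemd` is an assumption chosen to match the `SystemdCgroup: true` setting in the containerd config dump earlier in this log.

```python
# Hypothetical minimal subset of /var/lib/kubelet/config.yaml; the file
# kubeadm actually generates contains many more fields than this.
config_yaml = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

fields = dict(
    line.split(": ", 1) for line in config_yaml.strip().splitlines()
)
assert fields["kind"] == "KubeletConfiguration"
print(fields["apiVersion"])
```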
Jul 12 00:09:59.336321 sshd[1731]: Connection closed by 10.0.0.1 port 58100 Jul 12 00:09:59.336498 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:59.351071 systemd[1]: sshd@2-10.0.0.57:22-10.0.0.1:58100.service: Deactivated successfully. Jul 12 00:09:59.353932 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:09:59.354847 systemd-logind[1567]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:09:59.360312 systemd[1]: Started sshd@3-10.0.0.57:22-10.0.0.1:58110.service - OpenSSH per-connection server daemon (10.0.0.1:58110). Jul 12 00:09:59.360989 systemd-logind[1567]: Removed session 3. Jul 12 00:09:59.419899 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 58110 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:09:59.421830 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:59.427173 systemd-logind[1567]: New session 4 of user core. Jul 12 00:09:59.441303 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:09:59.498430 sshd[1739]: Connection closed by 10.0.0.1 port 58110 Jul 12 00:09:59.498824 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:59.518563 systemd[1]: sshd@3-10.0.0.57:22-10.0.0.1:58110.service: Deactivated successfully. Jul 12 00:09:59.520643 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:09:59.521565 systemd-logind[1567]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:09:59.525258 systemd[1]: Started sshd@4-10.0.0.57:22-10.0.0.1:58116.service - OpenSSH per-connection server daemon (10.0.0.1:58116). Jul 12 00:09:59.526007 systemd-logind[1567]: Removed session 4. 
Jul 12 00:09:59.578277 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 58116 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:09:59.580148 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:59.584912 systemd-logind[1567]: New session 5 of user core. Jul 12 00:09:59.595045 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:09:59.656063 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:09:59.656454 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:09:59.676666 sudo[1748]: pam_unix(sudo:session): session closed for user root Jul 12 00:09:59.678961 sshd[1747]: Connection closed by 10.0.0.1 port 58116 Jul 12 00:09:59.679434 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:59.695013 systemd[1]: sshd@4-10.0.0.57:22-10.0.0.1:58116.service: Deactivated successfully. Jul 12 00:09:59.697140 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:09:59.697916 systemd-logind[1567]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:09:59.701164 systemd[1]: Started sshd@5-10.0.0.57:22-10.0.0.1:58120.service - OpenSSH per-connection server daemon (10.0.0.1:58120). Jul 12 00:09:59.701908 systemd-logind[1567]: Removed session 5. Jul 12 00:09:59.765939 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 58120 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:09:59.767705 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:59.772677 systemd-logind[1567]: New session 6 of user core. Jul 12 00:09:59.786042 systemd[1]: Started session-6.scope - Session 6 of User core. 
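sudo logs each invocation in the form `user : PWD=… ; USER=… ; COMMAND=…`, as in the `setenforce 1` entry above; splitting on the ` ; ` separators recovers the fields. The sample line is copied verbatim from the log.

```python
LINE = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"

invoking, _, rest = LINE.partition(" : ")
fields = {}
for part in rest.split(" ; "):
    key, _, value = part.partition("=")
    fields[key] = value

assert invoking == "core"
assert fields["USER"] == "root"
print(fields["COMMAND"])
```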
Jul 12 00:09:59.841614 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:09:59.841954 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:10:00.013728 sudo[1758]: pam_unix(sudo:session): session closed for user root Jul 12 00:10:00.020332 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 12 00:10:00.020633 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:10:00.032688 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 00:10:00.077822 augenrules[1780]: No rules Jul 12 00:10:00.079408 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:10:00.079694 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 00:10:00.080934 sudo[1757]: pam_unix(sudo:session): session closed for user root Jul 12 00:10:00.082538 sshd[1756]: Connection closed by 10.0.0.1 port 58120 Jul 12 00:10:00.082864 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:00.089654 systemd[1]: sshd@5-10.0.0.57:22-10.0.0.1:58120.service: Deactivated successfully. Jul 12 00:10:00.091637 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:10:00.092497 systemd-logind[1567]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:10:00.095471 systemd[1]: Started sshd@6-10.0.0.57:22-10.0.0.1:58136.service - OpenSSH per-connection server daemon (10.0.0.1:58136). Jul 12 00:10:00.096101 systemd-logind[1567]: Removed session 6. Jul 12 00:10:00.157134 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 58136 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:10:00.158588 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:00.162966 systemd-logind[1567]: New session 7 of user core. 
Jul 12 00:10:00.176996 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:10:00.230983 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:10:00.231302 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:10:00.781403 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:10:00.805232 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:10:01.562170 dockerd[1813]: time="2025-07-12T00:10:01.562069602Z" level=info msg="Starting up" Jul 12 00:10:01.563979 dockerd[1813]: time="2025-07-12T00:10:01.563947394Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 12 00:10:02.302422 dockerd[1813]: time="2025-07-12T00:10:02.302328009Z" level=info msg="Loading containers: start." Jul 12 00:10:02.314231 kernel: Initializing XFRM netlink socket Jul 12 00:10:02.645241 systemd-networkd[1507]: docker0: Link UP Jul 12 00:10:02.651289 dockerd[1813]: time="2025-07-12T00:10:02.651243229Z" level=info msg="Loading containers: done." Jul 12 00:10:02.682078 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2771724574-merged.mount: Deactivated successfully. 
Jul 12 00:10:02.682889 dockerd[1813]: time="2025-07-12T00:10:02.682833234Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:10:02.683026 dockerd[1813]: time="2025-07-12T00:10:02.682977715Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 12 00:10:02.683208 dockerd[1813]: time="2025-07-12T00:10:02.683185595Z" level=info msg="Initializing buildkit" Jul 12 00:10:02.720574 dockerd[1813]: time="2025-07-12T00:10:02.720516761Z" level=info msg="Completed buildkit initialization" Jul 12 00:10:02.727430 dockerd[1813]: time="2025-07-12T00:10:02.727389745Z" level=info msg="Daemon has completed initialization" Jul 12 00:10:02.727565 dockerd[1813]: time="2025-07-12T00:10:02.727454707Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:10:02.727769 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:10:03.740848 containerd[1585]: time="2025-07-12T00:10:03.740796399Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:10:04.346497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692033045.mount: Deactivated successfully. Jul 12 00:10:09.268780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:10:09.271391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:10:09.546441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
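The kube-apiserver pull that starts with the `PullImage` entry above completes with `bytes read=28077744` over `6.316320828s`, which allows a back-of-the-envelope throughput estimate. This is a rough figure only: "bytes read" is the compressed transfer (note it differs from the reported image `size`), and the duration includes registry round-trips and unpacking.

```python
# Figures taken from the kube-apiserver:v1.31.10 pull logged in this section.
bytes_read = 28_077_744      # "active requests=0, bytes read=28077744"
duration_s = 6.316_320_828   # "... in 6.316320828s"

rate_mb_s = bytes_read / duration_s / 1e6
assert 4.3 < rate_mb_s < 4.6  # ~4.4 MB/s effective transfer rate
print(f"{rate_mb_s:.2f} MB/s")
```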
Jul 12 00:10:09.646776 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:10:10.048443 containerd[1585]: time="2025-07-12T00:10:10.048346954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:10.050535 containerd[1585]: time="2025-07-12T00:10:10.050465859Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 12 00:10:10.052405 containerd[1585]: time="2025-07-12T00:10:10.052339564Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:10.056281 containerd[1585]: time="2025-07-12T00:10:10.055629897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:10.057247 containerd[1585]: time="2025-07-12T00:10:10.057176719Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 6.316320828s" Jul 12 00:10:10.057247 containerd[1585]: time="2025-07-12T00:10:10.057241170Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 12 00:10:10.058669 containerd[1585]: time="2025-07-12T00:10:10.058619295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 
00:10:10.088512 kubelet[2088]: E0712 00:10:10.088395 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:10:10.097119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:10:10.097440 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:10:10.097975 systemd[1]: kubelet.service: Consumed 344ms CPU time, 111.2M memory peak. Jul 12 00:10:12.426341 containerd[1585]: time="2025-07-12T00:10:12.426248938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:12.493094 containerd[1585]: time="2025-07-12T00:10:12.493004075Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 12 00:10:12.527070 containerd[1585]: time="2025-07-12T00:10:12.526984945Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:12.630800 containerd[1585]: time="2025-07-12T00:10:12.630716533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:12.632048 containerd[1585]: time="2025-07-12T00:10:12.631971007Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.573300505s" Jul 12 00:10:12.632048 containerd[1585]: time="2025-07-12T00:10:12.632019808Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 12 00:10:12.632743 containerd[1585]: time="2025-07-12T00:10:12.632707769Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:10:14.472760 containerd[1585]: time="2025-07-12T00:10:14.472640614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:14.474770 containerd[1585]: time="2025-07-12T00:10:14.474726057Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 12 00:10:14.477064 containerd[1585]: time="2025-07-12T00:10:14.476998310Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:14.479905 containerd[1585]: time="2025-07-12T00:10:14.479811538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:14.480844 containerd[1585]: time="2025-07-12T00:10:14.480750519Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.84797876s" Jul 12 00:10:14.480844 
containerd[1585]: time="2025-07-12T00:10:14.480834106Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 12 00:10:14.481492 containerd[1585]: time="2025-07-12T00:10:14.481415216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:10:15.863053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791219802.mount: Deactivated successfully. Jul 12 00:10:16.853543 containerd[1585]: time="2025-07-12T00:10:16.853428773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:16.857148 containerd[1585]: time="2025-07-12T00:10:16.857093869Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 12 00:10:16.859204 containerd[1585]: time="2025-07-12T00:10:16.859042575Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:16.862476 containerd[1585]: time="2025-07-12T00:10:16.862331545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:16.863045 containerd[1585]: time="2025-07-12T00:10:16.863012643Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.381511005s" Jul 12 00:10:16.863045 containerd[1585]: time="2025-07-12T00:10:16.863052017Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 12 00:10:16.863592 containerd[1585]: time="2025-07-12T00:10:16.863556754Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:10:17.753436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182337015.mount: Deactivated successfully. Jul 12 00:10:19.701856 containerd[1585]: time="2025-07-12T00:10:19.701754064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:19.703147 containerd[1585]: time="2025-07-12T00:10:19.703072999Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 12 00:10:19.706449 containerd[1585]: time="2025-07-12T00:10:19.706380815Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:19.709367 containerd[1585]: time="2025-07-12T00:10:19.709272389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:19.710458 containerd[1585]: time="2025-07-12T00:10:19.710407820Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.846814397s" Jul 12 00:10:19.710458 containerd[1585]: time="2025-07-12T00:10:19.710444098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image 
reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 12 00:10:19.711139 containerd[1585]: time="2025-07-12T00:10:19.711109075Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:10:20.268724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:10:20.270593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:10:20.512853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:10:20.518259 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:10:21.007380 kubelet[2173]: E0712 00:10:21.007303 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:10:21.011352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:10:21.011577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:10:21.011995 systemd[1]: kubelet.service: Consumed 349ms CPU time, 110.8M memory peak. Jul 12 00:10:21.975345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639241180.mount: Deactivated successfully. 
Jul 12 00:10:21.985251 containerd[1585]: time="2025-07-12T00:10:21.985181087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:10:21.986196 containerd[1585]: time="2025-07-12T00:10:21.986168469Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 12 00:10:21.987432 containerd[1585]: time="2025-07-12T00:10:21.987399909Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:10:21.992738 containerd[1585]: time="2025-07-12T00:10:21.992633197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:10:21.993314 containerd[1585]: time="2025-07-12T00:10:21.993271765Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.282132544s" Jul 12 00:10:21.993354 containerd[1585]: time="2025-07-12T00:10:21.993312231Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 12 00:10:21.993929 containerd[1585]: time="2025-07-12T00:10:21.993862824Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:10:22.585053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703667052.mount: Deactivated 
successfully. Jul 12 00:10:24.732790 containerd[1585]: time="2025-07-12T00:10:24.732691001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:24.740112 containerd[1585]: time="2025-07-12T00:10:24.740014922Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 12 00:10:24.741680 containerd[1585]: time="2025-07-12T00:10:24.741582512Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:24.748073 containerd[1585]: time="2025-07-12T00:10:24.747983491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:10:24.749144 containerd[1585]: time="2025-07-12T00:10:24.749081221Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.755155198s" Jul 12 00:10:24.749144 containerd[1585]: time="2025-07-12T00:10:24.749130713Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 12 00:10:27.014911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:10:27.015192 systemd[1]: kubelet.service: Consumed 349ms CPU time, 110.8M memory peak. Jul 12 00:10:27.018763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 12 00:10:27.441213 systemd[1]: Reload requested from client PID 2269 ('systemctl') (unit session-7.scope)... Jul 12 00:10:27.441234 systemd[1]: Reloading... Jul 12 00:10:27.829090 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1146105458 wd_nsec: 1146104914 Jul 12 00:10:27.918930 zram_generator::config[2314]: No configuration found. Jul 12 00:10:28.031431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:10:28.154698 systemd[1]: Reloading finished in 712 ms. Jul 12 00:10:28.211019 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 00:10:28.211146 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 00:10:28.211498 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:10:28.214113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:10:28.472822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:10:28.488298 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:10:28.542668 kubelet[2358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:10:28.542668 kubelet[2358]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:10:28.542668 kubelet[2358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:10:28.543324 kubelet[2358]: I0712 00:10:28.542830 2358 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:10:28.913994 kubelet[2358]: I0712 00:10:28.913941 2358 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:10:28.913994 kubelet[2358]: I0712 00:10:28.913974 2358 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:10:28.914255 kubelet[2358]: I0712 00:10:28.914235 2358 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:10:29.057619 kubelet[2358]: E0712 00:10:29.056315 2358 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:29.069381 kubelet[2358]: I0712 00:10:29.069314 2358 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:10:29.136621 kubelet[2358]: I0712 00:10:29.136580 2358 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 00:10:29.144671 kubelet[2358]: I0712 00:10:29.144621 2358 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:10:29.149644 kubelet[2358]: I0712 00:10:29.149595 2358 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:10:29.149883 kubelet[2358]: I0712 00:10:29.149815 2358 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:10:29.150167 kubelet[2358]: I0712 00:10:29.149864 2358 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 12 00:10:29.150357 kubelet[2358]: I0712 00:10:29.150175 2358 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:10:29.150357 kubelet[2358]: I0712 00:10:29.150188 2358 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:10:29.150441 kubelet[2358]: I0712 00:10:29.150393 2358 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:10:29.157249 kubelet[2358]: I0712 00:10:29.157173 2358 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:10:29.157317 kubelet[2358]: I0712 00:10:29.157270 2358 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:10:29.157389 kubelet[2358]: I0712 00:10:29.157354 2358 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:10:29.157420 kubelet[2358]: I0712 00:10:29.157406 2358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:10:29.170150 kubelet[2358]: I0712 00:10:29.169960 2358 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 12 00:10:29.175639 kubelet[2358]: W0712 00:10:29.175555 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:29.249832 kubelet[2358]: E0712 00:10:29.248780 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:29.249832 kubelet[2358]: I0712 00:10:29.249241 2358 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" 
Jul 12 00:10:29.249832 kubelet[2358]: W0712 00:10:29.249348 2358 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:10:29.259497 kubelet[2358]: W0712 00:10:29.259418 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:29.259542 kubelet[2358]: E0712 00:10:29.259526 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:29.275319 kubelet[2358]: I0712 00:10:29.275277 2358 server.go:1274] "Started kubelet" Jul 12 00:10:29.275933 kubelet[2358]: I0712 00:10:29.275850 2358 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:10:29.277520 kubelet[2358]: I0712 00:10:29.277490 2358 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:10:29.278280 kubelet[2358]: I0712 00:10:29.278263 2358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:10:29.278568 kubelet[2358]: I0712 00:10:29.278536 2358 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:10:29.278833 kubelet[2358]: I0712 00:10:29.278816 2358 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:10:29.278958 kubelet[2358]: E0712 00:10:29.278941 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:29.279351 kubelet[2358]: I0712 00:10:29.279335 2358 desired_state_of_world_populator.go:147] 
"Desired state populator starts to run" Jul 12 00:10:29.279436 kubelet[2358]: I0712 00:10:29.279422 2358 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:10:29.299106 kubelet[2358]: E0712 00:10:29.299048 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="200ms" Jul 12 00:10:29.326014 kubelet[2358]: I0712 00:10:29.325163 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:10:29.326014 kubelet[2358]: I0712 00:10:29.325091 2358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:10:29.326014 kubelet[2358]: I0712 00:10:29.325925 2358 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:10:29.326217 kubelet[2358]: I0712 00:10:29.326098 2358 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:10:29.326217 kubelet[2358]: I0712 00:10:29.326191 2358 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:10:29.326804 kubelet[2358]: W0712 00:10:29.298957 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:29.326855 kubelet[2358]: E0712 00:10:29.326817 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: 
connection refused" logger="UnhandledError" Jul 12 00:10:29.327126 kubelet[2358]: I0712 00:10:29.327088 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:10:29.327341 kubelet[2358]: I0712 00:10:29.327323 2358 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:10:29.327391 kubelet[2358]: I0712 00:10:29.327358 2358 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:10:29.327432 kubelet[2358]: E0712 00:10:29.327402 2358 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:10:29.332830 kubelet[2358]: W0712 00:10:29.332656 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:29.333036 kubelet[2358]: E0712 00:10:29.332813 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:29.334262 kubelet[2358]: E0712 00:10:29.334232 2358 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:10:29.334353 kubelet[2358]: I0712 00:10:29.334272 2358 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:10:29.345460 kubelet[2358]: E0712 00:10:29.343634 2358 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851587e3f512ca1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:10:29.275241633 +0000 UTC m=+0.781622544,LastTimestamp:2025-07-12 00:10:29.275241633 +0000 UTC m=+0.781622544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:10:29.356718 kubelet[2358]: I0712 00:10:29.356690 2358 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:10:29.356718 kubelet[2358]: I0712 00:10:29.356711 2358 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:10:29.356824 kubelet[2358]: I0712 00:10:29.356738 2358 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:10:29.380009 kubelet[2358]: E0712 00:10:29.379957 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:29.428516 kubelet[2358]: E0712 00:10:29.428386 2358 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:10:29.480803 kubelet[2358]: E0712 00:10:29.480748 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jul 12 00:10:29.500555 kubelet[2358]: E0712 00:10:29.500499 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="400ms" Jul 12 00:10:29.581339 kubelet[2358]: E0712 00:10:29.581278 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:29.629548 kubelet[2358]: E0712 00:10:29.629468 2358 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:10:29.682367 kubelet[2358]: E0712 00:10:29.682153 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:29.782953 kubelet[2358]: E0712 00:10:29.782859 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:29.883531 kubelet[2358]: E0712 00:10:29.883441 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:29.901084 kubelet[2358]: E0712 00:10:29.901049 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="800ms" Jul 12 00:10:29.984670 kubelet[2358]: E0712 00:10:29.984480 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:30.030671 kubelet[2358]: E0712 00:10:30.030603 2358 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:10:30.085513 kubelet[2358]: E0712 00:10:30.085433 2358 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" Jul 12 00:10:30.186261 kubelet[2358]: E0712 00:10:30.186185 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:30.218301 kubelet[2358]: W0712 00:10:30.218228 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:30.218486 kubelet[2358]: E0712 00:10:30.218304 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:30.286713 kubelet[2358]: E0712 00:10:30.286646 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:30.387729 kubelet[2358]: E0712 00:10:30.387662 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:30.421414 kubelet[2358]: W0712 00:10:30.421327 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:30.421414 kubelet[2358]: E0712 00:10:30.421407 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:30.488126 
kubelet[2358]: E0712 00:10:30.488056 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:30.588845 kubelet[2358]: E0712 00:10:30.588732 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:30.655334 kubelet[2358]: W0712 00:10:30.655280 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:30.655334 kubelet[2358]: E0712 00:10:30.655324 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:30.689203 kubelet[2358]: E0712 00:10:30.689126 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:30.701824 kubelet[2358]: E0712 00:10:30.701776 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="1.6s" Jul 12 00:10:30.789514 kubelet[2358]: E0712 00:10:30.789442 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:30.831637 kubelet[2358]: E0712 00:10:30.831566 2358 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:10:30.890142 kubelet[2358]: E0712 00:10:30.890025 2358 kubelet_node_status.go:453] "Error getting the current node 
from lister" err="node \"localhost\" not found" Jul 12 00:10:30.904845 kubelet[2358]: W0712 00:10:30.904768 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:30.904979 kubelet[2358]: E0712 00:10:30.904855 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:30.990787 kubelet[2358]: E0712 00:10:30.990711 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:31.091446 kubelet[2358]: E0712 00:10:31.091373 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:31.093578 kubelet[2358]: E0712 00:10:31.093526 2358 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:31.191956 kubelet[2358]: E0712 00:10:31.191782 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:31.292887 kubelet[2358]: E0712 00:10:31.292804 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:31.393899 kubelet[2358]: E0712 00:10:31.393801 2358 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jul 12 00:10:31.494653 kubelet[2358]: E0712 00:10:31.494441 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:31.595169 kubelet[2358]: E0712 00:10:31.595082 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:31.638369 kubelet[2358]: I0712 00:10:31.638289 2358 policy_none.go:49] "None policy: Start" Jul 12 00:10:31.639188 kubelet[2358]: I0712 00:10:31.639170 2358 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:10:31.639261 kubelet[2358]: I0712 00:10:31.639202 2358 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:10:31.695954 kubelet[2358]: E0712 00:10:31.695894 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:31.796782 kubelet[2358]: E0712 00:10:31.796709 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:31.834790 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:10:31.850982 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:10:31.854744 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 12 00:10:31.874484 kubelet[2358]: I0712 00:10:31.874413 2358 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:10:31.874770 kubelet[2358]: I0712 00:10:31.874750 2358 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:10:31.874859 kubelet[2358]: I0712 00:10:31.874775 2358 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:10:31.875445 kubelet[2358]: I0712 00:10:31.875123 2358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:10:31.876575 kubelet[2358]: E0712 00:10:31.876544 2358 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:10:31.978512 kubelet[2358]: I0712 00:10:31.978446 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:10:31.979009 kubelet[2358]: E0712 00:10:31.978977 2358 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Jul 12 00:10:32.181503 kubelet[2358]: I0712 00:10:32.181369 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:10:32.181819 kubelet[2358]: E0712 00:10:32.181790 2358 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Jul 12 00:10:32.303275 kubelet[2358]: E0712 00:10:32.303174 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="3.2s" Jul 12 00:10:32.444270 systemd[1]: Created slice 
kubepods-burstable-pod55575e45dd7c26dee14a9a4c8f182e51.slice - libcontainer container kubepods-burstable-pod55575e45dd7c26dee14a9a4c8f182e51.slice. Jul 12 00:10:32.462455 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 12 00:10:32.467509 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 12 00:10:32.502498 kubelet[2358]: I0712 00:10:32.502397 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55575e45dd7c26dee14a9a4c8f182e51-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"55575e45dd7c26dee14a9a4c8f182e51\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:10:32.502498 kubelet[2358]: I0712 00:10:32.502490 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:10:32.502726 kubelet[2358]: I0712 00:10:32.502521 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:10:32.502726 kubelet[2358]: I0712 00:10:32.502551 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:10:32.502726 kubelet[2358]: I0712 00:10:32.502572 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:10:32.502726 kubelet[2358]: I0712 00:10:32.502593 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:10:32.502726 kubelet[2358]: I0712 00:10:32.502620 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55575e45dd7c26dee14a9a4c8f182e51-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"55575e45dd7c26dee14a9a4c8f182e51\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:10:32.503000 kubelet[2358]: I0712 00:10:32.502641 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55575e45dd7c26dee14a9a4c8f182e51-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"55575e45dd7c26dee14a9a4c8f182e51\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:10:32.503000 kubelet[2358]: I0712 00:10:32.502661 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:10:32.584214 kubelet[2358]: I0712 00:10:32.584171 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:10:32.584735 kubelet[2358]: E0712 00:10:32.584676 2358 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Jul 12 00:10:32.759799 kubelet[2358]: E0712 00:10:32.759597 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:32.760612 containerd[1585]: time="2025-07-12T00:10:32.760569610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:55575e45dd7c26dee14a9a4c8f182e51,Namespace:kube-system,Attempt:0,}" Jul 12 00:10:32.766033 kubelet[2358]: E0712 00:10:32.765978 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:32.766635 containerd[1585]: time="2025-07-12T00:10:32.766590630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 12 00:10:32.771055 kubelet[2358]: E0712 00:10:32.770985 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:32.771950 containerd[1585]: time="2025-07-12T00:10:32.771909112Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 12 00:10:32.855877 containerd[1585]: time="2025-07-12T00:10:32.855798553Z" level=info msg="connecting to shim 3433f6414db87c07a9d5557bc95a142d6a06f297b55a67556632153017d5ea50" address="unix:///run/containerd/s/3318f20ce151c0c516789d32d0422f241b9db5971d91ba946ee5e16b9e33b99d" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:10:32.860627 containerd[1585]: time="2025-07-12T00:10:32.860585026Z" level=info msg="connecting to shim 7eefb364b01cf8579058cc17ada5183179d38bc9bae6133d9451ee353474fc44" address="unix:///run/containerd/s/01ad5b39ebe0296df480bd9517afa77c763726427edad3c36958d29b1f034d06" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:10:32.880399 containerd[1585]: time="2025-07-12T00:10:32.880328754Z" level=info msg="connecting to shim ba9063ae98aa18bae5f34e053a5e04564cc06ca8a43da8a28d548691da716f73" address="unix:///run/containerd/s/4e04055833372400ea93a98900aebe611caaa1edc818856a613d07c3f92c4a37" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:10:32.917040 systemd[1]: Started cri-containerd-7eefb364b01cf8579058cc17ada5183179d38bc9bae6133d9451ee353474fc44.scope - libcontainer container 7eefb364b01cf8579058cc17ada5183179d38bc9bae6133d9451ee353474fc44. Jul 12 00:10:32.923103 systemd[1]: Started cri-containerd-3433f6414db87c07a9d5557bc95a142d6a06f297b55a67556632153017d5ea50.scope - libcontainer container 3433f6414db87c07a9d5557bc95a142d6a06f297b55a67556632153017d5ea50. Jul 12 00:10:32.925266 systemd[1]: Started cri-containerd-ba9063ae98aa18bae5f34e053a5e04564cc06ca8a43da8a28d548691da716f73.scope - libcontainer container ba9063ae98aa18bae5f34e053a5e04564cc06ca8a43da8a28d548691da716f73. 
Jul 12 00:10:33.016933 containerd[1585]: time="2025-07-12T00:10:33.016569509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba9063ae98aa18bae5f34e053a5e04564cc06ca8a43da8a28d548691da716f73\"" Jul 12 00:10:33.018284 kubelet[2358]: E0712 00:10:33.018254 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:33.020901 containerd[1585]: time="2025-07-12T00:10:33.020730201Z" level=info msg="CreateContainer within sandbox \"ba9063ae98aa18bae5f34e053a5e04564cc06ca8a43da8a28d548691da716f73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:10:33.022017 containerd[1585]: time="2025-07-12T00:10:33.021985885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eefb364b01cf8579058cc17ada5183179d38bc9bae6133d9451ee353474fc44\"" Jul 12 00:10:33.022619 kubelet[2358]: E0712 00:10:33.022550 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:33.024373 containerd[1585]: time="2025-07-12T00:10:33.024333298Z" level=info msg="CreateContainer within sandbox \"7eefb364b01cf8579058cc17ada5183179d38bc9bae6133d9451ee353474fc44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:10:33.031731 containerd[1585]: time="2025-07-12T00:10:33.031684748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:55575e45dd7c26dee14a9a4c8f182e51,Namespace:kube-system,Attempt:0,} returns sandbox id \"3433f6414db87c07a9d5557bc95a142d6a06f297b55a67556632153017d5ea50\"" Jul 12 
00:10:33.032277 kubelet[2358]: E0712 00:10:33.032243 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:33.033894 containerd[1585]: time="2025-07-12T00:10:33.033561270Z" level=info msg="CreateContainer within sandbox \"3433f6414db87c07a9d5557bc95a142d6a06f297b55a67556632153017d5ea50\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:10:33.040960 containerd[1585]: time="2025-07-12T00:10:33.040922288Z" level=info msg="Container 86b1f18a308fdb76ff0fe2c6eb047b357a84433042c9a69112d36f99943fa4dd: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:10:33.056961 containerd[1585]: time="2025-07-12T00:10:33.056912402Z" level=info msg="CreateContainer within sandbox \"ba9063ae98aa18bae5f34e053a5e04564cc06ca8a43da8a28d548691da716f73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"86b1f18a308fdb76ff0fe2c6eb047b357a84433042c9a69112d36f99943fa4dd\"" Jul 12 00:10:33.057719 containerd[1585]: time="2025-07-12T00:10:33.057677707Z" level=info msg="StartContainer for \"86b1f18a308fdb76ff0fe2c6eb047b357a84433042c9a69112d36f99943fa4dd\"" Jul 12 00:10:33.058895 containerd[1585]: time="2025-07-12T00:10:33.058749207Z" level=info msg="Container 31d0dadf0e05d0b8c287c2f89585e59eea105cc506de43ce8511e4bc4a39c123: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:10:33.059896 containerd[1585]: time="2025-07-12T00:10:33.059594816Z" level=info msg="connecting to shim 86b1f18a308fdb76ff0fe2c6eb047b357a84433042c9a69112d36f99943fa4dd" address="unix:///run/containerd/s/4e04055833372400ea93a98900aebe611caaa1edc818856a613d07c3f92c4a37" protocol=ttrpc version=3 Jul 12 00:10:33.064167 containerd[1585]: time="2025-07-12T00:10:33.064114647Z" level=info msg="Container e97d58c1c7f5581e82eed7cd4965cadbb3cf27a6de2562fe323b40c3c40df602: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:10:33.070578 
containerd[1585]: time="2025-07-12T00:10:33.070506891Z" level=info msg="CreateContainer within sandbox \"7eefb364b01cf8579058cc17ada5183179d38bc9bae6133d9451ee353474fc44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"31d0dadf0e05d0b8c287c2f89585e59eea105cc506de43ce8511e4bc4a39c123\"" Jul 12 00:10:33.071978 containerd[1585]: time="2025-07-12T00:10:33.071951866Z" level=info msg="StartContainer for \"31d0dadf0e05d0b8c287c2f89585e59eea105cc506de43ce8511e4bc4a39c123\"" Jul 12 00:10:33.074999 containerd[1585]: time="2025-07-12T00:10:33.074919727Z" level=info msg="CreateContainer within sandbox \"3433f6414db87c07a9d5557bc95a142d6a06f297b55a67556632153017d5ea50\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e97d58c1c7f5581e82eed7cd4965cadbb3cf27a6de2562fe323b40c3c40df602\"" Jul 12 00:10:33.074999 containerd[1585]: time="2025-07-12T00:10:33.074966256Z" level=info msg="connecting to shim 31d0dadf0e05d0b8c287c2f89585e59eea105cc506de43ce8511e4bc4a39c123" address="unix:///run/containerd/s/01ad5b39ebe0296df480bd9517afa77c763726427edad3c36958d29b1f034d06" protocol=ttrpc version=3 Jul 12 00:10:33.076146 containerd[1585]: time="2025-07-12T00:10:33.076024321Z" level=info msg="StartContainer for \"e97d58c1c7f5581e82eed7cd4965cadbb3cf27a6de2562fe323b40c3c40df602\"" Jul 12 00:10:33.078021 containerd[1585]: time="2025-07-12T00:10:33.077983811Z" level=info msg="connecting to shim e97d58c1c7f5581e82eed7cd4965cadbb3cf27a6de2562fe323b40c3c40df602" address="unix:///run/containerd/s/3318f20ce151c0c516789d32d0422f241b9db5971d91ba946ee5e16b9e33b99d" protocol=ttrpc version=3 Jul 12 00:10:33.085164 systemd[1]: Started cri-containerd-86b1f18a308fdb76ff0fe2c6eb047b357a84433042c9a69112d36f99943fa4dd.scope - libcontainer container 86b1f18a308fdb76ff0fe2c6eb047b357a84433042c9a69112d36f99943fa4dd. 
Jul 12 00:10:33.111070 systemd[1]: Started cri-containerd-31d0dadf0e05d0b8c287c2f89585e59eea105cc506de43ce8511e4bc4a39c123.scope - libcontainer container 31d0dadf0e05d0b8c287c2f89585e59eea105cc506de43ce8511e4bc4a39c123. Jul 12 00:10:33.112637 systemd[1]: Started cri-containerd-e97d58c1c7f5581e82eed7cd4965cadbb3cf27a6de2562fe323b40c3c40df602.scope - libcontainer container e97d58c1c7f5581e82eed7cd4965cadbb3cf27a6de2562fe323b40c3c40df602. Jul 12 00:10:33.113021 kubelet[2358]: W0712 00:10:33.112943 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:33.113220 kubelet[2358]: E0712 00:10:33.113159 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:33.132099 kubelet[2358]: W0712 00:10:33.131993 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:33.132228 kubelet[2358]: E0712 00:10:33.132098 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:33.234593 containerd[1585]: time="2025-07-12T00:10:33.234442132Z" level=info msg="StartContainer for 
\"86b1f18a308fdb76ff0fe2c6eb047b357a84433042c9a69112d36f99943fa4dd\" returns successfully" Jul 12 00:10:33.248279 containerd[1585]: time="2025-07-12T00:10:33.248214020Z" level=info msg="StartContainer for \"31d0dadf0e05d0b8c287c2f89585e59eea105cc506de43ce8511e4bc4a39c123\" returns successfully" Jul 12 00:10:33.263904 containerd[1585]: time="2025-07-12T00:10:33.263834897Z" level=info msg="StartContainer for \"e97d58c1c7f5581e82eed7cd4965cadbb3cf27a6de2562fe323b40c3c40df602\" returns successfully" Jul 12 00:10:33.324774 kubelet[2358]: W0712 00:10:33.324661 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Jul 12 00:10:33.324774 kubelet[2358]: E0712 00:10:33.324742 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.57:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:10:33.343839 kubelet[2358]: E0712 00:10:33.343513 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:33.346206 kubelet[2358]: E0712 00:10:33.346120 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:33.348618 kubelet[2358]: E0712 00:10:33.348601 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:33.387562 kubelet[2358]: I0712 00:10:33.387096 2358 
kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:10:34.350603 kubelet[2358]: E0712 00:10:34.350565 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:34.781447 kubelet[2358]: E0712 00:10:34.780611 2358 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1851587e3f512ca1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:10:29.275241633 +0000 UTC m=+0.781622544,LastTimestamp:2025-07-12 00:10:29.275241633 +0000 UTC m=+0.781622544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:10:35.084584 kubelet[2358]: E0712 00:10:35.084240 2358 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1851587e42d1d6b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:10:29.334005428 +0000 UTC m=+0.840386349,LastTimestamp:2025-07-12 00:10:29.334005428 +0000 UTC m=+0.840386349,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:10:35.327854 kubelet[2358]: I0712 00:10:35.327514 2358 
kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:10:35.327854 kubelet[2358]: E0712 00:10:35.327602 2358 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 00:10:35.344413 kubelet[2358]: E0712 00:10:35.344240 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:35.353768 kubelet[2358]: E0712 00:10:35.353714 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:10:35.445403 kubelet[2358]: E0712 00:10:35.445310 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:35.546105 kubelet[2358]: E0712 00:10:35.546040 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:35.647352 kubelet[2358]: E0712 00:10:35.647141 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:35.748386 kubelet[2358]: E0712 00:10:35.748192 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:35.849479 kubelet[2358]: E0712 00:10:35.849379 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:35.949900 kubelet[2358]: E0712 00:10:35.949590 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:36.051068 kubelet[2358]: E0712 00:10:36.050785 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:36.152641 kubelet[2358]: E0712 00:10:36.152552 2358 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:36.255215 kubelet[2358]: E0712 00:10:36.255000 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:36.362891 kubelet[2358]: E0712 00:10:36.356701 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:36.457702 kubelet[2358]: E0712 00:10:36.457489 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:36.557911 kubelet[2358]: E0712 00:10:36.557685 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:36.658193 kubelet[2358]: E0712 00:10:36.658024 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:36.761645 kubelet[2358]: E0712 00:10:36.758802 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:36.862923 kubelet[2358]: E0712 00:10:36.862701 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:36.966971 kubelet[2358]: E0712 00:10:36.966894 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:37.077769 kubelet[2358]: E0712 00:10:37.075592 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:37.179009 kubelet[2358]: E0712 00:10:37.178827 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:37.279043 kubelet[2358]: E0712 00:10:37.278972 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:37.386823 
kubelet[2358]: E0712 00:10:37.386766 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:37.498083 kubelet[2358]: E0712 00:10:37.491365 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:37.601453 kubelet[2358]: E0712 00:10:37.599419 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:37.699816 kubelet[2358]: E0712 00:10:37.699738 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:37.800036 kubelet[2358]: E0712 00:10:37.799973 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:37.902746 kubelet[2358]: E0712 00:10:37.900979 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:38.003128 kubelet[2358]: E0712 00:10:38.003047 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:38.115432 kubelet[2358]: E0712 00:10:38.113091 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:38.213345 kubelet[2358]: E0712 00:10:38.213258 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:38.313520 kubelet[2358]: E0712 00:10:38.313431 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:38.413974 kubelet[2358]: E0712 00:10:38.413769 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:10:39.037250 systemd[1]: Reload requested from client PID 2634 ('systemctl') (unit session-7.scope)... 
Jul 12 00:10:39.037269 systemd[1]: Reloading... Jul 12 00:10:39.167920 zram_generator::config[2676]: No configuration found. Jul 12 00:10:39.216143 kubelet[2358]: I0712 00:10:39.216041 2358 apiserver.go:52] "Watching apiserver" Jul 12 00:10:39.275059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:10:39.280307 kubelet[2358]: I0712 00:10:39.280253 2358 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:10:39.456382 systemd[1]: Reloading finished in 418 ms. Jul 12 00:10:39.490712 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:10:39.517751 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:10:39.518199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:10:39.518293 systemd[1]: kubelet.service: Consumed 1.666s CPU time, 133.9M memory peak. Jul 12 00:10:39.522113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:10:39.779683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:10:39.793240 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:10:39.846917 kubelet[2722]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:10:39.846917 kubelet[2722]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 12 00:10:39.846917 kubelet[2722]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:10:39.847494 kubelet[2722]: I0712 00:10:39.846939 2722 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:10:39.854534 kubelet[2722]: I0712 00:10:39.854478 2722 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:10:39.854534 kubelet[2722]: I0712 00:10:39.854513 2722 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:10:39.854853 kubelet[2722]: I0712 00:10:39.854824 2722 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:10:39.856628 kubelet[2722]: I0712 00:10:39.856595 2722 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:10:39.858864 kubelet[2722]: I0712 00:10:39.858799 2722 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:10:39.864055 kubelet[2722]: I0712 00:10:39.864030 2722 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 00:10:39.869739 kubelet[2722]: I0712 00:10:39.869690 2722 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:10:39.869907 kubelet[2722]: I0712 00:10:39.869846 2722 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:10:39.870036 kubelet[2722]: I0712 00:10:39.869992 2722 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:10:39.870228 kubelet[2722]: I0712 00:10:39.870019 2722 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2}
Jul 12 00:10:39.870355 kubelet[2722]: I0712 00:10:39.870236 2722 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:10:39.870355 kubelet[2722]: I0712 00:10:39.870246 2722 container_manager_linux.go:300] "Creating device plugin manager"
Jul 12 00:10:39.870355 kubelet[2722]: I0712 00:10:39.870287 2722 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:10:39.870437 kubelet[2722]: I0712 00:10:39.870408 2722 kubelet.go:408] "Attempting to sync node with API server"
Jul 12 00:10:39.870437 kubelet[2722]: I0712 00:10:39.870422 2722 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:10:39.870502 kubelet[2722]: I0712 00:10:39.870480 2722 kubelet.go:314] "Adding apiserver pod source"
Jul 12 00:10:39.870532 kubelet[2722]: I0712 00:10:39.870502 2722 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:10:39.871665 kubelet[2722]: I0712 00:10:39.871639 2722 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 12 00:10:39.872187 kubelet[2722]: I0712 00:10:39.872127 2722 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 00:10:39.872891 kubelet[2722]: I0712 00:10:39.872852 2722 server.go:1274] "Started kubelet"
Jul 12 00:10:39.874602 kubelet[2722]: I0712 00:10:39.874517 2722 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:10:39.874861 kubelet[2722]: I0712 00:10:39.874814 2722 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:10:39.876685 kubelet[2722]: I0712 00:10:39.874983 2722 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:10:39.876685 kubelet[2722]: I0712 00:10:39.875907 2722 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:10:39.876685 kubelet[2722]: I0712 00:10:39.876643 2722 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:10:39.878007 kubelet[2722]: I0712 00:10:39.877970 2722 server.go:449] "Adding debug handlers to kubelet server"
Jul 12 00:10:39.878255 kubelet[2722]: E0712 00:10:39.878229 2722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:10:39.878317 kubelet[2722]: I0712 00:10:39.878279 2722 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 12 00:10:39.879424 kubelet[2722]: I0712 00:10:39.878510 2722 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 12 00:10:39.879424 kubelet[2722]: I0712 00:10:39.878694 2722 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 00:10:39.879502 kubelet[2722]: I0712 00:10:39.879489 2722 factory.go:221] Registration of the systemd container factory successfully
Jul 12 00:10:39.879653 kubelet[2722]: I0712 00:10:39.879600 2722 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 12 00:10:39.884811 kubelet[2722]: I0712 00:10:39.884751 2722 factory.go:221] Registration of the containerd container factory successfully
Jul 12 00:10:39.893117 kubelet[2722]: E0712 00:10:39.893035 2722 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 12 00:10:39.894751 kubelet[2722]: I0712 00:10:39.894717 2722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 12 00:10:39.896413 kubelet[2722]: I0712 00:10:39.896394 2722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 12 00:10:39.896490 kubelet[2722]: I0712 00:10:39.896420 2722 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 12 00:10:39.896490 kubelet[2722]: I0712 00:10:39.896441 2722 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 12 00:10:39.896544 kubelet[2722]: E0712 00:10:39.896499 2722 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 00:10:39.930316 kubelet[2722]: I0712 00:10:39.930265 2722 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 12 00:10:39.930316 kubelet[2722]: I0712 00:10:39.930286 2722 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 12 00:10:39.930316 kubelet[2722]: I0712 00:10:39.930307 2722 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:10:39.930491 kubelet[2722]: I0712 00:10:39.930459 2722 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 12 00:10:39.930491 kubelet[2722]: I0712 00:10:39.930474 2722 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 12 00:10:39.930554 kubelet[2722]: I0712 00:10:39.930497 2722 policy_none.go:49] "None policy: Start"
Jul 12 00:10:39.932264 kubelet[2722]: I0712 00:10:39.931265 2722 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 12 00:10:39.932264 kubelet[2722]: I0712 00:10:39.931300 2722 state_mem.go:35] "Initializing new in-memory state store"
Jul 12 00:10:39.932264 kubelet[2722]: I0712 00:10:39.931452 2722 state_mem.go:75] "Updated machine memory state"
Jul 12 00:10:39.936730 kubelet[2722]: I0712 00:10:39.936685 2722 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 12 00:10:39.937009 kubelet[2722]: I0712 00:10:39.936974 2722 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 12 00:10:39.937053 kubelet[2722]: I0712 00:10:39.936997 2722 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 12 00:10:39.937889 kubelet[2722]: I0712 00:10:39.937592 2722 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 12 00:10:40.043582 kubelet[2722]: I0712 00:10:40.043458 2722 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 12 00:10:40.179585 kubelet[2722]: I0712 00:10:40.179512 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55575e45dd7c26dee14a9a4c8f182e51-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"55575e45dd7c26dee14a9a4c8f182e51\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:10:40.179585 kubelet[2722]: I0712 00:10:40.179566 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55575e45dd7c26dee14a9a4c8f182e51-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"55575e45dd7c26dee14a9a4c8f182e51\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:10:40.179585 kubelet[2722]: I0712 00:10:40.179593 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:40.179828 kubelet[2722]: I0712 00:10:40.179644 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:40.179828 kubelet[2722]: I0712 00:10:40.179671 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:40.179828 kubelet[2722]: I0712 00:10:40.179688 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:40.179828 kubelet[2722]: I0712 00:10:40.179719 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55575e45dd7c26dee14a9a4c8f182e51-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"55575e45dd7c26dee14a9a4c8f182e51\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 00:10:40.179828 kubelet[2722]: I0712 00:10:40.179761 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:10:40.180012 kubelet[2722]: I0712 00:10:40.179789 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 12 00:10:40.515365 kubelet[2722]: E0712 00:10:40.515187 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:40.515365 kubelet[2722]: E0712 00:10:40.515357 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:40.516394 kubelet[2722]: E0712 00:10:40.516358 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:40.640783 update_engine[1571]: I20250712 00:10:40.640635 1571 update_attempter.cc:509] Updating boot flags...
Jul 12 00:10:40.677026 kubelet[2722]: I0712 00:10:40.676934 2722 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 12 00:10:40.679135 kubelet[2722]: I0712 00:10:40.679061 2722 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 12 00:10:40.872155 kubelet[2722]: I0712 00:10:40.871685 2722 apiserver.go:52] "Watching apiserver"
Jul 12 00:10:40.878815 kubelet[2722]: I0712 00:10:40.878776 2722 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 12 00:10:40.914757 kubelet[2722]: E0712 00:10:40.914707 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:40.915128 kubelet[2722]: E0712 00:10:40.915088 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:41.261609 kubelet[2722]: E0712 00:10:41.261443 2722 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:10:41.261739 kubelet[2722]: E0712 00:10:41.261717 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:41.275737 kubelet[2722]: I0712 00:10:41.275628 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.275466714 podStartE2EDuration="1.275466714s" podCreationTimestamp="2025-07-12 00:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:10:41.275358999 +0000 UTC m=+1.475998715" watchObservedRunningTime="2025-07-12 00:10:41.275466714 +0000 UTC m=+1.476106420"
Jul 12 00:10:41.509483 kubelet[2722]: I0712 00:10:41.509413 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.509391107 podStartE2EDuration="1.509391107s" podCreationTimestamp="2025-07-12 00:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:10:41.509281058 +0000 UTC m=+1.709920764" watchObservedRunningTime="2025-07-12 00:10:41.509391107 +0000 UTC m=+1.710030813"
Jul 12 00:10:41.708639 kubelet[2722]: I0712 00:10:41.708254 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7081911619999999 podStartE2EDuration="1.708191162s" podCreationTimestamp="2025-07-12 00:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:10:41.707572317 +0000 UTC m=+1.908212023" watchObservedRunningTime="2025-07-12 00:10:41.708191162 +0000 UTC m=+1.908830868"
Jul 12 00:10:41.795529 sudo[2773]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 12 00:10:41.795939 sudo[2773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 12 00:10:41.916818 kubelet[2722]: E0712 00:10:41.916678 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:42.633720 sudo[2773]: pam_unix(sudo:session): session closed for user root
Jul 12 00:10:45.338187 kubelet[2722]: E0712 00:10:45.338080 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:45.410557 kubelet[2722]: E0712 00:10:45.410489 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:45.922750 kubelet[2722]: E0712 00:10:45.922684 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:45.923399 kubelet[2722]: E0712 00:10:45.923346 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:46.243244 kubelet[2722]: E0712 00:10:46.243077 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:46.923662 kubelet[2722]: E0712 00:10:46.923612 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:47.136531 sudo[1792]: pam_unix(sudo:session): session closed for user root
Jul 12 00:10:47.139013 sshd[1791]: Connection closed by 10.0.0.1 port 58136
Jul 12 00:10:47.139631 sshd-session[1789]: pam_unix(sshd:session): session closed for user core
Jul 12 00:10:47.145415 systemd-logind[1567]: Session 7 logged out. Waiting for processes to exit.
Jul 12 00:10:47.145667 systemd[1]: sshd@6-10.0.0.57:22-10.0.0.1:58136.service: Deactivated successfully.
Jul 12 00:10:47.148560 systemd[1]: session-7.scope: Deactivated successfully.
Jul 12 00:10:47.148815 systemd[1]: session-7.scope: Consumed 5.674s CPU time, 263M memory peak.
Jul 12 00:10:47.153266 systemd-logind[1567]: Removed session 7.
Jul 12 00:10:47.270995 kubelet[2722]: I0712 00:10:47.270965 2722 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 12 00:10:47.271467 containerd[1585]: time="2025-07-12T00:10:47.271416149Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 12 00:10:47.271944 kubelet[2722]: I0712 00:10:47.271636 2722 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 12 00:10:48.029861 systemd[1]: Created slice kubepods-burstable-pod01eb5897_18aa_4dbe_945c_a323f721c1d4.slice - libcontainer container kubepods-burstable-pod01eb5897_18aa_4dbe_945c_a323f721c1d4.slice.
Jul 12 00:10:48.040115 systemd[1]: Created slice kubepods-besteffort-pod70dace59_0f56_4468_ac34_6801c3d2c926.slice - libcontainer container kubepods-besteffort-pod70dace59_0f56_4468_ac34_6801c3d2c926.slice.
Jul 12 00:10:48.126754 kubelet[2722]: I0712 00:10:48.126701 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-bpf-maps\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.126754 kubelet[2722]: I0712 00:10:48.126750 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-hostproc\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127361 kubelet[2722]: I0712 00:10:48.126779 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-etc-cni-netd\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127361 kubelet[2722]: I0712 00:10:48.126800 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-host-proc-sys-kernel\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127361 kubelet[2722]: I0712 00:10:48.126820 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbh9f\" (UniqueName: \"kubernetes.io/projected/01eb5897-18aa-4dbe-945c-a323f721c1d4-kube-api-access-nbh9f\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127361 kubelet[2722]: I0712 00:10:48.126840 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70dace59-0f56-4468-ac34-6801c3d2c926-xtables-lock\") pod \"kube-proxy-9ll49\" (UID: \"70dace59-0f56-4468-ac34-6801c3d2c926\") " pod="kube-system/kube-proxy-9ll49"
Jul 12 00:10:48.127361 kubelet[2722]: I0712 00:10:48.126862 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70dace59-0f56-4468-ac34-6801c3d2c926-lib-modules\") pod \"kube-proxy-9ll49\" (UID: \"70dace59-0f56-4468-ac34-6801c3d2c926\") " pod="kube-system/kube-proxy-9ll49"
Jul 12 00:10:48.127516 kubelet[2722]: I0712 00:10:48.126949 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgfg4\" (UniqueName: \"kubernetes.io/projected/70dace59-0f56-4468-ac34-6801c3d2c926-kube-api-access-jgfg4\") pod \"kube-proxy-9ll49\" (UID: \"70dace59-0f56-4468-ac34-6801c3d2c926\") " pod="kube-system/kube-proxy-9ll49"
Jul 12 00:10:48.127516 kubelet[2722]: I0712 00:10:48.126973 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01eb5897-18aa-4dbe-945c-a323f721c1d4-clustermesh-secrets\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127516 kubelet[2722]: I0712 00:10:48.126992 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-cgroup\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127516 kubelet[2722]: I0712 00:10:48.127008 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-xtables-lock\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127516 kubelet[2722]: I0712 00:10:48.127028 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-run\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127516 kubelet[2722]: I0712 00:10:48.127045 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cni-path\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127683 kubelet[2722]: I0712 00:10:48.127073 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01eb5897-18aa-4dbe-945c-a323f721c1d4-hubble-tls\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127683 kubelet[2722]: I0712 00:10:48.127095 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70dace59-0f56-4468-ac34-6801c3d2c926-kube-proxy\") pod \"kube-proxy-9ll49\" (UID: \"70dace59-0f56-4468-ac34-6801c3d2c926\") " pod="kube-system/kube-proxy-9ll49"
Jul 12 00:10:48.127683 kubelet[2722]: I0712 00:10:48.127115 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-lib-modules\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127683 kubelet[2722]: I0712 00:10:48.127137 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-config-path\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.127683 kubelet[2722]: I0712 00:10:48.127158 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-host-proc-sys-net\") pod \"cilium-8qglz\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") " pod="kube-system/cilium-8qglz"
Jul 12 00:10:48.148452 systemd[1]: Created slice kubepods-besteffort-pod446f5b44_6e01_4c1f_961b_905fb950a9dd.slice - libcontainer container kubepods-besteffort-pod446f5b44_6e01_4c1f_961b_905fb950a9dd.slice.
Jul 12 00:10:48.328799 kubelet[2722]: I0712 00:10:48.328618 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/446f5b44-6e01-4c1f-961b-905fb950a9dd-cilium-config-path\") pod \"cilium-operator-5d85765b45-x9ffs\" (UID: \"446f5b44-6e01-4c1f-961b-905fb950a9dd\") " pod="kube-system/cilium-operator-5d85765b45-x9ffs"
Jul 12 00:10:48.328799 kubelet[2722]: I0712 00:10:48.328677 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjsb\" (UniqueName: \"kubernetes.io/projected/446f5b44-6e01-4c1f-961b-905fb950a9dd-kube-api-access-6mjsb\") pod \"cilium-operator-5d85765b45-x9ffs\" (UID: \"446f5b44-6e01-4c1f-961b-905fb950a9dd\") " pod="kube-system/cilium-operator-5d85765b45-x9ffs"
Jul 12 00:10:48.335930 kubelet[2722]: E0712 00:10:48.335863 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:48.338893 containerd[1585]: time="2025-07-12T00:10:48.338127414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qglz,Uid:01eb5897-18aa-4dbe-945c-a323f721c1d4,Namespace:kube-system,Attempt:0,}"
Jul 12 00:10:48.352094 kubelet[2722]: E0712 00:10:48.352037 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:48.352706 containerd[1585]: time="2025-07-12T00:10:48.352657873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9ll49,Uid:70dace59-0f56-4468-ac34-6801c3d2c926,Namespace:kube-system,Attempt:0,}"
Jul 12 00:10:49.052755 kubelet[2722]: E0712 00:10:49.052690 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:49.053454 containerd[1585]: time="2025-07-12T00:10:49.053395827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-x9ffs,Uid:446f5b44-6e01-4c1f-961b-905fb950a9dd,Namespace:kube-system,Attempt:0,}"
Jul 12 00:10:50.742720 containerd[1585]: time="2025-07-12T00:10:50.742666841Z" level=info msg="connecting to shim 6d9ede83326c4ab20782e2851568612c5886781c3ba76fbaa9c9ab8b81286173" address="unix:///run/containerd/s/4084a93c9f78d065cf6eb4019ee5d0a567a0bfc2e79c27232fa35aa9966d3bed" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:10:50.772038 systemd[1]: Started cri-containerd-6d9ede83326c4ab20782e2851568612c5886781c3ba76fbaa9c9ab8b81286173.scope - libcontainer container 6d9ede83326c4ab20782e2851568612c5886781c3ba76fbaa9c9ab8b81286173.
Jul 12 00:10:51.053890 containerd[1585]: time="2025-07-12T00:10:51.053809247Z" level=info msg="connecting to shim b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818" address="unix:///run/containerd/s/eb2d37f10c0b6d9f0fbd5ce1ff2e90c377aa22c583d172139fddb13ccf50a335" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:10:51.067291 containerd[1585]: time="2025-07-12T00:10:51.067235094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9ll49,Uid:70dace59-0f56-4468-ac34-6801c3d2c926,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d9ede83326c4ab20782e2851568612c5886781c3ba76fbaa9c9ab8b81286173\""
Jul 12 00:10:51.068301 kubelet[2722]: E0712 00:10:51.068273 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:51.073590 containerd[1585]: time="2025-07-12T00:10:51.073493049Z" level=info msg="CreateContainer within sandbox \"6d9ede83326c4ab20782e2851568612c5886781c3ba76fbaa9c9ab8b81286173\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 12 00:10:51.091033 systemd[1]: Started cri-containerd-b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818.scope - libcontainer container b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818.
Jul 12 00:10:51.154651 containerd[1585]: time="2025-07-12T00:10:51.154569664Z" level=info msg="connecting to shim 2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00" address="unix:///run/containerd/s/caa7f794e764fc279ca21e4935558e0371fe5c38e1a2799835b23fd30ca11c46" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:10:51.191158 systemd[1]: Started cri-containerd-2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00.scope - libcontainer container 2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00.
Jul 12 00:10:51.245007 containerd[1585]: time="2025-07-12T00:10:51.244925169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qglz,Uid:01eb5897-18aa-4dbe-945c-a323f721c1d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\""
Jul 12 00:10:51.246233 kubelet[2722]: E0712 00:10:51.246195 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:51.261996 containerd[1585]: time="2025-07-12T00:10:51.261926256Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 12 00:10:51.367434 containerd[1585]: time="2025-07-12T00:10:51.367264545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-x9ffs,Uid:446f5b44-6e01-4c1f-961b-905fb950a9dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00\""
Jul 12 00:10:51.368416 kubelet[2722]: E0712 00:10:51.368358 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:51.904368 containerd[1585]: time="2025-07-12T00:10:51.904098325Z" level=info msg="Container c3e82dd4ca26a59de750dcbb30d6913da0e45f85ad53c37fd88b8e21e6ca81fb: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:10:52.886547 containerd[1585]: time="2025-07-12T00:10:52.886352560Z" level=info msg="CreateContainer within sandbox \"6d9ede83326c4ab20782e2851568612c5886781c3ba76fbaa9c9ab8b81286173\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c3e82dd4ca26a59de750dcbb30d6913da0e45f85ad53c37fd88b8e21e6ca81fb\""
Jul 12 00:10:52.895327 containerd[1585]: time="2025-07-12T00:10:52.893053998Z" level=info msg="StartContainer for \"c3e82dd4ca26a59de750dcbb30d6913da0e45f85ad53c37fd88b8e21e6ca81fb\""
Jul 12 00:10:52.898759 containerd[1585]: time="2025-07-12T00:10:52.897261949Z" level=info msg="connecting to shim c3e82dd4ca26a59de750dcbb30d6913da0e45f85ad53c37fd88b8e21e6ca81fb" address="unix:///run/containerd/s/4084a93c9f78d065cf6eb4019ee5d0a567a0bfc2e79c27232fa35aa9966d3bed" protocol=ttrpc version=3
Jul 12 00:10:52.988318 systemd[1]: Started cri-containerd-c3e82dd4ca26a59de750dcbb30d6913da0e45f85ad53c37fd88b8e21e6ca81fb.scope - libcontainer container c3e82dd4ca26a59de750dcbb30d6913da0e45f85ad53c37fd88b8e21e6ca81fb.
Jul 12 00:10:53.186557 containerd[1585]: time="2025-07-12T00:10:53.185517231Z" level=info msg="StartContainer for \"c3e82dd4ca26a59de750dcbb30d6913da0e45f85ad53c37fd88b8e21e6ca81fb\" returns successfully"
Jul 12 00:10:54.055791 kubelet[2722]: E0712 00:10:54.055735 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:55.057479 kubelet[2722]: E0712 00:10:55.057442 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:10:59.689250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055922679.mount: Deactivated successfully.
Jul 12 00:11:09.778366 containerd[1585]: time="2025-07-12T00:11:09.778261073Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:09.900197 containerd[1585]: time="2025-07-12T00:11:09.900139387Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jul 12 00:11:10.070172 containerd[1585]: time="2025-07-12T00:11:10.069996255Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:10.072119 containerd[1585]: time="2025-07-12T00:11:10.072081654Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.810092611s"
Jul 12 00:11:10.072119 containerd[1585]: time="2025-07-12T00:11:10.072114406Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 12 00:11:10.075564 containerd[1585]: time="2025-07-12T00:11:10.075538922Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 12 00:11:10.083155 containerd[1585]: time="2025-07-12T00:11:10.083117226Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 12 00:11:11.202980 containerd[1585]: time="2025-07-12T00:11:11.202907020Z" level=info msg="Container 87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:11:11.207041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642649586.mount: Deactivated successfully.
Jul 12 00:11:11.637374 containerd[1585]: time="2025-07-12T00:11:11.637309926Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\""
Jul 12 00:11:11.637999 containerd[1585]: time="2025-07-12T00:11:11.637951963Z" level=info msg="StartContainer for \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\""
Jul 12 00:11:11.639237 containerd[1585]: time="2025-07-12T00:11:11.639177506Z" level=info msg="connecting to shim 87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf" address="unix:///run/containerd/s/eb2d37f10c0b6d9f0fbd5ce1ff2e90c377aa22c583d172139fddb13ccf50a335" protocol=ttrpc version=3
Jul 12 00:11:11.705189 systemd[1]: Started cri-containerd-87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf.scope - libcontainer container 87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf.
Jul 12 00:11:11.757704 systemd[1]: cri-containerd-87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf.scope: Deactivated successfully.
Jul 12 00:11:11.760691 containerd[1585]: time="2025-07-12T00:11:11.760630377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\" id:\"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\" pid:3162 exited_at:{seconds:1752279071 nanos:760107794}"
Jul 12 00:11:11.839210 containerd[1585]: time="2025-07-12T00:11:11.839130334Z" level=info msg="received exit event container_id:\"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\" id:\"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\" pid:3162 exited_at:{seconds:1752279071 nanos:760107794}"
Jul 12 00:11:11.840335 containerd[1585]: time="2025-07-12T00:11:11.840224040Z" level=info msg="StartContainer for \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\" returns successfully"
Jul 12 00:11:11.865309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf-rootfs.mount: Deactivated successfully.
Jul 12 00:11:12.094399 kubelet[2722]: E0712 00:11:12.094358 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:12.476019 kubelet[2722]: I0712 00:11:12.474069 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9ll49" podStartSLOduration=25.474045212 podStartE2EDuration="25.474045212s" podCreationTimestamp="2025-07-12 00:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:10:54.265954347 +0000 UTC m=+14.466594053" watchObservedRunningTime="2025-07-12 00:11:12.474045212 +0000 UTC m=+32.674684918"
Jul 12 00:11:13.099083 kubelet[2722]: E0712 00:11:13.098619 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:13.102440 containerd[1585]: time="2025-07-12T00:11:13.102350139Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 12 00:11:13.709011 containerd[1585]: time="2025-07-12T00:11:13.708929331Z" level=info msg="Container 8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:11:13.713942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705983226.mount: Deactivated successfully.
Jul 12 00:11:14.086569 containerd[1585]: time="2025-07-12T00:11:14.086514113Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\"" Jul 12 00:11:14.087243 containerd[1585]: time="2025-07-12T00:11:14.087191315Z" level=info msg="StartContainer for \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\"" Jul 12 00:11:14.088299 containerd[1585]: time="2025-07-12T00:11:14.088269020Z" level=info msg="connecting to shim 8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18" address="unix:///run/containerd/s/eb2d37f10c0b6d9f0fbd5ce1ff2e90c377aa22c583d172139fddb13ccf50a335" protocol=ttrpc version=3 Jul 12 00:11:14.123054 systemd[1]: Started cri-containerd-8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18.scope - libcontainer container 8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18. Jul 12 00:11:14.386474 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:11:14.386803 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:11:14.387068 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:11:14.389207 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:11:14.391669 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 12 00:11:14.392407 systemd[1]: cri-containerd-8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18.scope: Deactivated successfully. 
Jul 12 00:11:14.395125 containerd[1585]: time="2025-07-12T00:11:14.395025009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\" id:\"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\" pid:3207 exited_at:{seconds:1752279074 nanos:394379236}" Jul 12 00:11:14.427002 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:11:14.458801 containerd[1585]: time="2025-07-12T00:11:14.458729275Z" level=info msg="received exit event container_id:\"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\" id:\"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\" pid:3207 exited_at:{seconds:1752279074 nanos:394379236}" Jul 12 00:11:14.467607 containerd[1585]: time="2025-07-12T00:11:14.467562900Z" level=info msg="StartContainer for \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\" returns successfully" Jul 12 00:11:14.710759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18-rootfs.mount: Deactivated successfully. Jul 12 00:11:15.804143 kubelet[2722]: E0712 00:11:15.804096 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:15.805983 containerd[1585]: time="2025-07-12T00:11:15.805895748Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:11:18.700309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269773570.mount: Deactivated successfully. 
Jul 12 00:11:19.531456 containerd[1585]: time="2025-07-12T00:11:19.530375825Z" level=info msg="Container d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:11:20.795904 containerd[1585]: time="2025-07-12T00:11:20.795813059Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\"" Jul 12 00:11:20.796895 containerd[1585]: time="2025-07-12T00:11:20.796356359Z" level=info msg="StartContainer for \"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\"" Jul 12 00:11:20.797725 containerd[1585]: time="2025-07-12T00:11:20.797702047Z" level=info msg="connecting to shim d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6" address="unix:///run/containerd/s/eb2d37f10c0b6d9f0fbd5ce1ff2e90c377aa22c583d172139fddb13ccf50a335" protocol=ttrpc version=3 Jul 12 00:11:20.824029 systemd[1]: Started cri-containerd-d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6.scope - libcontainer container d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6. Jul 12 00:11:20.893985 systemd[1]: cri-containerd-d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6.scope: Deactivated successfully. 
Jul 12 00:11:20.897208 containerd[1585]: time="2025-07-12T00:11:20.897156021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\" id:\"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\" pid:3258 exited_at:{seconds:1752279080 nanos:896703561}" Jul 12 00:11:21.178618 containerd[1585]: time="2025-07-12T00:11:21.178468175Z" level=info msg="received exit event container_id:\"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\" id:\"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\" pid:3258 exited_at:{seconds:1752279080 nanos:896703561}" Jul 12 00:11:21.189269 containerd[1585]: time="2025-07-12T00:11:21.189230966Z" level=info msg="StartContainer for \"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\" returns successfully" Jul 12 00:11:21.204615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6-rootfs.mount: Deactivated successfully. Jul 12 00:11:22.098281 kubelet[2722]: E0712 00:11:22.098237 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:22.100883 containerd[1585]: time="2025-07-12T00:11:22.100823056Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:11:23.229447 containerd[1585]: time="2025-07-12T00:11:23.229397778Z" level=info msg="Container 847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:11:23.231167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2463357394.mount: Deactivated successfully. 
Jul 12 00:11:23.931275 containerd[1585]: time="2025-07-12T00:11:23.931131635Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\"" Jul 12 00:11:23.932319 containerd[1585]: time="2025-07-12T00:11:23.931738274Z" level=info msg="StartContainer for \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\"" Jul 12 00:11:23.932865 containerd[1585]: time="2025-07-12T00:11:23.932829704Z" level=info msg="connecting to shim 847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817" address="unix:///run/containerd/s/eb2d37f10c0b6d9f0fbd5ce1ff2e90c377aa22c583d172139fddb13ccf50a335" protocol=ttrpc version=3 Jul 12 00:11:23.957089 systemd[1]: Started cri-containerd-847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817.scope - libcontainer container 847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817. Jul 12 00:11:23.996292 systemd[1]: cri-containerd-847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817.scope: Deactivated successfully. 
Jul 12 00:11:23.996806 containerd[1585]: time="2025-07-12T00:11:23.996719601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\" id:\"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\" pid:3308 exited_at:{seconds:1752279083 nanos:996457930}" Jul 12 00:11:24.216181 containerd[1585]: time="2025-07-12T00:11:24.214970233Z" level=info msg="received exit event container_id:\"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\" id:\"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\" pid:3308 exited_at:{seconds:1752279083 nanos:996457930}" Jul 12 00:11:24.217068 containerd[1585]: time="2025-07-12T00:11:24.217015974Z" level=info msg="StartContainer for \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\" returns successfully" Jul 12 00:11:24.236703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817-rootfs.mount: Deactivated successfully. 
Jul 12 00:11:25.221076 kubelet[2722]: E0712 00:11:25.221034 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:26.226208 kubelet[2722]: E0712 00:11:26.226011 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:26.227857 containerd[1585]: time="2025-07-12T00:11:26.227809281Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:11:26.750205 containerd[1585]: time="2025-07-12T00:11:26.750069616Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:11:26.820031 containerd[1585]: time="2025-07-12T00:11:26.819949192Z" level=info msg="Container 7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:11:26.883241 containerd[1585]: time="2025-07-12T00:11:26.883169786Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 12 00:11:27.013214 containerd[1585]: time="2025-07-12T00:11:27.013044059Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:11:27.070493 containerd[1585]: time="2025-07-12T00:11:27.070415138Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 16.99484634s" Jul 12 00:11:27.070493 containerd[1585]: time="2025-07-12T00:11:27.070460153Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 12 00:11:27.074388 containerd[1585]: time="2025-07-12T00:11:27.074345776Z" level=info msg="CreateContainer within sandbox \"2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:11:27.231015 containerd[1585]: time="2025-07-12T00:11:27.230923479Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\"" Jul 12 00:11:27.231747 containerd[1585]: time="2025-07-12T00:11:27.231690318Z" level=info msg="StartContainer for \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\"" Jul 12 00:11:27.232798 containerd[1585]: time="2025-07-12T00:11:27.232748736Z" level=info msg="connecting to shim 7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2" address="unix:///run/containerd/s/eb2d37f10c0b6d9f0fbd5ce1ff2e90c377aa22c583d172139fddb13ccf50a335" protocol=ttrpc version=3 Jul 12 00:11:27.259026 systemd[1]: Started cri-containerd-7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2.scope - libcontainer container 7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2. 
Jul 12 00:11:27.881408 containerd[1585]: time="2025-07-12T00:11:27.881348719Z" level=info msg="StartContainer for \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\" returns successfully" Jul 12 00:11:28.203100 containerd[1585]: time="2025-07-12T00:11:28.202906933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\" id:\"ce06a565acc1fb0a585014cbd9ab2abd7a750d593168d777189f76ec93383dfa\" pid:3424 exited_at:{seconds:1752279088 nanos:202447291}" Jul 12 00:11:28.731063 kubelet[2722]: I0712 00:11:28.731003 2722 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:11:28.837527 containerd[1585]: time="2025-07-12T00:11:28.837452527Z" level=info msg="Container fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:11:29.235077 kubelet[2722]: E0712 00:11:29.235041 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:29.761215 containerd[1585]: time="2025-07-12T00:11:29.761167916Z" level=info msg="CreateContainer within sandbox \"2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\"" Jul 12 00:11:29.762136 containerd[1585]: time="2025-07-12T00:11:29.761852581Z" level=info msg="StartContainer for \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\"" Jul 12 00:11:29.762808 containerd[1585]: time="2025-07-12T00:11:29.762779460Z" level=info msg="connecting to shim fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41" address="unix:///run/containerd/s/caa7f794e764fc279ca21e4935558e0371fe5c38e1a2799835b23fd30ca11c46" protocol=ttrpc version=3 Jul 12 00:11:29.793168 
systemd[1]: Started cri-containerd-fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41.scope - libcontainer container fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41. Jul 12 00:11:30.541438 containerd[1585]: time="2025-07-12T00:11:30.541390059Z" level=info msg="StartContainer for \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" returns successfully" Jul 12 00:11:30.545894 kubelet[2722]: E0712 00:11:30.545778 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:30.549590 systemd[1]: Created slice kubepods-burstable-pod8ceb7649_78f4_4cde_b59b_0722cb99a876.slice - libcontainer container kubepods-burstable-pod8ceb7649_78f4_4cde_b59b_0722cb99a876.slice. Jul 12 00:11:30.555965 systemd[1]: Created slice kubepods-burstable-pod4fc842ad_1e23_467e_83aa_5973b291e8ad.slice - libcontainer container kubepods-burstable-pod4fc842ad_1e23_467e_83aa_5973b291e8ad.slice. 
Jul 12 00:11:30.652126 kubelet[2722]: I0712 00:11:30.652056 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wv7\" (UniqueName: \"kubernetes.io/projected/4fc842ad-1e23-467e-83aa-5973b291e8ad-kube-api-access-k2wv7\") pod \"coredns-7c65d6cfc9-62hpp\" (UID: \"4fc842ad-1e23-467e-83aa-5973b291e8ad\") " pod="kube-system/coredns-7c65d6cfc9-62hpp" Jul 12 00:11:30.652126 kubelet[2722]: I0712 00:11:30.652101 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvcs7\" (UniqueName: \"kubernetes.io/projected/8ceb7649-78f4-4cde-b59b-0722cb99a876-kube-api-access-hvcs7\") pod \"coredns-7c65d6cfc9-bjqv6\" (UID: \"8ceb7649-78f4-4cde-b59b-0722cb99a876\") " pod="kube-system/coredns-7c65d6cfc9-bjqv6" Jul 12 00:11:30.652126 kubelet[2722]: I0712 00:11:30.652121 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fc842ad-1e23-467e-83aa-5973b291e8ad-config-volume\") pod \"coredns-7c65d6cfc9-62hpp\" (UID: \"4fc842ad-1e23-467e-83aa-5973b291e8ad\") " pod="kube-system/coredns-7c65d6cfc9-62hpp" Jul 12 00:11:30.652126 kubelet[2722]: I0712 00:11:30.652145 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ceb7649-78f4-4cde-b59b-0722cb99a876-config-volume\") pod \"coredns-7c65d6cfc9-bjqv6\" (UID: \"8ceb7649-78f4-4cde-b59b-0722cb99a876\") " pod="kube-system/coredns-7c65d6cfc9-bjqv6" Jul 12 00:11:31.453195 kubelet[2722]: E0712 00:11:31.453134 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:31.458380 containerd[1585]: time="2025-07-12T00:11:31.458342126Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bjqv6,Uid:8ceb7649-78f4-4cde-b59b-0722cb99a876,Namespace:kube-system,Attempt:0,}" Jul 12 00:11:31.459765 kubelet[2722]: E0712 00:11:31.459733 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:31.460399 containerd[1585]: time="2025-07-12T00:11:31.460332251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-62hpp,Uid:4fc842ad-1e23-467e-83aa-5973b291e8ad,Namespace:kube-system,Attempt:0,}" Jul 12 00:11:31.547121 kubelet[2722]: E0712 00:11:31.547070 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:32.566408 kubelet[2722]: E0712 00:11:32.566363 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:32.903266 kubelet[2722]: I0712 00:11:32.903112 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8qglz" podStartSLOduration=26.078463049 podStartE2EDuration="44.903059978s" podCreationTimestamp="2025-07-12 00:10:48 +0000 UTC" firstStartedPulling="2025-07-12 00:10:51.249222822 +0000 UTC m=+11.449862518" lastFinishedPulling="2025-07-12 00:11:10.073819751 +0000 UTC m=+30.274459447" observedRunningTime="2025-07-12 00:11:31.330119346 +0000 UTC m=+51.530759052" watchObservedRunningTime="2025-07-12 00:11:32.903059978 +0000 UTC m=+53.103699684" Jul 12 00:11:34.089603 systemd[1]: Started sshd@7-10.0.0.57:22-10.0.0.1:32970.service - OpenSSH per-connection server daemon (10.0.0.1:32970). 
Jul 12 00:11:34.181076 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 32970 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:11:34.183220 sshd-session[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:11:34.193931 systemd-logind[1567]: New session 8 of user core. Jul 12 00:11:34.204103 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:11:34.421608 sshd[3513]: Connection closed by 10.0.0.1 port 32970 Jul 12 00:11:34.425052 sshd-session[3511]: pam_unix(sshd:session): session closed for user core Jul 12 00:11:34.429207 systemd[1]: sshd@7-10.0.0.57:22-10.0.0.1:32970.service: Deactivated successfully. Jul 12 00:11:34.433480 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:11:34.437728 systemd-logind[1567]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:11:34.439087 systemd-logind[1567]: Removed session 8. Jul 12 00:11:35.115938 systemd[1]: cri-containerd-7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2.scope: Deactivated successfully. Jul 12 00:11:35.117571 systemd[1]: cri-containerd-7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2.scope: Consumed 773ms CPU time, 71.4M memory peak, 128K read from disk, 13.3M written to disk. 
Jul 12 00:11:35.135189 containerd[1585]: time="2025-07-12T00:11:35.117952756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\" id:\"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\" pid:3349 exit_status:1 exited_at:{seconds:1752279095 nanos:117520760}" Jul 12 00:11:35.135189 containerd[1585]: time="2025-07-12T00:11:35.118228449Z" level=info msg="received exit event container_id:\"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\" id:\"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\" pid:3349 exit_status:1 exited_at:{seconds:1752279095 nanos:117520760}" Jul 12 00:11:35.238979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2-rootfs.mount: Deactivated successfully. Jul 12 00:11:35.582847 kubelet[2722]: I0712 00:11:35.582796 2722 scope.go:117] "RemoveContainer" containerID="7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2" Jul 12 00:11:35.583541 kubelet[2722]: E0712 00:11:35.582908 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:35.585383 containerd[1585]: time="2025-07-12T00:11:35.585335226Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for container &ContainerMetadata{Name:cilium-agent,Attempt:1,}" Jul 12 00:11:35.730738 kubelet[2722]: I0712 00:11:35.730482 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-x9ffs" podStartSLOduration=12.02692982 podStartE2EDuration="47.730464174s" podCreationTimestamp="2025-07-12 00:10:48 +0000 UTC" firstStartedPulling="2025-07-12 00:10:51.368950068 +0000 UTC m=+11.569589774" lastFinishedPulling="2025-07-12 00:11:27.072484422 
+0000 UTC m=+47.273124128" observedRunningTime="2025-07-12 00:11:33.658770117 +0000 UTC m=+53.859409853" watchObservedRunningTime="2025-07-12 00:11:35.730464174 +0000 UTC m=+55.931103880" Jul 12 00:11:35.813927 containerd[1585]: time="2025-07-12T00:11:35.813304057Z" level=info msg="Container 4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:11:36.050145 containerd[1585]: time="2025-07-12T00:11:36.050089605Z" level=info msg="CreateContainer within sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" for &ContainerMetadata{Name:cilium-agent,Attempt:1,} returns container id \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\"" Jul 12 00:11:36.050780 containerd[1585]: time="2025-07-12T00:11:36.050740012Z" level=info msg="StartContainer for \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\"" Jul 12 00:11:36.051933 containerd[1585]: time="2025-07-12T00:11:36.051905555Z" level=info msg="connecting to shim 4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b" address="unix:///run/containerd/s/eb2d37f10c0b6d9f0fbd5ce1ff2e90c377aa22c583d172139fddb13ccf50a335" protocol=ttrpc version=3 Jul 12 00:11:36.077052 systemd[1]: Started cri-containerd-4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b.scope - libcontainer container 4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b. 
Jul 12 00:11:36.194747 containerd[1585]: time="2025-07-12T00:11:36.194690831Z" level=info msg="StartContainer for \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" returns successfully" Jul 12 00:11:36.266000 containerd[1585]: time="2025-07-12T00:11:36.265924923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" id:\"b9411bd3cdfe1f931b68d994c2bd489e681b42137e965225336bca9909726b12\" pid:3588 exited_at:{seconds:1752279096 nanos:265566038}" Jul 12 00:11:36.585947 kubelet[2722]: E0712 00:11:36.585882 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:37.586723 kubelet[2722]: E0712 00:11:37.586677 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:38.548997 systemd-networkd[1507]: cilium_host: Link UP Jul 12 00:11:38.549181 systemd-networkd[1507]: cilium_net: Link UP Jul 12 00:11:38.549366 systemd-networkd[1507]: cilium_host: Gained carrier Jul 12 00:11:38.549547 systemd-networkd[1507]: cilium_net: Gained carrier Jul 12 00:11:38.591388 kubelet[2722]: E0712 00:11:38.589067 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:38.671108 systemd-networkd[1507]: cilium_vxlan: Link UP Jul 12 00:11:38.671121 systemd-networkd[1507]: cilium_vxlan: Gained carrier Jul 12 00:11:38.745142 systemd-networkd[1507]: cilium_host: Gained IPv6LL Jul 12 00:11:39.105069 systemd-networkd[1507]: cilium_net: Gained IPv6LL Jul 12 00:11:39.306922 kernel: NET: Registered PF_ALG protocol family Jul 12 00:11:39.437620 systemd[1]: Started sshd@8-10.0.0.57:22-10.0.0.1:47242.service - 
OpenSSH per-connection server daemon (10.0.0.1:47242). Jul 12 00:11:39.502766 sshd[3753]: Accepted publickey for core from 10.0.0.1 port 47242 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:11:39.503680 sshd-session[3753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:11:39.510932 systemd-logind[1567]: New session 9 of user core. Jul 12 00:11:39.521141 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:11:39.651716 sshd[3766]: Connection closed by 10.0.0.1 port 47242 Jul 12 00:11:39.652126 sshd-session[3753]: pam_unix(sshd:session): session closed for user core Jul 12 00:11:39.658565 systemd[1]: sshd@8-10.0.0.57:22-10.0.0.1:47242.service: Deactivated successfully. Jul 12 00:11:39.661062 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:11:39.663084 systemd-logind[1567]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:11:39.665804 systemd-logind[1567]: Removed session 9. Jul 12 00:11:39.874416 systemd-networkd[1507]: cilium_vxlan: Gained IPv6LL Jul 12 00:11:40.063298 systemd-networkd[1507]: lxc_health: Link UP Jul 12 00:11:40.073386 systemd-networkd[1507]: lxc_health: Gained carrier Jul 12 00:11:40.338917 kubelet[2722]: E0712 00:11:40.338816 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:40.442746 systemd-networkd[1507]: lxc7c81d84bcc1f: Link UP Jul 12 00:11:40.443899 kernel: eth0: renamed from tmpb998d Jul 12 00:11:40.444336 systemd-networkd[1507]: lxc7c81d84bcc1f: Gained carrier Jul 12 00:11:40.594022 systemd-networkd[1507]: lxc7d0e8f82bf3a: Link UP Jul 12 00:11:40.658954 kernel: eth0: renamed from tmp5d059 Jul 12 00:11:40.667688 systemd-networkd[1507]: lxc7d0e8f82bf3a: Gained carrier Jul 12 00:11:40.670158 kubelet[2722]: E0712 00:11:40.670130 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:11:41.153029 systemd-networkd[1507]: lxc_health: Gained IPv6LL Jul 12 00:11:41.665049 systemd-networkd[1507]: lxc7d0e8f82bf3a: Gained IPv6LL Jul 12 00:11:42.305154 systemd-networkd[1507]: lxc7c81d84bcc1f: Gained IPv6LL Jul 12 00:11:44.677853 systemd[1]: Started sshd@9-10.0.0.57:22-10.0.0.1:47254.service - OpenSSH per-connection server daemon (10.0.0.1:47254). Jul 12 00:11:44.733565 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 47254 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:11:44.735383 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:11:44.740664 systemd-logind[1567]: New session 10 of user core. Jul 12 00:11:44.751028 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:11:44.882315 sshd[4048]: Connection closed by 10.0.0.1 port 47254 Jul 12 00:11:44.882699 sshd-session[4046]: pam_unix(sshd:session): session closed for user core Jul 12 00:11:44.886501 systemd[1]: sshd@9-10.0.0.57:22-10.0.0.1:47254.service: Deactivated successfully. Jul 12 00:11:44.888945 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:11:44.890857 systemd-logind[1567]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:11:44.892715 systemd-logind[1567]: Removed session 10. 
Jul 12 00:11:45.274077 containerd[1585]: time="2025-07-12T00:11:45.274016851Z" level=info msg="connecting to shim b998deea78c263ad53d6d644cd760f1c3e9fba0659d4982e79fccadee78f1df2" address="unix:///run/containerd/s/973b1c3f42f1ce1c2e00bd4cb50ab83bc8ceac7d6d438552df605db434e27f6b" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:11:45.275490 containerd[1585]: time="2025-07-12T00:11:45.275440194Z" level=info msg="connecting to shim 5d059825617e7cabc322ec25209a2618bb391cd4954d69ca350dd3a7bc9579e6" address="unix:///run/containerd/s/079260ef95bd8fc9b08ea92a8f309ba24b64f98a8f0fb8e1b21ea735e7230fda" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:11:45.311222 systemd[1]: Started cri-containerd-5d059825617e7cabc322ec25209a2618bb391cd4954d69ca350dd3a7bc9579e6.scope - libcontainer container 5d059825617e7cabc322ec25209a2618bb391cd4954d69ca350dd3a7bc9579e6.
Jul 12 00:11:45.315663 systemd[1]: Started cri-containerd-b998deea78c263ad53d6d644cd760f1c3e9fba0659d4982e79fccadee78f1df2.scope - libcontainer container b998deea78c263ad53d6d644cd760f1c3e9fba0659d4982e79fccadee78f1df2.
Jul 12 00:11:45.330191 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 00:11:45.331547 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 00:11:45.498676 containerd[1585]: time="2025-07-12T00:11:45.498613705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bjqv6,Uid:8ceb7649-78f4-4cde-b59b-0722cb99a876,Namespace:kube-system,Attempt:0,} returns sandbox id \"b998deea78c263ad53d6d644cd760f1c3e9fba0659d4982e79fccadee78f1df2\""
Jul 12 00:11:45.499552 kubelet[2722]: E0712 00:11:45.499528 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:45.509805 containerd[1585]: time="2025-07-12T00:11:45.509754479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-62hpp,Uid:4fc842ad-1e23-467e-83aa-5973b291e8ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d059825617e7cabc322ec25209a2618bb391cd4954d69ca350dd3a7bc9579e6\""
Jul 12 00:11:45.510399 kubelet[2722]: E0712 00:11:45.510372 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:45.510479 containerd[1585]: time="2025-07-12T00:11:45.510372385Z" level=info msg="CreateContainer within sandbox \"b998deea78c263ad53d6d644cd760f1c3e9fba0659d4982e79fccadee78f1df2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:11:45.511824 containerd[1585]: time="2025-07-12T00:11:45.511782523Z" level=info msg="CreateContainer within sandbox \"5d059825617e7cabc322ec25209a2618bb391cd4954d69ca350dd3a7bc9579e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:11:45.891155 containerd[1585]: time="2025-07-12T00:11:45.891088352Z" level=info msg="Container e85103a0e6c92415b160023cb3a378e605b6751dafc8dde25da04f5f5340d13d: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:11:46.242014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564155221.mount: Deactivated successfully.
Jul 12 00:11:46.243681 containerd[1585]: time="2025-07-12T00:11:46.243621655Z" level=info msg="Container ef1af9a1745b10c85ddd2d99e8ab73f81d7b70a03703cd0e75fcf7790a51688e: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:11:46.467344 containerd[1585]: time="2025-07-12T00:11:46.467288573Z" level=info msg="CreateContainer within sandbox \"b998deea78c263ad53d6d644cd760f1c3e9fba0659d4982e79fccadee78f1df2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e85103a0e6c92415b160023cb3a378e605b6751dafc8dde25da04f5f5340d13d\""
Jul 12 00:11:46.468165 containerd[1585]: time="2025-07-12T00:11:46.468118035Z" level=info msg="StartContainer for \"e85103a0e6c92415b160023cb3a378e605b6751dafc8dde25da04f5f5340d13d\""
Jul 12 00:11:46.471315 containerd[1585]: time="2025-07-12T00:11:46.471270686Z" level=info msg="connecting to shim e85103a0e6c92415b160023cb3a378e605b6751dafc8dde25da04f5f5340d13d" address="unix:///run/containerd/s/973b1c3f42f1ce1c2e00bd4cb50ab83bc8ceac7d6d438552df605db434e27f6b" protocol=ttrpc version=3
Jul 12 00:11:46.497030 systemd[1]: Started cri-containerd-e85103a0e6c92415b160023cb3a378e605b6751dafc8dde25da04f5f5340d13d.scope - libcontainer container e85103a0e6c92415b160023cb3a378e605b6751dafc8dde25da04f5f5340d13d.
Jul 12 00:11:47.090459 containerd[1585]: time="2025-07-12T00:11:47.090405467Z" level=info msg="StartContainer for \"e85103a0e6c92415b160023cb3a378e605b6751dafc8dde25da04f5f5340d13d\" returns successfully"
Jul 12 00:11:47.090609 containerd[1585]: time="2025-07-12T00:11:47.090468167Z" level=info msg="CreateContainer within sandbox \"5d059825617e7cabc322ec25209a2618bb391cd4954d69ca350dd3a7bc9579e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef1af9a1745b10c85ddd2d99e8ab73f81d7b70a03703cd0e75fcf7790a51688e\""
Jul 12 00:11:47.091213 containerd[1585]: time="2025-07-12T00:11:47.091186615Z" level=info msg="StartContainer for \"ef1af9a1745b10c85ddd2d99e8ab73f81d7b70a03703cd0e75fcf7790a51688e\""
Jul 12 00:11:47.092285 containerd[1585]: time="2025-07-12T00:11:47.092244324Z" level=info msg="connecting to shim ef1af9a1745b10c85ddd2d99e8ab73f81d7b70a03703cd0e75fcf7790a51688e" address="unix:///run/containerd/s/079260ef95bd8fc9b08ea92a8f309ba24b64f98a8f0fb8e1b21ea735e7230fda" protocol=ttrpc version=3
Jul 12 00:11:47.114073 systemd[1]: Started cri-containerd-ef1af9a1745b10c85ddd2d99e8ab73f81d7b70a03703cd0e75fcf7790a51688e.scope - libcontainer container ef1af9a1745b10c85ddd2d99e8ab73f81d7b70a03703cd0e75fcf7790a51688e.
Jul 12 00:11:47.670744 containerd[1585]: time="2025-07-12T00:11:47.670683414Z" level=info msg="StartContainer for \"ef1af9a1745b10c85ddd2d99e8ab73f81d7b70a03703cd0e75fcf7790a51688e\" returns successfully"
Jul 12 00:11:48.096844 kubelet[2722]: E0712 00:11:48.096558 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:48.096844 kubelet[2722]: E0712 00:11:48.096731 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:48.636528 kubelet[2722]: I0712 00:11:48.636458 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-62hpp" podStartSLOduration=60.636436679 podStartE2EDuration="1m0.636436679s" podCreationTimestamp="2025-07-12 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:11:48.636073674 +0000 UTC m=+68.836713390" watchObservedRunningTime="2025-07-12 00:11:48.636436679 +0000 UTC m=+68.837076385"
Jul 12 00:11:48.636752 kubelet[2722]: I0712 00:11:48.636549 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bjqv6" podStartSLOduration=60.636544777 podStartE2EDuration="1m0.636544777s" podCreationTimestamp="2025-07-12 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:11:48.208609681 +0000 UTC m=+68.409249407" watchObservedRunningTime="2025-07-12 00:11:48.636544777 +0000 UTC m=+68.837184473"
Jul 12 00:11:49.099352 kubelet[2722]: E0712 00:11:49.098925 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:49.099352 kubelet[2722]: E0712 00:11:49.099121 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:49.894818 systemd[1]: Started sshd@10-10.0.0.57:22-10.0.0.1:58996.service - OpenSSH per-connection server daemon (10.0.0.1:58996).
Jul 12 00:11:49.958061 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 58996 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:11:49.959515 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:49.964140 systemd-logind[1567]: New session 11 of user core.
Jul 12 00:11:49.973013 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 12 00:11:50.091241 sshd[4227]: Connection closed by 10.0.0.1 port 58996
Jul 12 00:11:50.091585 sshd-session[4225]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:50.095624 systemd[1]: sshd@10-10.0.0.57:22-10.0.0.1:58996.service: Deactivated successfully.
Jul 12 00:11:50.097807 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:11:50.098793 systemd-logind[1567]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:11:50.100258 kubelet[2722]: E0712 00:11:50.100235 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:50.101428 kubelet[2722]: E0712 00:11:50.101080 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:50.101205 systemd-logind[1567]: Removed session 11.
Jul 12 00:11:51.102475 kubelet[2722]: E0712 00:11:51.102439 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:11:55.107201 systemd[1]: Started sshd@11-10.0.0.57:22-10.0.0.1:58998.service - OpenSSH per-connection server daemon (10.0.0.1:58998).
Jul 12 00:11:55.159274 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 58998 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:11:55.161244 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:55.166425 systemd-logind[1567]: New session 12 of user core.
Jul 12 00:11:55.180092 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 12 00:11:55.372864 sshd[4252]: Connection closed by 10.0.0.1 port 58998
Jul 12 00:11:55.373234 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:55.378238 systemd[1]: sshd@11-10.0.0.57:22-10.0.0.1:58998.service: Deactivated successfully.
Jul 12 00:11:55.380734 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:11:55.382050 systemd-logind[1567]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:11:55.383407 systemd-logind[1567]: Removed session 12.
Jul 12 00:12:00.387782 systemd[1]: Started sshd@12-10.0.0.57:22-10.0.0.1:33152.service - OpenSSH per-connection server daemon (10.0.0.1:33152).
Jul 12 00:12:00.454499 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 33152 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:00.456043 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:00.460750 systemd-logind[1567]: New session 13 of user core.
Jul 12 00:12:00.478113 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 00:12:00.635571 sshd[4269]: Connection closed by 10.0.0.1 port 33152
Jul 12 00:12:00.635982 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:00.641523 systemd[1]: sshd@12-10.0.0.57:22-10.0.0.1:33152.service: Deactivated successfully.
Jul 12 00:12:00.644086 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:12:00.645115 systemd-logind[1567]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:12:00.646991 systemd-logind[1567]: Removed session 13.
Jul 12 00:12:04.898063 kubelet[2722]: E0712 00:12:04.897969 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:12:05.652335 systemd[1]: Started sshd@13-10.0.0.57:22-10.0.0.1:33154.service - OpenSSH per-connection server daemon (10.0.0.1:33154).
Jul 12 00:12:05.710517 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 33154 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:05.712651 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:05.717824 systemd-logind[1567]: New session 14 of user core.
Jul 12 00:12:05.731050 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 00:12:05.852936 sshd[4286]: Connection closed by 10.0.0.1 port 33154
Jul 12 00:12:05.853344 sshd-session[4284]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:05.857769 systemd[1]: sshd@13-10.0.0.57:22-10.0.0.1:33154.service: Deactivated successfully.
Jul 12 00:12:05.860037 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:12:05.861172 systemd-logind[1567]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:12:05.862604 systemd-logind[1567]: Removed session 14.
Jul 12 00:12:06.897718 kubelet[2722]: E0712 00:12:06.897641 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:12:10.874064 systemd[1]: Started sshd@14-10.0.0.57:22-10.0.0.1:43534.service - OpenSSH per-connection server daemon (10.0.0.1:43534).
Jul 12 00:12:10.930096 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 43534 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:10.931731 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:10.937113 systemd-logind[1567]: New session 15 of user core.
Jul 12 00:12:10.948048 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 00:12:11.108715 sshd[4302]: Connection closed by 10.0.0.1 port 43534
Jul 12 00:12:11.109125 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:11.113692 systemd[1]: sshd@14-10.0.0.57:22-10.0.0.1:43534.service: Deactivated successfully.
Jul 12 00:12:11.115892 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:12:11.116849 systemd-logind[1567]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:12:11.118268 systemd-logind[1567]: Removed session 15.
Jul 12 00:12:11.897919 kubelet[2722]: E0712 00:12:11.897845 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:12:12.897224 kubelet[2722]: E0712 00:12:12.897168 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:12:16.133808 systemd[1]: Started sshd@15-10.0.0.57:22-10.0.0.1:46800.service - OpenSSH per-connection server daemon (10.0.0.1:46800).
Jul 12 00:12:16.190240 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 46800 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:16.192181 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:16.198231 systemd-logind[1567]: New session 16 of user core.
Jul 12 00:12:16.208125 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 00:12:16.323758 sshd[4319]: Connection closed by 10.0.0.1 port 46800
Jul 12 00:12:16.324191 sshd-session[4317]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:16.328753 systemd[1]: sshd@15-10.0.0.57:22-10.0.0.1:46800.service: Deactivated successfully.
Jul 12 00:12:16.331586 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:12:16.332713 systemd-logind[1567]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:12:16.334721 systemd-logind[1567]: Removed session 16.
Jul 12 00:12:21.345647 systemd[1]: Started sshd@16-10.0.0.57:22-10.0.0.1:46812.service - OpenSSH per-connection server daemon (10.0.0.1:46812).
Jul 12 00:12:21.422049 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 46812 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:21.423923 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:21.428482 systemd-logind[1567]: New session 17 of user core.
Jul 12 00:12:21.446061 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 00:12:21.575374 sshd[4335]: Connection closed by 10.0.0.1 port 46812
Jul 12 00:12:21.575714 sshd-session[4333]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:21.589047 systemd[1]: sshd@16-10.0.0.57:22-10.0.0.1:46812.service: Deactivated successfully.
Jul 12 00:12:21.591110 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:12:21.591991 systemd-logind[1567]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:12:21.595615 systemd[1]: Started sshd@17-10.0.0.57:22-10.0.0.1:46826.service - OpenSSH per-connection server daemon (10.0.0.1:46826).
Jul 12 00:12:21.596410 systemd-logind[1567]: Removed session 17.
Jul 12 00:12:21.647899 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 46826 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:21.649807 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:21.654500 systemd-logind[1567]: New session 18 of user core.
Jul 12 00:12:21.665209 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 00:12:21.816947 sshd[4351]: Connection closed by 10.0.0.1 port 46826
Jul 12 00:12:21.817154 sshd-session[4349]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:21.831488 systemd[1]: sshd@17-10.0.0.57:22-10.0.0.1:46826.service: Deactivated successfully.
Jul 12 00:12:21.835165 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 00:12:21.836957 systemd-logind[1567]: Session 18 logged out. Waiting for processes to exit.
Jul 12 00:12:21.843463 systemd[1]: Started sshd@18-10.0.0.57:22-10.0.0.1:46828.service - OpenSSH per-connection server daemon (10.0.0.1:46828).
Jul 12 00:12:21.844827 systemd-logind[1567]: Removed session 18.
Jul 12 00:12:21.900126 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 46828 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:21.901822 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:21.906574 systemd-logind[1567]: New session 19 of user core.
Jul 12 00:12:21.913004 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 00:12:22.023343 sshd[4366]: Connection closed by 10.0.0.1 port 46828
Jul 12 00:12:22.023695 sshd-session[4364]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:22.028016 systemd[1]: sshd@18-10.0.0.57:22-10.0.0.1:46828.service: Deactivated successfully.
Jul 12 00:12:22.030463 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 00:12:22.031391 systemd-logind[1567]: Session 19 logged out. Waiting for processes to exit.
Jul 12 00:12:22.032803 systemd-logind[1567]: Removed session 19.
Jul 12 00:12:27.037166 systemd[1]: Started sshd@19-10.0.0.57:22-10.0.0.1:49024.service - OpenSSH per-connection server daemon (10.0.0.1:49024).
Jul 12 00:12:27.092903 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 49024 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:27.094557 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:27.099443 systemd-logind[1567]: New session 20 of user core.
Jul 12 00:12:27.110011 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 12 00:12:27.232860 sshd[4384]: Connection closed by 10.0.0.1 port 49024
Jul 12 00:12:27.233275 sshd-session[4382]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:27.237456 systemd[1]: sshd@19-10.0.0.57:22-10.0.0.1:49024.service: Deactivated successfully.
Jul 12 00:12:27.239963 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 00:12:27.241788 systemd-logind[1567]: Session 20 logged out. Waiting for processes to exit.
Jul 12 00:12:27.243713 systemd-logind[1567]: Removed session 20.
Jul 12 00:12:32.255441 systemd[1]: Started sshd@20-10.0.0.57:22-10.0.0.1:49054.service - OpenSSH per-connection server daemon (10.0.0.1:49054).
Jul 12 00:12:32.313467 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 49054 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:32.315330 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:32.320546 systemd-logind[1567]: New session 21 of user core.
Jul 12 00:12:32.330040 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 12 00:12:32.492968 sshd[4399]: Connection closed by 10.0.0.1 port 49054
Jul 12 00:12:32.493225 sshd-session[4397]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:32.496940 systemd[1]: sshd@20-10.0.0.57:22-10.0.0.1:49054.service: Deactivated successfully.
Jul 12 00:12:32.499160 systemd[1]: session-21.scope: Deactivated successfully.
Jul 12 00:12:32.501049 systemd-logind[1567]: Session 21 logged out. Waiting for processes to exit.
Jul 12 00:12:32.502692 systemd-logind[1567]: Removed session 21.
Jul 12 00:12:37.506012 systemd[1]: Started sshd@21-10.0.0.57:22-10.0.0.1:60822.service - OpenSSH per-connection server daemon (10.0.0.1:60822).
Jul 12 00:12:37.561813 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 60822 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:37.565232 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:37.569936 systemd-logind[1567]: New session 22 of user core.
Jul 12 00:12:37.579167 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 12 00:12:37.689861 sshd[4415]: Connection closed by 10.0.0.1 port 60822
Jul 12 00:12:37.690185 sshd-session[4413]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:37.698898 systemd[1]: sshd@21-10.0.0.57:22-10.0.0.1:60822.service: Deactivated successfully.
Jul 12 00:12:37.701068 systemd[1]: session-22.scope: Deactivated successfully.
Jul 12 00:12:37.701772 systemd-logind[1567]: Session 22 logged out. Waiting for processes to exit.
Jul 12 00:12:37.705329 systemd[1]: Started sshd@22-10.0.0.57:22-10.0.0.1:60834.service - OpenSSH per-connection server daemon (10.0.0.1:60834).
Jul 12 00:12:37.706060 systemd-logind[1567]: Removed session 22.
Jul 12 00:12:37.766678 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 60834 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:37.768217 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:37.772908 systemd-logind[1567]: New session 23 of user core.
Jul 12 00:12:37.781023 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 12 00:12:38.487311 sshd[4430]: Connection closed by 10.0.0.1 port 60834
Jul 12 00:12:38.487954 sshd-session[4428]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:38.496812 systemd[1]: sshd@22-10.0.0.57:22-10.0.0.1:60834.service: Deactivated successfully.
Jul 12 00:12:38.498638 systemd[1]: session-23.scope: Deactivated successfully.
Jul 12 00:12:38.499417 systemd-logind[1567]: Session 23 logged out. Waiting for processes to exit.
Jul 12 00:12:38.502215 systemd[1]: Started sshd@23-10.0.0.57:22-10.0.0.1:60854.service - OpenSSH per-connection server daemon (10.0.0.1:60854).
Jul 12 00:12:38.503305 systemd-logind[1567]: Removed session 23.
Jul 12 00:12:38.564691 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 60854 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:38.566650 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:38.571651 systemd-logind[1567]: New session 24 of user core.
Jul 12 00:12:38.580020 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 12 00:12:41.609585 sshd[4443]: Connection closed by 10.0.0.1 port 60854
Jul 12 00:12:41.610025 sshd-session[4441]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:41.622772 systemd[1]: sshd@23-10.0.0.57:22-10.0.0.1:60854.service: Deactivated successfully.
Jul 12 00:12:41.624977 systemd[1]: session-24.scope: Deactivated successfully.
Jul 12 00:12:41.625820 systemd-logind[1567]: Session 24 logged out. Waiting for processes to exit.
Jul 12 00:12:41.629777 systemd[1]: Started sshd@24-10.0.0.57:22-10.0.0.1:60870.service - OpenSSH per-connection server daemon (10.0.0.1:60870).
Jul 12 00:12:41.630448 systemd-logind[1567]: Removed session 24.
Jul 12 00:12:41.681178 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 60870 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:41.682700 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:41.687281 systemd-logind[1567]: New session 25 of user core.
Jul 12 00:12:41.700037 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 12 00:12:42.103067 sshd[4468]: Connection closed by 10.0.0.1 port 60870
Jul 12 00:12:42.103398 sshd-session[4466]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:42.113550 systemd[1]: sshd@24-10.0.0.57:22-10.0.0.1:60870.service: Deactivated successfully.
Jul 12 00:12:42.115455 systemd[1]: session-25.scope: Deactivated successfully.
Jul 12 00:12:42.116327 systemd-logind[1567]: Session 25 logged out. Waiting for processes to exit.
Jul 12 00:12:42.119068 systemd[1]: Started sshd@25-10.0.0.57:22-10.0.0.1:60878.service - OpenSSH per-connection server daemon (10.0.0.1:60878).
Jul 12 00:12:42.119712 systemd-logind[1567]: Removed session 25.
Jul 12 00:12:42.172700 sshd[4480]: Accepted publickey for core from 10.0.0.1 port 60878 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:42.174428 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:42.179070 systemd-logind[1567]: New session 26 of user core.
Jul 12 00:12:42.187013 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 12 00:12:42.346205 sshd[4482]: Connection closed by 10.0.0.1 port 60878
Jul 12 00:12:42.346539 sshd-session[4480]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:42.351212 systemd[1]: sshd@25-10.0.0.57:22-10.0.0.1:60878.service: Deactivated successfully.
Jul 12 00:12:42.353351 systemd[1]: session-26.scope: Deactivated successfully.
Jul 12 00:12:42.354428 systemd-logind[1567]: Session 26 logged out. Waiting for processes to exit.
Jul 12 00:12:42.355831 systemd-logind[1567]: Removed session 26.
Jul 12 00:12:46.897966 kubelet[2722]: E0712 00:12:46.897902 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:12:47.363197 systemd[1]: Started sshd@26-10.0.0.57:22-10.0.0.1:58362.service - OpenSSH per-connection server daemon (10.0.0.1:58362).
Jul 12 00:12:47.425709 sshd[4495]: Accepted publickey for core from 10.0.0.1 port 58362 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:47.427823 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:47.433113 systemd-logind[1567]: New session 27 of user core.
Jul 12 00:12:47.444153 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 12 00:12:47.584994 sshd[4497]: Connection closed by 10.0.0.1 port 58362
Jul 12 00:12:47.585401 sshd-session[4495]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:47.590645 systemd[1]: sshd@26-10.0.0.57:22-10.0.0.1:58362.service: Deactivated successfully.
Jul 12 00:12:47.593007 systemd[1]: session-27.scope: Deactivated successfully.
Jul 12 00:12:47.594049 systemd-logind[1567]: Session 27 logged out. Waiting for processes to exit.
Jul 12 00:12:47.596367 systemd-logind[1567]: Removed session 27.
Jul 12 00:12:47.898491 kubelet[2722]: E0712 00:12:47.898425 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:12:52.602535 systemd[1]: Started sshd@27-10.0.0.57:22-10.0.0.1:58366.service - OpenSSH per-connection server daemon (10.0.0.1:58366).
Jul 12 00:12:52.656304 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 58366 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:52.658320 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:52.663833 systemd-logind[1567]: New session 28 of user core.
Jul 12 00:12:52.674215 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 12 00:12:52.786796 sshd[4518]: Connection closed by 10.0.0.1 port 58366
Jul 12 00:12:52.787146 sshd-session[4516]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:52.791675 systemd[1]: sshd@27-10.0.0.57:22-10.0.0.1:58366.service: Deactivated successfully.
Jul 12 00:12:52.794188 systemd[1]: session-28.scope: Deactivated successfully.
Jul 12 00:12:52.795029 systemd-logind[1567]: Session 28 logged out. Waiting for processes to exit.
Jul 12 00:12:52.796554 systemd-logind[1567]: Removed session 28.
Jul 12 00:12:57.809795 systemd[1]: Started sshd@28-10.0.0.57:22-10.0.0.1:59146.service - OpenSSH per-connection server daemon (10.0.0.1:59146).
Jul 12 00:12:57.876827 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 59146 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:12:57.878591 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:12:57.883948 systemd-logind[1567]: New session 29 of user core.
Jul 12 00:12:57.895023 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 12 00:12:58.017909 sshd[4536]: Connection closed by 10.0.0.1 port 59146
Jul 12 00:12:58.018279 sshd-session[4533]: pam_unix(sshd:session): session closed for user core
Jul 12 00:12:58.023978 systemd[1]: sshd@28-10.0.0.57:22-10.0.0.1:59146.service: Deactivated successfully.
Jul 12 00:12:58.026428 systemd[1]: session-29.scope: Deactivated successfully.
Jul 12 00:12:58.027679 systemd-logind[1567]: Session 29 logged out. Waiting for processes to exit.
Jul 12 00:12:58.029263 systemd-logind[1567]: Removed session 29.
Jul 12 00:13:03.033062 systemd[1]: Started sshd@29-10.0.0.57:22-10.0.0.1:59158.service - OpenSSH per-connection server daemon (10.0.0.1:59158).
Jul 12 00:13:03.079628 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 59158 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:13:03.081558 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:13:03.087056 systemd-logind[1567]: New session 30 of user core.
Jul 12 00:13:03.099183 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 12 00:13:03.208798 sshd[4552]: Connection closed by 10.0.0.1 port 59158
Jul 12 00:13:03.209138 sshd-session[4550]: pam_unix(sshd:session): session closed for user core
Jul 12 00:13:03.223487 systemd[1]: sshd@29-10.0.0.57:22-10.0.0.1:59158.service: Deactivated successfully.
Jul 12 00:13:03.225378 systemd[1]: session-30.scope: Deactivated successfully.
Jul 12 00:13:03.226271 systemd-logind[1567]: Session 30 logged out. Waiting for processes to exit.
Jul 12 00:13:03.229563 systemd[1]: Started sshd@30-10.0.0.57:22-10.0.0.1:59162.service - OpenSSH per-connection server daemon (10.0.0.1:59162).
Jul 12 00:13:03.230256 systemd-logind[1567]: Removed session 30.
Jul 12 00:13:03.285865 sshd[4565]: Accepted publickey for core from 10.0.0.1 port 59162 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:13:03.287673 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:13:03.292376 systemd-logind[1567]: New session 31 of user core.
Jul 12 00:13:03.298999 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 12 00:13:05.596766 containerd[1585]: time="2025-07-12T00:13:05.596692045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" id:\"93bc36007e438fffda96bbb1de0ed38cc07a63cd15ffc1df99e41ba7e75013c9\" pid:4588 exited_at:{seconds:1752279185 nanos:595784053}"
Jul 12 00:13:05.624297 containerd[1585]: time="2025-07-12T00:13:05.624191311Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:13:05.624451 containerd[1585]: time="2025-07-12T00:13:05.624331925Z" level=info msg="StopContainer for \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" with timeout 30 (s)"
Jul 12 00:13:05.624451 containerd[1585]: time="2025-07-12T00:13:05.624377260Z" level=info msg="StopContainer for \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" with timeout 2 (s)"
Jul 12 00:13:05.641801 containerd[1585]: time="2025-07-12T00:13:05.641741270Z" level=info msg="Stop container \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" with signal terminated"
Jul 12 00:13:05.642822 containerd[1585]: time="2025-07-12T00:13:05.642790839Z" level=info msg="Stop container \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" with signal terminated"
Jul 12 00:13:05.651035 systemd-networkd[1507]: lxc_health: Link DOWN
Jul 12 00:13:05.651060 systemd-networkd[1507]: lxc_health: Lost carrier
Jul 12 00:13:05.676428 systemd[1]: cri-containerd-fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41.scope: Deactivated successfully.
Jul 12 00:13:05.678420 containerd[1585]: time="2025-07-12T00:13:05.678348153Z" level=info msg="received exit event container_id:\"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" id:\"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" pid:3459 exited_at:{seconds:1752279185 nanos:677942027}"
Jul 12 00:13:05.678420 containerd[1585]: time="2025-07-12T00:13:05.678394700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" id:\"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" pid:3459 exited_at:{seconds:1752279185 nanos:677942027}"
Jul 12 00:13:05.708538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41-rootfs.mount: Deactivated successfully.
Jul 12 00:13:05.812693 systemd[1]: cri-containerd-4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b.scope: Deactivated successfully.
Jul 12 00:13:05.813140 systemd[1]: cri-containerd-4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b.scope: Consumed 7.000s CPU time, 99M memory peak, 6.5M read from disk, 13.3M written to disk.
Jul 12 00:13:05.815657 containerd[1585]: time="2025-07-12T00:13:05.815598983Z" level=info msg="received exit event container_id:\"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" id:\"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" pid:3551 exited_at:{seconds:1752279185 nanos:815317793}"
Jul 12 00:13:05.816002 containerd[1585]: time="2025-07-12T00:13:05.815950877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" id:\"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" pid:3551 exited_at:{seconds:1752279185 nanos:815317793}"
Jul 12 00:13:05.840845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b-rootfs.mount: Deactivated successfully.
Jul 12 00:13:06.409686 containerd[1585]: time="2025-07-12T00:13:06.409630755Z" level=info msg="StopContainer for \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" returns successfully"
Jul 12 00:13:06.410572 containerd[1585]: time="2025-07-12T00:13:06.410486699Z" level=info msg="StopPodSandbox for \"2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00\""
Jul 12 00:13:06.410572 containerd[1585]: time="2025-07-12T00:13:06.410567652Z" level=info msg="Container to stop \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:13:06.419454 systemd[1]: cri-containerd-2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00.scope: Deactivated successfully.
Jul 12 00:13:06.420721 containerd[1585]: time="2025-07-12T00:13:06.420682288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00\" id:\"2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00\" pid:2949 exit_status:137 exited_at:{seconds:1752279186 nanos:420037762}"
Jul 12 00:13:06.451105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00-rootfs.mount: Deactivated successfully.
Jul 12 00:13:06.490292 containerd[1585]: time="2025-07-12T00:13:06.490232497Z" level=info msg="StopContainer for \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" returns successfully"
Jul 12 00:13:06.491050 containerd[1585]: time="2025-07-12T00:13:06.490949889Z" level=info msg="StopPodSandbox for \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\""
Jul 12 00:13:06.491050 containerd[1585]: time="2025-07-12T00:13:06.491039679Z" level=info msg="Container to stop \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:13:06.491144 containerd[1585]: time="2025-07-12T00:13:06.491059937Z" level=info msg="Container to stop \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:13:06.491144 containerd[1585]: time="2025-07-12T00:13:06.491071569Z" level=info msg="Container to stop \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:13:06.491144 containerd[1585]: time="2025-07-12T00:13:06.491085595Z" level=info msg="Container to stop \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:13:06.491144 containerd[1585]: time="2025-07-12T00:13:06.491096886Z" level=info msg="Container to stop \"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:13:06.491144 containerd[1585]: time="2025-07-12T00:13:06.491109259Z" level=info msg="Container to stop \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:13:06.498660 systemd[1]: cri-containerd-b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818.scope: Deactivated successfully.
Jul 12 00:13:06.525833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818-rootfs.mount: Deactivated successfully.
Jul 12 00:13:06.649502 containerd[1585]: time="2025-07-12T00:13:06.649406420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" id:\"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" pid:2900 exit_status:137 exited_at:{seconds:1752279186 nanos:503158204}"
Jul 12 00:13:06.650975 containerd[1585]: time="2025-07-12T00:13:06.650747429Z" level=info msg="shim disconnected" id=2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00 namespace=k8s.io
Jul 12 00:13:06.650975 containerd[1585]: time="2025-07-12T00:13:06.650772317Z" level=warning msg="cleaning up after shim disconnected" id=2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00 namespace=k8s.io
Jul 12 00:13:06.651614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00-shm.mount: Deactivated successfully.
Jul 12 00:13:06.703485 containerd[1585]: time="2025-07-12T00:13:06.650779660Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:13:06.703570 containerd[1585]: time="2025-07-12T00:13:06.668129942Z" level=info msg="received exit event sandbox_id:\"2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00\" exit_status:137 exited_at:{seconds:1752279186 nanos:420037762}"
Jul 12 00:13:06.703570 containerd[1585]: time="2025-07-12T00:13:06.679512910Z" level=info msg="TearDown network for sandbox \"2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00\" successfully"
Jul 12 00:13:06.703646 containerd[1585]: time="2025-07-12T00:13:06.703572775Z" level=info msg="StopPodSandbox for \"2cf7a33f99843260d32e42af649511112dcaeb387e2713605a1de1173a045c00\" returns successfully"
Jul 12 00:13:06.754423 containerd[1585]: time="2025-07-12T00:13:06.753400327Z" level=info msg="received exit event sandbox_id:\"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" exit_status:137 exited_at:{seconds:1752279186 nanos:503158204}"
Jul 12 00:13:06.754423 containerd[1585]: time="2025-07-12T00:13:06.753701996Z" level=info msg="shim disconnected" id=b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818 namespace=k8s.io
Jul 12 00:13:06.754423 containerd[1585]: time="2025-07-12T00:13:06.753735500Z" level=warning msg="cleaning up after shim disconnected" id=b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818 namespace=k8s.io
Jul 12 00:13:06.754423 containerd[1585]: time="2025-07-12T00:13:06.753745809Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:13:06.754981 containerd[1585]: time="2025-07-12T00:13:06.754948157Z" level=info msg="TearDown network for sandbox \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" successfully"
Jul 12 00:13:06.755141 containerd[1585]: time="2025-07-12T00:13:06.755069556Z" level=info msg="StopPodSandbox for \"b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818\" returns successfully"
Jul 12 00:13:06.757523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b91defa41c61f2e6ccaddd08e0810ca52468c4c03bfc5f760c49fc303c3a7818-shm.mount: Deactivated successfully.
Jul 12 00:13:06.767838 sshd[4567]: Connection closed by 10.0.0.1 port 59162
Jul 12 00:13:06.769775 sshd-session[4565]: pam_unix(sshd:session): session closed for user core
Jul 12 00:13:06.777848 systemd[1]: sshd@30-10.0.0.57:22-10.0.0.1:59162.service: Deactivated successfully.
Jul 12 00:13:06.779911 systemd[1]: session-31.scope: Deactivated successfully.
Jul 12 00:13:06.781019 systemd-logind[1567]: Session 31 logged out. Waiting for processes to exit.
Jul 12 00:13:06.785189 systemd[1]: Started sshd@31-10.0.0.57:22-10.0.0.1:43542.service - OpenSSH per-connection server daemon (10.0.0.1:43542).
Jul 12 00:13:06.786072 systemd-logind[1567]: Removed session 31.
Jul 12 00:13:06.818012 kubelet[2722]: I0712 00:13:06.817969    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-run\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818012 kubelet[2722]: I0712 00:13:06.818005    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-host-proc-sys-net\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818507 kubelet[2722]: I0712 00:13:06.818029    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbh9f\" (UniqueName: \"kubernetes.io/projected/01eb5897-18aa-4dbe-945c-a323f721c1d4-kube-api-access-nbh9f\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818507 kubelet[2722]: I0712 00:13:06.818042    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cni-path\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818507 kubelet[2722]: I0712 00:13:06.818059    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-config-path\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818507 kubelet[2722]: I0712 00:13:06.818078    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/446f5b44-6e01-4c1f-961b-905fb950a9dd-cilium-config-path\") pod \"446f5b44-6e01-4c1f-961b-905fb950a9dd\" (UID: \"446f5b44-6e01-4c1f-961b-905fb950a9dd\") "
Jul 12 00:13:06.818507 kubelet[2722]: I0712 00:13:06.818075    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:13:06.818676 kubelet[2722]: I0712 00:13:06.818117    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:13:06.818676 kubelet[2722]: I0712 00:13:06.818091    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-etc-cni-netd\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818676 kubelet[2722]: I0712 00:13:06.818142    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-bpf-maps\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818676 kubelet[2722]: I0712 00:13:06.818163    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01eb5897-18aa-4dbe-945c-a323f721c1d4-clustermesh-secrets\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818676 kubelet[2722]: I0712 00:13:06.818177    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-host-proc-sys-kernel\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818676 kubelet[2722]: I0712 00:13:06.818204    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-xtables-lock\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818915 kubelet[2722]: I0712 00:13:06.818219    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-hostproc\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818915 kubelet[2722]: I0712 00:13:06.818234    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-cgroup\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818915 kubelet[2722]: I0712 00:13:06.818250    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01eb5897-18aa-4dbe-945c-a323f721c1d4-hubble-tls\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818915 kubelet[2722]: I0712 00:13:06.818263    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-lib-modules\") pod \"01eb5897-18aa-4dbe-945c-a323f721c1d4\" (UID: \"01eb5897-18aa-4dbe-945c-a323f721c1d4\") "
Jul 12 00:13:06.818915 kubelet[2722]: I0712 00:13:06.818279    2722 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mjsb\" (UniqueName: \"kubernetes.io/projected/446f5b44-6e01-4c1f-961b-905fb950a9dd-kube-api-access-6mjsb\") pod \"446f5b44-6e01-4c1f-961b-905fb950a9dd\" (UID: \"446f5b44-6e01-4c1f-961b-905fb950a9dd\") "
Jul 12 00:13:06.818915 kubelet[2722]: I0712 00:13:06.818311    2722 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.818915 kubelet[2722]: I0712 00:13:06.818466    2722 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.822156 kubelet[2722]: I0712 00:13:06.818138    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:13:06.822156 kubelet[2722]: I0712 00:13:06.818804    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cni-path" (OuterVolumeSpecName: "cni-path") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:13:06.822156 kubelet[2722]: I0712 00:13:06.818849    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:13:06.822156 kubelet[2722]: I0712 00:13:06.822041    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 12 00:13:06.822339 kubelet[2722]: I0712 00:13:06.822080    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:13:06.822339 kubelet[2722]: I0712 00:13:06.822098    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:13:06.822339 kubelet[2722]: I0712 00:13:06.822114    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-hostproc" (OuterVolumeSpecName: "hostproc") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:13:06.822339 kubelet[2722]: I0712 00:13:06.822208    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:13:06.823530 kubelet[2722]: I0712 00:13:06.823370    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:13:06.824673 kubelet[2722]: I0712 00:13:06.824606    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446f5b44-6e01-4c1f-961b-905fb950a9dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "446f5b44-6e01-4c1f-961b-905fb950a9dd" (UID: "446f5b44-6e01-4c1f-961b-905fb950a9dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 12 00:13:06.827946 kubelet[2722]: I0712 00:13:06.826125    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01eb5897-18aa-4dbe-945c-a323f721c1d4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 12 00:13:06.827946 kubelet[2722]: I0712 00:13:06.827616    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01eb5897-18aa-4dbe-945c-a323f721c1d4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 12 00:13:06.827433 systemd[1]: var-lib-kubelet-pods-446f5b44\x2d6e01\x2d4c1f\x2d961b\x2d905fb950a9dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6mjsb.mount: Deactivated successfully.
Jul 12 00:13:06.827563 systemd[1]: var-lib-kubelet-pods-01eb5897\x2d18aa\x2d4dbe\x2d945c\x2da323f721c1d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnbh9f.mount: Deactivated successfully.
Jul 12 00:13:06.827644 systemd[1]: var-lib-kubelet-pods-01eb5897\x2d18aa\x2d4dbe\x2d945c\x2da323f721c1d4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 12 00:13:06.828834 kubelet[2722]: I0712 00:13:06.828785    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446f5b44-6e01-4c1f-961b-905fb950a9dd-kube-api-access-6mjsb" (OuterVolumeSpecName: "kube-api-access-6mjsb") pod "446f5b44-6e01-4c1f-961b-905fb950a9dd" (UID: "446f5b44-6e01-4c1f-961b-905fb950a9dd"). InnerVolumeSpecName "kube-api-access-6mjsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 12 00:13:06.831007 kubelet[2722]: I0712 00:13:06.830970    2722 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01eb5897-18aa-4dbe-945c-a323f721c1d4-kube-api-access-nbh9f" (OuterVolumeSpecName: "kube-api-access-nbh9f") pod "01eb5897-18aa-4dbe-945c-a323f721c1d4" (UID: "01eb5897-18aa-4dbe-945c-a323f721c1d4"). InnerVolumeSpecName "kube-api-access-nbh9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 12 00:13:06.831452 systemd[1]: var-lib-kubelet-pods-01eb5897\x2d18aa\x2d4dbe\x2d945c\x2da323f721c1d4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 12 00:13:06.836755 sshd[4714]: Accepted publickey for core from 10.0.0.1 port 43542 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA
Jul 12 00:13:06.838517 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:13:06.843189 systemd-logind[1567]: New session 32 of user core.
Jul 12 00:13:06.854051 systemd[1]: Started session-32.scope - Session 32 of User core.
Jul 12 00:13:06.919266 kubelet[2722]: I0712 00:13:06.919173    2722 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919266 kubelet[2722]: I0712 00:13:06.919241    2722 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919266 kubelet[2722]: I0712 00:13:06.919255    2722 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01eb5897-18aa-4dbe-945c-a323f721c1d4-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919266 kubelet[2722]: I0712 00:13:06.919266    2722 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919266 kubelet[2722]: I0712 00:13:06.919280    2722 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mjsb\" (UniqueName: \"kubernetes.io/projected/446f5b44-6e01-4c1f-961b-905fb950a9dd-kube-api-access-6mjsb\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919625 kubelet[2722]: I0712 00:13:06.919293    2722 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919625 kubelet[2722]: I0712 00:13:06.919304    2722 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbh9f\" (UniqueName: \"kubernetes.io/projected/01eb5897-18aa-4dbe-945c-a323f721c1d4-kube-api-access-nbh9f\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919625 kubelet[2722]: I0712 00:13:06.919315    2722 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919625 kubelet[2722]: I0712 00:13:06.919326    2722 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01eb5897-18aa-4dbe-945c-a323f721c1d4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919625 kubelet[2722]: I0712 00:13:06.919337    2722 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/446f5b44-6e01-4c1f-961b-905fb950a9dd-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919625 kubelet[2722]: I0712 00:13:06.919348    2722 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919625 kubelet[2722]: I0712 00:13:06.919362    2722 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01eb5897-18aa-4dbe-945c-a323f721c1d4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919625 kubelet[2722]: I0712 00:13:06.919373    2722 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:06.919908 kubelet[2722]: I0712 00:13:06.919387    2722 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01eb5897-18aa-4dbe-945c-a323f721c1d4-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 12 00:13:07.302349 kubelet[2722]: I0712 00:13:07.302242    2722 scope.go:117] "RemoveContainer" containerID="4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b"
Jul 12 00:13:07.304762 containerd[1585]: time="2025-07-12T00:13:07.304692721Z" level=info msg="RemoveContainer for \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\""
Jul 12 00:13:07.310008 systemd[1]: Removed slice kubepods-burstable-pod01eb5897_18aa_4dbe_945c_a323f721c1d4.slice - libcontainer container kubepods-burstable-pod01eb5897_18aa_4dbe_945c_a323f721c1d4.slice.
Jul 12 00:13:07.310145 systemd[1]: kubepods-burstable-pod01eb5897_18aa_4dbe_945c_a323f721c1d4.slice: Consumed 7.895s CPU time, 143.5M memory peak, 6.7M read from disk, 26.7M written to disk.
Jul 12 00:13:07.312181 systemd[1]: Removed slice kubepods-besteffort-pod446f5b44_6e01_4c1f_961b_905fb950a9dd.slice - libcontainer container kubepods-besteffort-pod446f5b44_6e01_4c1f_961b_905fb950a9dd.slice.
Jul 12 00:13:07.485428 containerd[1585]: time="2025-07-12T00:13:07.485354213Z" level=info msg="RemoveContainer for \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" returns successfully"
Jul 12 00:13:07.485783 kubelet[2722]: I0712 00:13:07.485746    2722 scope.go:117] "RemoveContainer" containerID="7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2"
Jul 12 00:13:07.488057 containerd[1585]: time="2025-07-12T00:13:07.488003770Z" level=info msg="RemoveContainer for \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\""
Jul 12 00:13:07.584691 containerd[1585]: time="2025-07-12T00:13:07.584539814Z" level=info msg="RemoveContainer for \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\" returns successfully"
Jul 12 00:13:07.585086 kubelet[2722]: I0712 00:13:07.585047    2722 scope.go:117] "RemoveContainer" containerID="847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817"
Jul 12 00:13:07.586341 containerd[1585]: time="2025-07-12T00:13:07.586315162Z" level=info msg="RemoveContainer for \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\""
Jul 12 00:13:07.689865 containerd[1585]: time="2025-07-12T00:13:07.689795000Z" level=info msg="RemoveContainer for \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\" returns successfully"
Jul 12 00:13:07.690357 kubelet[2722]: I0712 00:13:07.690093    2722 scope.go:117] "RemoveContainer" containerID="d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6"
Jul 12 00:13:07.692344 containerd[1585]: time="2025-07-12T00:13:07.692308411Z" level=info msg="RemoveContainer for \"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\""
Jul 12 00:13:07.772696 containerd[1585]: time="2025-07-12T00:13:07.772647228Z" level=info msg="RemoveContainer for \"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\" returns successfully"
Jul 12 00:13:07.773001 kubelet[2722]: I0712 00:13:07.772954    2722 scope.go:117] "RemoveContainer" containerID="8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18"
Jul 12 00:13:07.774135 containerd[1585]: time="2025-07-12T00:13:07.774104045Z" level=info msg="RemoveContainer for \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\""
Jul 12 00:13:07.881929 containerd[1585]: time="2025-07-12T00:13:07.881713482Z" level=info msg="RemoveContainer for \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\" returns successfully"
Jul 12 00:13:07.882157 kubelet[2722]: I0712 00:13:07.882109    2722 scope.go:117] "RemoveContainer" containerID="87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf"
Jul 12 00:13:07.884152 containerd[1585]: time="2025-07-12T00:13:07.884114260Z" level=info msg="RemoveContainer for \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\""
Jul 12 00:13:07.901022 kubelet[2722]: I0712 00:13:07.900961    2722 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01eb5897-18aa-4dbe-945c-a323f721c1d4" path="/var/lib/kubelet/pods/01eb5897-18aa-4dbe-945c-a323f721c1d4/volumes"
Jul 12 00:13:08.040463 containerd[1585]: time="2025-07-12T00:13:08.040407865Z" level=info msg="RemoveContainer for \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\" returns successfully"
Jul 12 00:13:08.040722 kubelet[2722]: I0712 00:13:08.040685    2722 scope.go:117] "RemoveContainer" containerID="4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b"
Jul 12 00:13:08.043981 containerd[1585]: time="2025-07-12T00:13:08.041029006Z" level=error msg="ContainerStatus for \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\": not found"
Jul 12 00:13:08.065328 kubelet[2722]: E0712 00:13:08.065229    2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
an error occurred when try to find container \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\": not found" containerID="4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b" Jul 12 00:13:08.065466 kubelet[2722]: I0712 00:13:08.065342 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b"} err="failed to get container status \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\": rpc error: code = NotFound desc = an error occurred when try to find container \"4cfb7d0b717d3f31671405dc46991e0632b2a473acdf1fa93f83ca82b6b4275b\": not found" Jul 12 00:13:08.065466 kubelet[2722]: I0712 00:13:08.065460 2722 scope.go:117] "RemoveContainer" containerID="7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2" Jul 12 00:13:08.065861 containerd[1585]: time="2025-07-12T00:13:08.065810478Z" level=error msg="ContainerStatus for \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\": not found" Jul 12 00:13:08.066102 kubelet[2722]: E0712 00:13:08.066054 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\": not found" containerID="7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2" Jul 12 00:13:08.066141 kubelet[2722]: I0712 00:13:08.066111 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2"} err="failed to get container status \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"7a4965e3e3c11db63cdf65df7f3d51db1c66f6271434afe4c66b8e96d281d1e2\": not found" Jul 12 00:13:08.066179 kubelet[2722]: I0712 00:13:08.066145 2722 scope.go:117] "RemoveContainer" containerID="847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817" Jul 12 00:13:08.066397 containerd[1585]: time="2025-07-12T00:13:08.066364193Z" level=error msg="ContainerStatus for \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\": not found" Jul 12 00:13:08.066535 kubelet[2722]: E0712 00:13:08.066489 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\": not found" containerID="847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817" Jul 12 00:13:08.066535 kubelet[2722]: I0712 00:13:08.066521 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817"} err="failed to get container status \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\": rpc error: code = NotFound desc = an error occurred when try to find container \"847baf99478f5635970a4e0087f060cda66c5c5395ca71abb9edf491bdf2f817\": not found" Jul 12 00:13:08.066674 kubelet[2722]: I0712 00:13:08.066541 2722 scope.go:117] "RemoveContainer" containerID="d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6" Jul 12 00:13:08.066819 containerd[1585]: time="2025-07-12T00:13:08.066780528Z" level=error msg="ContainerStatus for \"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\": not found" Jul 12 00:13:08.066979 kubelet[2722]: E0712 00:13:08.066946 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\": not found" containerID="d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6" Jul 12 00:13:08.066979 kubelet[2722]: I0712 00:13:08.066970 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6"} err="failed to get container status \"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1b5585aff6b6c917ffb6dba1ecbda6a274e2522ef6bdcc4de0f0ccf94b239d6\": not found" Jul 12 00:13:08.067075 kubelet[2722]: I0712 00:13:08.066986 2722 scope.go:117] "RemoveContainer" containerID="8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18" Jul 12 00:13:08.067154 containerd[1585]: time="2025-07-12T00:13:08.067119226Z" level=error msg="ContainerStatus for \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\": not found" Jul 12 00:13:08.067254 kubelet[2722]: E0712 00:13:08.067222 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\": not found" containerID="8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18" Jul 12 00:13:08.067296 kubelet[2722]: I0712 00:13:08.067250 2722 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18"} err="failed to get container status \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\": rpc error: code = NotFound desc = an error occurred when try to find container \"8737f45cce957deedee4f535bca300258bdec15cbbb88d7c3e54464a3499ce18\": not found" Jul 12 00:13:08.067296 kubelet[2722]: I0712 00:13:08.067269 2722 scope.go:117] "RemoveContainer" containerID="87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf" Jul 12 00:13:08.067572 containerd[1585]: time="2025-07-12T00:13:08.067522627Z" level=error msg="ContainerStatus for \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\": not found" Jul 12 00:13:08.067698 kubelet[2722]: E0712 00:13:08.067671 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\": not found" containerID="87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf" Jul 12 00:13:08.067735 kubelet[2722]: I0712 00:13:08.067695 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf"} err="failed to get container status \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\": rpc error: code = NotFound desc = an error occurred when try to find container \"87f0ba5e3fb29d0a0b33e1375056c62169e290342b0672a8b66cec0c2cc2eccf\": not found" Jul 12 00:13:08.067735 kubelet[2722]: I0712 00:13:08.067710 2722 scope.go:117] "RemoveContainer" containerID="fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41" Jul 12 00:13:08.069634 containerd[1585]: 
time="2025-07-12T00:13:08.069595175Z" level=info msg="RemoveContainer for \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\"" Jul 12 00:13:08.195708 containerd[1585]: time="2025-07-12T00:13:08.195538948Z" level=info msg="RemoveContainer for \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" returns successfully" Jul 12 00:13:08.196468 kubelet[2722]: I0712 00:13:08.195850 2722 scope.go:117] "RemoveContainer" containerID="fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41" Jul 12 00:13:08.196468 kubelet[2722]: E0712 00:13:08.196278 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\": not found" containerID="fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41" Jul 12 00:13:08.196468 kubelet[2722]: I0712 00:13:08.196304 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41"} err="failed to get container status \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\": not found" Jul 12 00:13:08.196574 containerd[1585]: time="2025-07-12T00:13:08.196125223Z" level=error msg="ContainerStatus for \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc17f8d5dd0dca37a3ba62e10991821697d5c8631130bade71933803f811ef41\": not found" Jul 12 00:13:08.683087 sshd[4721]: Connection closed by 10.0.0.1 port 43542 Jul 12 00:13:08.683490 sshd-session[4714]: pam_unix(sshd:session): session closed for user core Jul 12 00:13:08.692440 systemd[1]: 
sshd@31-10.0.0.57:22-10.0.0.1:43542.service: Deactivated successfully. Jul 12 00:13:08.694474 systemd[1]: session-32.scope: Deactivated successfully. Jul 12 00:13:08.695360 systemd-logind[1567]: Session 32 logged out. Waiting for processes to exit. Jul 12 00:13:08.699373 systemd[1]: Started sshd@32-10.0.0.57:22-10.0.0.1:43556.service - OpenSSH per-connection server daemon (10.0.0.1:43556). Jul 12 00:13:08.700121 systemd-logind[1567]: Removed session 32. Jul 12 00:13:08.750745 sshd[4733]: Accepted publickey for core from 10.0.0.1 port 43556 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:13:08.752446 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:13:08.757154 systemd-logind[1567]: New session 33 of user core. Jul 12 00:13:08.766043 systemd[1]: Started session-33.scope - Session 33 of User core. Jul 12 00:13:08.820369 sshd[4735]: Connection closed by 10.0.0.1 port 43556 Jul 12 00:13:08.820799 sshd-session[4733]: pam_unix(sshd:session): session closed for user core Jul 12 00:13:08.841836 systemd[1]: sshd@32-10.0.0.57:22-10.0.0.1:43556.service: Deactivated successfully. Jul 12 00:13:08.843839 systemd[1]: session-33.scope: Deactivated successfully. Jul 12 00:13:08.844733 systemd-logind[1567]: Session 33 logged out. Waiting for processes to exit. Jul 12 00:13:08.847802 systemd[1]: Started sshd@33-10.0.0.57:22-10.0.0.1:43572.service - OpenSSH per-connection server daemon (10.0.0.1:43572). Jul 12 00:13:08.848792 systemd-logind[1567]: Removed session 33. Jul 12 00:13:08.908198 sshd[4742]: Accepted publickey for core from 10.0.0.1 port 43572 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:13:08.910333 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:13:08.915736 systemd-logind[1567]: New session 34 of user core. Jul 12 00:13:08.925017 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jul 12 00:13:09.074577 kubelet[2722]: E0712 00:13:09.074505 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01eb5897-18aa-4dbe-945c-a323f721c1d4" containerName="clean-cilium-state" Jul 12 00:13:09.074577 kubelet[2722]: E0712 00:13:09.074545 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01eb5897-18aa-4dbe-945c-a323f721c1d4" containerName="cilium-agent" Jul 12 00:13:09.074577 kubelet[2722]: E0712 00:13:09.074552 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="446f5b44-6e01-4c1f-961b-905fb950a9dd" containerName="cilium-operator" Jul 12 00:13:09.074577 kubelet[2722]: E0712 00:13:09.074559 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01eb5897-18aa-4dbe-945c-a323f721c1d4" containerName="mount-cgroup" Jul 12 00:13:09.074577 kubelet[2722]: E0712 00:13:09.074565 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01eb5897-18aa-4dbe-945c-a323f721c1d4" containerName="mount-bpf-fs" Jul 12 00:13:09.074577 kubelet[2722]: E0712 00:13:09.074572 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01eb5897-18aa-4dbe-945c-a323f721c1d4" containerName="apply-sysctl-overwrites" Jul 12 00:13:09.074577 kubelet[2722]: I0712 00:13:09.074601 2722 memory_manager.go:354] "RemoveStaleState removing state" podUID="01eb5897-18aa-4dbe-945c-a323f721c1d4" containerName="cilium-agent" Jul 12 00:13:09.075289 kubelet[2722]: I0712 00:13:09.074609 2722 memory_manager.go:354] "RemoveStaleState removing state" podUID="446f5b44-6e01-4c1f-961b-905fb950a9dd" containerName="cilium-operator" Jul 12 00:13:09.075289 kubelet[2722]: E0712 00:13:09.074623 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01eb5897-18aa-4dbe-945c-a323f721c1d4" containerName="cilium-agent" Jul 12 00:13:09.075289 kubelet[2722]: I0712 00:13:09.074640 2722 memory_manager.go:354] "RemoveStaleState removing state" podUID="01eb5897-18aa-4dbe-945c-a323f721c1d4" 
containerName="cilium-agent" Jul 12 00:13:09.084072 systemd[1]: Created slice kubepods-burstable-pod4c76f749_dfdd_4616_a233_fb67df6adb09.slice - libcontainer container kubepods-burstable-pod4c76f749_dfdd_4616_a233_fb67df6adb09.slice. Jul 12 00:13:09.094050 kubelet[2722]: E0712 00:13:09.094003 2722 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:13:09.133747 kubelet[2722]: I0712 00:13:09.133678 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c76f749-dfdd-4616-a233-fb67df6adb09-etc-cni-netd\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.133747 kubelet[2722]: I0712 00:13:09.133741 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c76f749-dfdd-4616-a233-fb67df6adb09-host-proc-sys-net\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.133747 kubelet[2722]: I0712 00:13:09.133769 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c76f749-dfdd-4616-a233-fb67df6adb09-cni-path\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134074 kubelet[2722]: I0712 00:13:09.133796 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c76f749-dfdd-4616-a233-fb67df6adb09-lib-modules\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134074 
kubelet[2722]: I0712 00:13:09.133823 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c76f749-dfdd-4616-a233-fb67df6adb09-clustermesh-secrets\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134074 kubelet[2722]: I0712 00:13:09.133849 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c76f749-dfdd-4616-a233-fb67df6adb09-cilium-config-path\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134074 kubelet[2722]: I0712 00:13:09.133903 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggpvh\" (UniqueName: \"kubernetes.io/projected/4c76f749-dfdd-4616-a233-fb67df6adb09-kube-api-access-ggpvh\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134074 kubelet[2722]: I0712 00:13:09.133935 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c76f749-dfdd-4616-a233-fb67df6adb09-bpf-maps\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134074 kubelet[2722]: I0712 00:13:09.133956 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c76f749-dfdd-4616-a233-fb67df6adb09-hostproc\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134362 kubelet[2722]: I0712 00:13:09.133982 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c76f749-dfdd-4616-a233-fb67df6adb09-xtables-lock\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134362 kubelet[2722]: I0712 00:13:09.134074 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4c76f749-dfdd-4616-a233-fb67df6adb09-cilium-ipsec-secrets\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134362 kubelet[2722]: I0712 00:13:09.134137 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c76f749-dfdd-4616-a233-fb67df6adb09-cilium-cgroup\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134362 kubelet[2722]: I0712 00:13:09.134168 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c76f749-dfdd-4616-a233-fb67df6adb09-cilium-run\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134362 kubelet[2722]: I0712 00:13:09.134195 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c76f749-dfdd-4616-a233-fb67df6adb09-host-proc-sys-kernel\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.134362 kubelet[2722]: I0712 00:13:09.134220 2722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/4c76f749-dfdd-4616-a233-fb67df6adb09-hubble-tls\") pod \"cilium-fc7ws\" (UID: \"4c76f749-dfdd-4616-a233-fb67df6adb09\") " pod="kube-system/cilium-fc7ws" Jul 12 00:13:09.687160 kubelet[2722]: E0712 00:13:09.687105 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:13:09.687783 containerd[1585]: time="2025-07-12T00:13:09.687717255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fc7ws,Uid:4c76f749-dfdd-4616-a233-fb67df6adb09,Namespace:kube-system,Attempt:0,}" Jul 12 00:13:09.900438 kubelet[2722]: I0712 00:13:09.900394 2722 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446f5b44-6e01-4c1f-961b-905fb950a9dd" path="/var/lib/kubelet/pods/446f5b44-6e01-4c1f-961b-905fb950a9dd/volumes" Jul 12 00:13:09.989816 containerd[1585]: time="2025-07-12T00:13:09.989692207Z" level=info msg="connecting to shim 9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd" address="unix:///run/containerd/s/fbc4a2b35649355a1f5cef075b765904e3ba123ab55c3d402b8a1999f71940d8" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:13:10.020063 systemd[1]: Started cri-containerd-9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd.scope - libcontainer container 9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd. 
Jul 12 00:13:10.109042 containerd[1585]: time="2025-07-12T00:13:10.108983515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fc7ws,Uid:4c76f749-dfdd-4616-a233-fb67df6adb09,Namespace:kube-system,Attempt:0,} returns sandbox id \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\"" Jul 12 00:13:10.110036 kubelet[2722]: E0712 00:13:10.110003 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:13:10.112431 containerd[1585]: time="2025-07-12T00:13:10.112368007Z" level=info msg="CreateContainer within sandbox \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:13:10.515917 containerd[1585]: time="2025-07-12T00:13:10.514221706Z" level=info msg="Container dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:13:10.991701 containerd[1585]: time="2025-07-12T00:13:10.991637478Z" level=info msg="CreateContainer within sandbox \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34\"" Jul 12 00:13:10.992478 containerd[1585]: time="2025-07-12T00:13:10.992445551Z" level=info msg="StartContainer for \"dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34\"" Jul 12 00:13:10.993510 containerd[1585]: time="2025-07-12T00:13:10.993481684Z" level=info msg="connecting to shim dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34" address="unix:///run/containerd/s/fbc4a2b35649355a1f5cef075b765904e3ba123ab55c3d402b8a1999f71940d8" protocol=ttrpc version=3 Jul 12 00:13:11.020006 systemd[1]: Started cri-containerd-dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34.scope - libcontainer 
container dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34. Jul 12 00:13:11.220129 systemd[1]: cri-containerd-dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34.scope: Deactivated successfully. Jul 12 00:13:11.222569 containerd[1585]: time="2025-07-12T00:13:11.222527949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34\" id:\"dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34\" pid:4813 exited_at:{seconds:1752279191 nanos:222014292}" Jul 12 00:13:11.239535 containerd[1585]: time="2025-07-12T00:13:11.239219062Z" level=info msg="received exit event container_id:\"dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34\" id:\"dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34\" pid:4813 exited_at:{seconds:1752279191 nanos:222014292}" Jul 12 00:13:11.240848 containerd[1585]: time="2025-07-12T00:13:11.240819640Z" level=info msg="StartContainer for \"dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34\" returns successfully" Jul 12 00:13:11.261663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc348647084dc8ede0b975cece225b9040f59db798cfea3c6f5a523c9a1b8d34-rootfs.mount: Deactivated successfully. 
Jul 12 00:13:11.320152 kubelet[2722]: E0712 00:13:11.320089 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:13:11.897507 kubelet[2722]: E0712 00:13:11.897411 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-62hpp" podUID="4fc842ad-1e23-467e-83aa-5973b291e8ad" Jul 12 00:13:12.325509 kubelet[2722]: E0712 00:13:12.325467 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:13:12.327576 containerd[1585]: time="2025-07-12T00:13:12.327521748Z" level=info msg="CreateContainer within sandbox \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:13:12.339959 containerd[1585]: time="2025-07-12T00:13:12.339909534Z" level=info msg="Container 9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:13:12.345914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792689882.mount: Deactivated successfully. 
Jul 12 00:13:12.348403 containerd[1585]: time="2025-07-12T00:13:12.348353846Z" level=info msg="CreateContainer within sandbox \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9\"" Jul 12 00:13:12.348849 containerd[1585]: time="2025-07-12T00:13:12.348823521Z" level=info msg="StartContainer for \"9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9\"" Jul 12 00:13:12.349675 containerd[1585]: time="2025-07-12T00:13:12.349652835Z" level=info msg="connecting to shim 9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9" address="unix:///run/containerd/s/fbc4a2b35649355a1f5cef075b765904e3ba123ab55c3d402b8a1999f71940d8" protocol=ttrpc version=3 Jul 12 00:13:12.376163 systemd[1]: Started cri-containerd-9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9.scope - libcontainer container 9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9. Jul 12 00:13:12.412010 containerd[1585]: time="2025-07-12T00:13:12.411956516Z" level=info msg="StartContainer for \"9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9\" returns successfully" Jul 12 00:13:12.416414 systemd[1]: cri-containerd-9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9.scope: Deactivated successfully. 
Jul 12 00:13:12.417162 containerd[1585]: time="2025-07-12T00:13:12.417084585Z" level=info msg="received exit event container_id:\"9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9\" id:\"9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9\" pid:4858 exited_at:{seconds:1752279192 nanos:416695451}" Jul 12 00:13:12.417162 containerd[1585]: time="2025-07-12T00:13:12.417169014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9\" id:\"9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9\" pid:4858 exited_at:{seconds:1752279192 nanos:416695451}" Jul 12 00:13:12.443645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d8d02b58a96a6c0533706a1d2c66259f2296e4039a72f26110b3c6d9d00a5c9-rootfs.mount: Deactivated successfully. Jul 12 00:13:13.329042 kubelet[2722]: E0712 00:13:13.329002 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:13:13.330789 containerd[1585]: time="2025-07-12T00:13:13.330749497Z" level=info msg="CreateContainer within sandbox \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:13:13.442089 containerd[1585]: time="2025-07-12T00:13:13.442025225Z" level=info msg="Container 7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:13:13.446332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301372403.mount: Deactivated successfully. 
Jul 12 00:13:13.651640 containerd[1585]: time="2025-07-12T00:13:13.651478354Z" level=info msg="CreateContainer within sandbox \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf\"" Jul 12 00:13:13.652924 containerd[1585]: time="2025-07-12T00:13:13.652171501Z" level=info msg="StartContainer for \"7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf\"" Jul 12 00:13:13.653841 containerd[1585]: time="2025-07-12T00:13:13.653810802Z" level=info msg="connecting to shim 7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf" address="unix:///run/containerd/s/fbc4a2b35649355a1f5cef075b765904e3ba123ab55c3d402b8a1999f71940d8" protocol=ttrpc version=3 Jul 12 00:13:13.666676 kubelet[2722]: I0712 00:13:13.666571 2722 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:13:13Z","lastTransitionTime":"2025-07-12T00:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 12 00:13:13.682178 systemd[1]: Started cri-containerd-7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf.scope - libcontainer container 7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf. Jul 12 00:13:13.734994 containerd[1585]: time="2025-07-12T00:13:13.734931516Z" level=info msg="StartContainer for \"7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf\" returns successfully" Jul 12 00:13:13.741359 systemd[1]: cri-containerd-7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf.scope: Deactivated successfully. 
Jul 12 00:13:13.742708 containerd[1585]: time="2025-07-12T00:13:13.742565899Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf\" id:\"7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf\" pid:4903 exited_at:{seconds:1752279193 nanos:742212934}"
Jul 12 00:13:13.742708 containerd[1585]: time="2025-07-12T00:13:13.742687128Z" level=info msg="received exit event container_id:\"7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf\" id:\"7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf\" pid:4903 exited_at:{seconds:1752279193 nanos:742212934}"
Jul 12 00:13:13.772772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7882491509018baa376e7a19007cfaca423ff6d127a084bf6b6351caf371d2cf-rootfs.mount: Deactivated successfully.
Jul 12 00:13:13.897548 kubelet[2722]: E0712 00:13:13.897479 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-62hpp" podUID="4fc842ad-1e23-467e-83aa-5973b291e8ad"
Jul 12 00:13:14.095086 kubelet[2722]: E0712 00:13:14.095007 2722 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 12 00:13:14.334967 kubelet[2722]: E0712 00:13:14.334903 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:14.337290 containerd[1585]: time="2025-07-12T00:13:14.337221319Z" level=info msg="CreateContainer within sandbox \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 12 00:13:14.617224 containerd[1585]: time="2025-07-12T00:13:14.617143690Z" level=info msg="Container 071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:13:14.622467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542434523.mount: Deactivated successfully.
Jul 12 00:13:14.762390 containerd[1585]: time="2025-07-12T00:13:14.762331034Z" level=info msg="CreateContainer within sandbox \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f\""
Jul 12 00:13:14.763745 containerd[1585]: time="2025-07-12T00:13:14.763711737Z" level=info msg="StartContainer for \"071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f\""
Jul 12 00:13:14.765295 containerd[1585]: time="2025-07-12T00:13:14.765246871Z" level=info msg="connecting to shim 071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f" address="unix:///run/containerd/s/fbc4a2b35649355a1f5cef075b765904e3ba123ab55c3d402b8a1999f71940d8" protocol=ttrpc version=3
Jul 12 00:13:14.791223 systemd[1]: Started cri-containerd-071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f.scope - libcontainer container 071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f.
Jul 12 00:13:14.824587 systemd[1]: cri-containerd-071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f.scope: Deactivated successfully.
Jul 12 00:13:14.825570 containerd[1585]: time="2025-07-12T00:13:14.825465091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f\" id:\"071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f\" pid:4941 exited_at:{seconds:1752279194 nanos:824855602}"
Jul 12 00:13:14.827969 containerd[1585]: time="2025-07-12T00:13:14.827921883Z" level=info msg="received exit event container_id:\"071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f\" id:\"071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f\" pid:4941 exited_at:{seconds:1752279194 nanos:824855602}"
Jul 12 00:13:14.837611 containerd[1585]: time="2025-07-12T00:13:14.837557069Z" level=info msg="StartContainer for \"071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f\" returns successfully"
Jul 12 00:13:14.855208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-071893a500ea9c44f4a0bf02a2c614e4a10f5c1ee042e56768d44af23c36d25f-rootfs.mount: Deactivated successfully.
Jul 12 00:13:15.340334 kubelet[2722]: E0712 00:13:15.340293 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:15.390147 containerd[1585]: time="2025-07-12T00:13:15.390092123Z" level=info msg="CreateContainer within sandbox \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 12 00:13:15.403906 containerd[1585]: time="2025-07-12T00:13:15.403827287Z" level=info msg="Container 7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:13:15.413347 containerd[1585]: time="2025-07-12T00:13:15.413294375Z" level=info msg="CreateContainer within sandbox \"9848dd829fd74b2875c0854421db1de4b7fbd18abdacae97954fc918879479bd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41\""
Jul 12 00:13:15.413851 containerd[1585]: time="2025-07-12T00:13:15.413809356Z" level=info msg="StartContainer for \"7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41\""
Jul 12 00:13:15.414689 containerd[1585]: time="2025-07-12T00:13:15.414649609Z" level=info msg="connecting to shim 7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41" address="unix:///run/containerd/s/fbc4a2b35649355a1f5cef075b765904e3ba123ab55c3d402b8a1999f71940d8" protocol=ttrpc version=3
Jul 12 00:13:15.442067 systemd[1]: Started cri-containerd-7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41.scope - libcontainer container 7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41.
Jul 12 00:13:15.486319 containerd[1585]: time="2025-07-12T00:13:15.486260334Z" level=info msg="StartContainer for \"7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41\" returns successfully"
Jul 12 00:13:15.583997 containerd[1585]: time="2025-07-12T00:13:15.583936868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41\" id:\"00fbc2d8de6db16e0e9727243ff036f2e444dc935d65134f53d4eb9ee251996d\" pid:5010 exited_at:{seconds:1752279195 nanos:583545410}"
Jul 12 00:13:15.898803 kubelet[2722]: E0712 00:13:15.898274 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-62hpp" podUID="4fc842ad-1e23-467e-83aa-5973b291e8ad"
Jul 12 00:13:16.171945 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 12 00:13:16.347260 kubelet[2722]: E0712 00:13:16.347195 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:16.423084 kubelet[2722]: I0712 00:13:16.422990 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fc7ws" podStartSLOduration=8.42295853 podStartE2EDuration="8.42295853s" podCreationTimestamp="2025-07-12 00:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:13:16.422425134 +0000 UTC m=+156.623064840" watchObservedRunningTime="2025-07-12 00:13:16.42295853 +0000 UTC m=+156.623598246"
Jul 12 00:13:16.897834 kubelet[2722]: E0712 00:13:16.897724 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-bjqv6" podUID="8ceb7649-78f4-4cde-b59b-0722cb99a876"
Jul 12 00:13:17.350030 kubelet[2722]: E0712 00:13:17.349952 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:17.897259 kubelet[2722]: E0712 00:13:17.897171 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-62hpp" podUID="4fc842ad-1e23-467e-83aa-5973b291e8ad"
Jul 12 00:13:18.353558 kubelet[2722]: E0712 00:13:18.353505 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:18.376061 containerd[1585]: time="2025-07-12T00:13:18.375988786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41\" id:\"41bb03e1b63ed8988a9a63300b4b6a9d9ed33e190b02120f64ad59fd4f164bd4\" pid:5180 exit_status:1 exited_at:{seconds:1752279198 nanos:375350031}"
Jul 12 00:13:18.896912 kubelet[2722]: E0712 00:13:18.896800 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-bjqv6" podUID="8ceb7649-78f4-4cde-b59b-0722cb99a876"
Jul 12 00:13:19.692911 kubelet[2722]: E0712 00:13:19.692069 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:19.698978 systemd-networkd[1507]: lxc_health: Link UP
Jul 12 00:13:19.715503 systemd-networkd[1507]: lxc_health: Gained carrier
Jul 12 00:13:19.897907 kubelet[2722]: E0712 00:13:19.897564 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:19.898587 kubelet[2722]: E0712 00:13:19.898546 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:20.357662 kubelet[2722]: E0712 00:13:20.357570 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:20.519646 containerd[1585]: time="2025-07-12T00:13:20.518769530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41\" id:\"25239f43c4a21f512a36196429f90cd6711c5f187dcc985e085f4de329eccce5\" pid:5540 exited_at:{seconds:1752279200 nanos:518418128}"
Jul 12 00:13:20.897153 kubelet[2722]: E0712 00:13:20.897114 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:20.996015 systemd-networkd[1507]: lxc_health: Gained IPv6LL
Jul 12 00:13:21.359986 kubelet[2722]: E0712 00:13:21.359937 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:21.897328 kubelet[2722]: E0712 00:13:21.897267 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:13:22.684171 containerd[1585]: time="2025-07-12T00:13:22.684126523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41\" id:\"01cf55edc7d7c4e43f6d1ba86c9bb71609eb5e32bd963e030182e81c681de41b\" pid:5577 exited_at:{seconds:1752279202 nanos:683799055}"
Jul 12 00:13:24.925084 containerd[1585]: time="2025-07-12T00:13:24.922199973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41\" id:\"096cd5800f4aca7f0046c8b7c326d8db54f33de8fd495c63ff7d0d2f91510619\" pid:5611 exited_at:{seconds:1752279204 nanos:920610226}"
Jul 12 00:13:27.093679 containerd[1585]: time="2025-07-12T00:13:27.093625934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d822d160a2a09da7d343ac1fba1383fea3a19e933efa000b713b2ac0bd9ad41\" id:\"40477f9414b316a2d3f2ef6d138445530274899f9de3a0fd07067813a9a60407\" pid:5634 exited_at:{seconds:1752279207 nanos:93233293}"
Jul 12 00:13:27.102229 sshd[4744]: Connection closed by 10.0.0.1 port 43572
Jul 12 00:13:27.102713 sshd-session[4742]: pam_unix(sshd:session): session closed for user core
Jul 12 00:13:27.106711 systemd[1]: sshd@33-10.0.0.57:22-10.0.0.1:43572.service: Deactivated successfully.
Jul 12 00:13:27.108708 systemd[1]: session-34.scope: Deactivated successfully.
Jul 12 00:13:27.109463 systemd-logind[1567]: Session 34 logged out. Waiting for processes to exit.
Jul 12 00:13:27.110953 systemd-logind[1567]: Removed session 34.
Jul 12 00:13:27.898454 kubelet[2722]: E0712 00:13:27.898332 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"