Jul 15 05:07:06.819436 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025 Jul 15 05:07:06.819474 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:07:06.819486 kernel: BIOS-provided physical RAM map: Jul 15 05:07:06.819495 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 15 05:07:06.819504 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 15 05:07:06.819513 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 15 05:07:06.819523 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jul 15 05:07:06.819536 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jul 15 05:07:06.819548 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 15 05:07:06.819557 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 15 05:07:06.819566 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 15 05:07:06.819575 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 15 05:07:06.819584 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 15 05:07:06.819593 kernel: NX (Execute Disable) protection: active Jul 15 05:07:06.819607 kernel: APIC: Static calls initialized Jul 15 05:07:06.819617 kernel: SMBIOS 2.8 present. 
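(Editor's note, not part of the captured log: the command-line entry above is the usual place to start when reading a Flatcar boot log. A minimal sketch of how one might split such a kernel command line into flags and key=value options on a running system, assuming standard /proc/cmdline semantics and simplified quoting; the option names shown in the log, e.g. mount.usr and verity.usrhash, are just data to this parser.)

    # Minimal sketch: split a kernel command line (as logged above, or read from
    # /proc/cmdline on a live system) into bare flags and key=value options.
    # Quoting rules are simplified; repeated keys keep the last value.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            if "=" in token:
                key, _, value = token.partition("=")
                params[key] = value
            else:
                params[token] = True  # bare flag such as "consoleblank" would land here
        return params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            for key, value in parse_cmdline(f.read()).items():
                print(f"{key} = {value}")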
Jul 15 05:07:06.819641 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 15 05:07:06.819662 kernel: DMI: Memory slots populated: 1/1 Jul 15 05:07:06.819672 kernel: Hypervisor detected: KVM Jul 15 05:07:06.819682 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 15 05:07:06.819691 kernel: kvm-clock: using sched offset of 4417309377 cycles Jul 15 05:07:06.819702 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 05:07:06.819717 kernel: tsc: Detected 2794.750 MHz processor Jul 15 05:07:06.819733 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 05:07:06.819743 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 05:07:06.819754 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jul 15 05:07:06.819780 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 15 05:07:06.819790 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 05:07:06.819800 kernel: Using GB pages for direct mapping Jul 15 05:07:06.819810 kernel: ACPI: Early table checksum verification disabled Jul 15 05:07:06.819820 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jul 15 05:07:06.819830 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:06.819844 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:06.819854 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:06.819864 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 15 05:07:06.819874 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:06.819884 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:06.819894 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:06.819904 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:06.819914 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jul 15 05:07:06.819930 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jul 15 05:07:06.819941 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 15 05:07:06.819951 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jul 15 05:07:06.819961 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jul 15 05:07:06.819971 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jul 15 05:07:06.819982 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jul 15 05:07:06.819994 kernel: No NUMA configuration found Jul 15 05:07:06.820005 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jul 15 05:07:06.820015 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jul 15 05:07:06.820026 kernel: Zone ranges: Jul 15 05:07:06.820036 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 05:07:06.820046 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jul 15 05:07:06.820057 kernel: Normal empty Jul 15 05:07:06.820067 kernel: Device empty Jul 15 05:07:06.820077 kernel: Movable zone start for each node Jul 15 05:07:06.820087 kernel: Early memory node ranges Jul 15 05:07:06.820111 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 15 05:07:06.820122 kernel: node 0: [mem 
0x0000000000100000-0x000000009cfdbfff] Jul 15 05:07:06.820132 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jul 15 05:07:06.820143 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 05:07:06.820153 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 15 05:07:06.820164 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jul 15 05:07:06.820174 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 15 05:07:06.820188 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 15 05:07:06.820198 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 15 05:07:06.820212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 15 05:07:06.820222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 15 05:07:06.820235 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 15 05:07:06.820246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 15 05:07:06.820256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 15 05:07:06.820267 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 05:07:06.820277 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 15 05:07:06.820287 kernel: TSC deadline timer available Jul 15 05:07:06.820297 kernel: CPU topo: Max. logical packages: 1 Jul 15 05:07:06.820311 kernel: CPU topo: Max. logical dies: 1 Jul 15 05:07:06.820321 kernel: CPU topo: Max. dies per package: 1 Jul 15 05:07:06.820331 kernel: CPU topo: Max. threads per core: 1 Jul 15 05:07:06.820341 kernel: CPU topo: Num. cores per package: 4 Jul 15 05:07:06.820352 kernel: CPU topo: Num. threads per package: 4 Jul 15 05:07:06.820362 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 15 05:07:06.820374 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 15 05:07:06.820386 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 15 05:07:06.820398 kernel: kvm-guest: setup PV sched yield Jul 15 05:07:06.820412 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 15 05:07:06.820422 kernel: Booting paravirtualized kernel on KVM Jul 15 05:07:06.820433 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 05:07:06.820444 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 15 05:07:06.820454 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 15 05:07:06.820464 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 15 05:07:06.820474 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 15 05:07:06.820485 kernel: kvm-guest: PV spinlocks enabled Jul 15 05:07:06.820495 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 15 05:07:06.820510 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:07:06.820521 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 15 05:07:06.820531 kernel: random: crng init done Jul 15 05:07:06.820541 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 05:07:06.820552 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 05:07:06.820562 kernel: Fallback order for Node 0: 0 Jul 15 05:07:06.820573 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jul 15 05:07:06.820583 kernel: Policy zone: DMA32 Jul 15 05:07:06.820596 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 05:07:06.820607 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 15 05:07:06.820617 kernel: ftrace: allocating 40097 entries in 157 pages Jul 15 05:07:06.820628 kernel: ftrace: allocated 157 pages with 5 groups Jul 15 05:07:06.820638 kernel: Dynamic Preempt: voluntary Jul 15 05:07:06.820648 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 05:07:06.820659 kernel: rcu: RCU event tracing is enabled. Jul 15 05:07:06.820670 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 15 05:07:06.820680 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 05:07:06.820697 kernel: Rude variant of Tasks RCU enabled. Jul 15 05:07:06.820707 kernel: Tracing variant of Tasks RCU enabled. Jul 15 05:07:06.820718 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 05:07:06.820728 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 15 05:07:06.820738 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 05:07:06.820749 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 05:07:06.820781 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 05:07:06.820792 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 15 05:07:06.820803 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 05:07:06.820825 kernel: Console: colour VGA+ 80x25 Jul 15 05:07:06.820836 kernel: printk: legacy console [ttyS0] enabled Jul 15 05:07:06.820847 kernel: ACPI: Core revision 20240827 Jul 15 05:07:06.820860 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 15 05:07:06.820871 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 05:07:06.820882 kernel: x2apic enabled Jul 15 05:07:06.820892 kernel: APIC: Switched APIC routing to: physical x2apic Jul 15 05:07:06.820906 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 15 05:07:06.820918 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 15 05:07:06.820931 kernel: kvm-guest: setup PV IPIs Jul 15 05:07:06.820942 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 15 05:07:06.820953 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 15 05:07:06.820964 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jul 15 05:07:06.820975 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 15 05:07:06.820985 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 15 05:07:06.820996 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 15 05:07:06.821007 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 05:07:06.821020 kernel: Spectre V2 : Mitigation: Retpolines Jul 15 05:07:06.821031 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 15 05:07:06.821042 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 15 05:07:06.821053 kernel: RETBleed: Mitigation: untrained return thunk Jul 15 05:07:06.821064 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 15 05:07:06.821075 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 15 05:07:06.821086 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 15 05:07:06.821106 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 15 05:07:06.821119 kernel: x86/bugs: return thunk changed Jul 15 05:07:06.821130 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 15 05:07:06.821141 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 05:07:06.821152 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 05:07:06.821163 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 05:07:06.821174 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 05:07:06.821184 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 15 05:07:06.821195 kernel: Freeing SMP alternatives memory: 32K Jul 15 05:07:06.821206 kernel: pid_max: default: 32768 minimum: 301 Jul 15 05:07:06.821219 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 05:07:06.821230 kernel: landlock: Up and running. Jul 15 05:07:06.821240 kernel: SELinux: Initializing. Jul 15 05:07:06.821251 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 05:07:06.821265 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 05:07:06.821276 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 15 05:07:06.821287 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 15 05:07:06.821298 kernel: ... version: 0 Jul 15 05:07:06.821308 kernel: ... bit width: 48 Jul 15 05:07:06.821322 kernel: ... generic registers: 6 Jul 15 05:07:06.821333 kernel: ... value mask: 0000ffffffffffff Jul 15 05:07:06.821343 kernel: ... max period: 00007fffffffffff Jul 15 05:07:06.821354 kernel: ... fixed-purpose events: 0 Jul 15 05:07:06.821365 kernel: ... event mask: 000000000000003f Jul 15 05:07:06.821375 kernel: signal: max sigframe size: 1776 Jul 15 05:07:06.821386 kernel: rcu: Hierarchical SRCU implementation. Jul 15 05:07:06.821397 kernel: rcu: Max phase no-delay instances is 400. Jul 15 05:07:06.821408 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 05:07:06.821422 kernel: smp: Bringing up secondary CPUs ... Jul 15 05:07:06.821432 kernel: smpboot: x86: Booting SMP configuration: Jul 15 05:07:06.821443 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 15 05:07:06.821454 kernel: smp: Brought up 1 node, 4 CPUs Jul 15 05:07:06.821465 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 15 05:07:06.821476 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 136904K reserved, 0K cma-reserved) Jul 15 05:07:06.821487 kernel: devtmpfs: initialized Jul 15 05:07:06.821498 kernel: x86/mm: Memory block size: 128MB Jul 15 05:07:06.821509 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 05:07:06.821523 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 15 05:07:06.821534 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 05:07:06.821544 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 05:07:06.821555 kernel: audit: initializing netlink subsys (disabled) Jul 15 05:07:06.821566 kernel: audit: type=2000 audit(1752556022.966:1): state=initialized audit_enabled=0 res=1 Jul 15 05:07:06.821577 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 05:07:06.821588 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 05:07:06.821598 kernel: cpuidle: using governor menu Jul 15 05:07:06.821609 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 05:07:06.821622 kernel: dca service started, version 1.12.1 Jul 15 05:07:06.821633 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jul 15 05:07:06.821644 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 15 05:07:06.821655 kernel: PCI: Using configuration type 1 for base access Jul 15 05:07:06.821666 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 15 05:07:06.821677 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 05:07:06.821687 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 05:07:06.821698 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 05:07:06.821709 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 05:07:06.821722 kernel: ACPI: Added _OSI(Module Device) Jul 15 05:07:06.821732 kernel: ACPI: Added _OSI(Processor Device) Jul 15 05:07:06.821743 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 05:07:06.821754 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 05:07:06.821781 kernel: ACPI: Interpreter enabled Jul 15 05:07:06.821791 kernel: ACPI: PM: (supports S0 S3 S5) Jul 15 05:07:06.821802 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 05:07:06.821813 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 05:07:06.821824 kernel: PCI: Using E820 reservations for host bridge windows Jul 15 05:07:06.821838 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 15 05:07:06.821849 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 05:07:06.822129 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 15 05:07:06.822299 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 15 05:07:06.822457 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 15 05:07:06.822473 kernel: PCI host bridge to bus 0000:00 Jul 15 05:07:06.822651 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 15 05:07:06.822845 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 15 05:07:06.822991 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 15 05:07:06.823152 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 15 05:07:06.823300 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 15 05:07:06.823440 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jul 15 05:07:06.823581 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 05:07:06.823798 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 15 05:07:06.823987 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 15 05:07:06.824155 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jul 15 05:07:06.824314 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jul 15 05:07:06.824469 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jul 15 05:07:06.824624 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 15 05:07:06.824827 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 15 05:07:06.824997 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jul 15 05:07:06.825165 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jul 15 05:07:06.825325 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jul 15 05:07:06.825504 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 15 05:07:06.825663 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jul 15 05:07:06.825842 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jul 15 05:07:06.826000 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jul 15 05:07:06.826196 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 15 05:07:06.826365 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jul 15 05:07:06.826521 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jul 15 05:07:06.826676 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 15 05:07:06.826858 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jul 15 05:07:06.827033 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 15 05:07:06.827202 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 15 05:07:06.827384 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 15 05:07:06.827540 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jul 15 05:07:06.827695 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jul 15 05:07:06.827910 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 15 05:07:06.828107 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jul 15 05:07:06.828126 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 15 05:07:06.828137 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 15 05:07:06.828154 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 15 05:07:06.828165 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 15 05:07:06.828176 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 15 05:07:06.828187 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 15 05:07:06.828197 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 15 05:07:06.828208 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 15 05:07:06.828219 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 15 05:07:06.828230 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 15 05:07:06.828241 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 15 05:07:06.828255 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 15 05:07:06.828265 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 15 05:07:06.828276 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 15 05:07:06.828287 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 15 05:07:06.828298 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 15 05:07:06.828309 kernel: iommu: Default domain type: Translated Jul 15 05:07:06.828320 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 05:07:06.828331 kernel: PCI: Using ACPI for IRQ routing Jul 15 05:07:06.828342 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 15 05:07:06.828355 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 15 05:07:06.828366 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jul 15 05:07:06.828528 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 15 05:07:06.828684 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 15 05:07:06.828860 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 15 05:07:06.828876 kernel: vgaarb: loaded Jul 15 05:07:06.828888 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 15 05:07:06.828899 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 15 05:07:06.828915 kernel: clocksource: Switched to clocksource kvm-clock Jul 15 05:07:06.828926 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 
05:07:06.828937 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 05:07:06.828948 kernel: pnp: PnP ACPI init Jul 15 05:07:06.829140 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 15 05:07:06.829158 kernel: pnp: PnP ACPI: found 6 devices Jul 15 05:07:06.829169 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 05:07:06.829180 kernel: NET: Registered PF_INET protocol family Jul 15 05:07:06.829196 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 05:07:06.829207 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 15 05:07:06.829218 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 05:07:06.829229 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 05:07:06.829240 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 15 05:07:06.829251 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 15 05:07:06.829261 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 05:07:06.829273 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 05:07:06.829284 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 05:07:06.829298 kernel: NET: Registered PF_XDP protocol family Jul 15 05:07:06.829442 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 15 05:07:06.829582 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 15 05:07:06.829723 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 15 05:07:06.829935 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 15 05:07:06.830084 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 15 05:07:06.830240 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jul 15 05:07:06.830256 kernel: PCI: CLS 0 bytes, default 64 Jul 15 05:07:06.830273 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 15 05:07:06.830284 kernel: Initialise system trusted keyrings Jul 15 05:07:06.830295 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 15 05:07:06.830306 kernel: Key type asymmetric registered Jul 15 05:07:06.830317 kernel: Asymmetric key parser 'x509' registered Jul 15 05:07:06.830328 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 15 05:07:06.830339 kernel: io scheduler mq-deadline registered Jul 15 05:07:06.830350 kernel: io scheduler kyber registered Jul 15 05:07:06.830361 kernel: io scheduler bfq registered Jul 15 05:07:06.830374 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 05:07:06.830386 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 15 05:07:06.830397 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 15 05:07:06.830408 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 15 05:07:06.830419 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 05:07:06.830430 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 05:07:06.830441 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 15 05:07:06.830452 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 15 05:07:06.830462 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 15 05:07:06.830632 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 15 
05:07:06.830653 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 15 05:07:06.830815 kernel: rtc_cmos 00:04: registered as rtc0 Jul 15 05:07:06.830962 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T05:07:06 UTC (1752556026) Jul 15 05:07:06.831117 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 15 05:07:06.831133 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 15 05:07:06.831144 kernel: NET: Registered PF_INET6 protocol family Jul 15 05:07:06.831155 kernel: Segment Routing with IPv6 Jul 15 05:07:06.831170 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 05:07:06.831182 kernel: NET: Registered PF_PACKET protocol family Jul 15 05:07:06.831193 kernel: Key type dns_resolver registered Jul 15 05:07:06.831204 kernel: IPI shorthand broadcast: enabled Jul 15 05:07:06.831215 kernel: sched_clock: Marking stable (3360002481, 427544085)->(4148687778, -361141212) Jul 15 05:07:06.831226 kernel: registered taskstats version 1 Jul 15 05:07:06.831237 kernel: Loading compiled-in X.509 certificates Jul 15 05:07:06.831248 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7' Jul 15 05:07:06.831259 kernel: Demotion targets for Node 0: null Jul 15 05:07:06.831273 kernel: Key type .fscrypt registered Jul 15 05:07:06.831284 kernel: Key type fscrypt-provisioning registered Jul 15 05:07:06.831295 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 05:07:06.831306 kernel: ima: Allocated hash algorithm: sha1 Jul 15 05:07:06.831317 kernel: ima: No architecture policies found Jul 15 05:07:06.831327 kernel: clk: Disabling unused clocks Jul 15 05:07:06.831338 kernel: Warning: unable to open an initial console. Jul 15 05:07:06.831349 kernel: Freeing unused kernel image (initmem) memory: 54608K Jul 15 05:07:06.831360 kernel: Write protecting the kernel read-only data: 24576k Jul 15 05:07:06.831374 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 15 05:07:06.831385 kernel: Run /init as init process Jul 15 05:07:06.831396 kernel: with arguments: Jul 15 05:07:06.831406 kernel: /init Jul 15 05:07:06.831417 kernel: with environment: Jul 15 05:07:06.831427 kernel: HOME=/ Jul 15 05:07:06.831438 kernel: TERM=linux Jul 15 05:07:06.831449 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 05:07:06.831465 systemd[1]: Successfully made /usr/ read-only. Jul 15 05:07:06.831483 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:07:06.831511 systemd[1]: Detected virtualization kvm. Jul 15 05:07:06.831523 systemd[1]: Detected architecture x86-64. Jul 15 05:07:06.831535 systemd[1]: Running in initrd. Jul 15 05:07:06.831546 systemd[1]: No hostname configured, using default hostname. Jul 15 05:07:06.831562 systemd[1]: Hostname set to . Jul 15 05:07:06.831574 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:07:06.831586 systemd[1]: Queued start job for default target initrd.target. Jul 15 05:07:06.831598 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
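(Editor's note, not part of the captured log: the rtc_cmos entry above prints both the UTC timestamp and the corresponding epoch value, 1752556026, and the earlier audit entry prints 1752556022.966. A small sketch to cross-check such pairs; the two epoch values are taken directly from this log.)

    from datetime import datetime, timezone

    # Convert the epoch seconds printed by rtc_cmos and by audit back to UTC,
    # to confirm they match the human-readable timestamps in the same entries.
    for epoch in (1752556026, 1752556022):
        print(epoch, "->", datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())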
Jul 15 05:07:06.831610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:07:06.831623 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 05:07:06.831635 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:07:06.831648 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 05:07:06.831664 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 05:07:06.831677 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 05:07:06.831690 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 05:07:06.831702 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:07:06.831714 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:07:06.831726 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:07:06.831738 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:07:06.831753 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:07:06.831783 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:07:06.831795 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:07:06.831807 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:07:06.831819 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 05:07:06.831832 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 05:07:06.831843 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:07:06.831855 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:07:06.831868 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:07:06.831883 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:07:06.831895 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 05:07:06.831907 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:07:06.831919 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 05:07:06.831932 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 05:07:06.831949 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 05:07:06.831961 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:07:06.831973 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:07:06.831985 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:07:06.831997 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 05:07:06.832010 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:07:06.832024 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 05:07:06.832037 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 05:07:06.832081 systemd-journald[220]: Collecting audit messages is disabled. 
Jul 15 05:07:06.832124 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 05:07:06.832137 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:07:06.832150 systemd-journald[220]: Journal started Jul 15 05:07:06.832176 systemd-journald[220]: Runtime Journal (/run/log/journal/153d67eb5a2d4116a09231d61fd8a653) is 6M, max 48.6M, 42.5M free. Jul 15 05:07:06.817562 systemd-modules-load[221]: Inserted module 'overlay' Jul 15 05:07:06.866062 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:07:06.866105 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 05:07:06.866122 kernel: Bridge firewalling registered Jul 15 05:07:06.848580 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 15 05:07:06.865678 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:07:06.868373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:07:06.871572 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 05:07:06.876367 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:07:06.880834 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:07:06.895973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:07:06.898976 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:07:06.899324 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:07:06.903465 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 05:07:06.908279 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 05:07:06.916963 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:07:06.920404 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:07:06.934349 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:07:06.981373 systemd-resolved[264]: Positive Trust Anchors: Jul 15 05:07:06.981394 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:07:06.981433 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:07:06.984478 systemd-resolved[264]: Defaulting to hostname 'linux'. 
Jul 15 05:07:06.985806 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:07:06.991716 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:07:07.038806 kernel: SCSI subsystem initialized Jul 15 05:07:07.049793 kernel: Loading iSCSI transport class v2.0-870. Jul 15 05:07:07.059796 kernel: iscsi: registered transport (tcp) Jul 15 05:07:07.088103 kernel: iscsi: registered transport (qla4xxx) Jul 15 05:07:07.088156 kernel: QLogic iSCSI HBA Driver Jul 15 05:07:07.112437 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:07:07.141649 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:07:07.144421 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:07:07.207728 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 05:07:07.209291 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 05:07:07.281814 kernel: raid6: avx2x4 gen() 29501 MB/s Jul 15 05:07:07.298812 kernel: raid6: avx2x2 gen() 30473 MB/s Jul 15 05:07:07.315930 kernel: raid6: avx2x1 gen() 25309 MB/s Jul 15 05:07:07.316038 kernel: raid6: using algorithm avx2x2 gen() 30473 MB/s Jul 15 05:07:07.333870 kernel: raid6: .... xor() 19743 MB/s, rmw enabled Jul 15 05:07:07.333978 kernel: raid6: using avx2x2 recovery algorithm Jul 15 05:07:07.354821 kernel: xor: automatically using best checksumming function avx Jul 15 05:07:07.532814 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 05:07:07.542986 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:07:07.544914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:07:07.579860 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 15 05:07:07.593797 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:07:07.617948 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 05:07:07.660147 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Jul 15 05:07:07.694109 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:07:07.696990 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:07:07.783561 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:07:07.787426 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 05:07:07.823127 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 15 05:07:07.824366 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 05:07:07.837850 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 05:07:07.837880 kernel: GPT:9289727 != 19775487 Jul 15 05:07:07.837904 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 05:07:07.837929 kernel: GPT:9289727 != 19775487 Jul 15 05:07:07.837943 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 05:07:07.837968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:07:07.846798 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 15 05:07:07.848792 kernel: libata version 3.00 loaded. 
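(Editor's note, not part of the captured log: the GPT warnings above, "GPT:9289727 != 19775487", are the usual sign that the disk image was built for a smaller disk and the virtual disk was later enlarged, so the backup GPT header is no longer in the last LBA. The sector counts below are taken from the virtio_blk and GPT entries in this log; this is only an illustrative sketch of the mismatch, not a repair tool.)

    # Sketch of the arithmetic behind the GPT warning: the backup GPT header
    # should live in the device's last LBA, but the primary header records it
    # at the last LBA of the smaller disk the image was originally built for.
    SECTOR = 512
    total_sectors = 19775488               # "virtio1: [vda] 19775488 512-byte logical blocks"
    expected_alt_lba = total_sectors - 1   # 19775487, where the kernel expects the backup header
    recorded_alt_lba = 9289727             # what the primary GPT header actually says

    print("expected backup header LBA:", expected_alt_lba)
    print("recorded backup header LBA:", recorded_alt_lba)
    print("image built for roughly",
          round((recorded_alt_lba + 1) * SECTOR / 2**30, 2), "GiB;",
          "virtual disk is roughly",
          round(total_sectors * SECTOR / 2**30, 2), "GiB")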
Jul 15 05:07:07.849788 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 05:07:07.860141 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:07:07.860995 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:07:07.864014 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:07:07.866800 kernel: ahci 0000:00:1f.2: version 3.0 Jul 15 05:07:07.867013 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 15 05:07:07.868847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:07:07.874149 kernel: AES CTR mode by8 optimization enabled Jul 15 05:07:07.874181 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 15 05:07:07.874373 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 15 05:07:07.870397 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:07:07.879425 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 15 05:07:07.879647 kernel: scsi host0: ahci Jul 15 05:07:07.881130 kernel: scsi host1: ahci Jul 15 05:07:07.882791 kernel: scsi host2: ahci Jul 15 05:07:07.891813 kernel: scsi host3: ahci Jul 15 05:07:07.896791 kernel: scsi host4: ahci Jul 15 05:07:07.901794 kernel: scsi host5: ahci Jul 15 05:07:07.905830 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 Jul 15 05:07:07.905857 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 Jul 15 05:07:07.905868 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 Jul 15 05:07:07.907624 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 Jul 15 05:07:07.907647 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 Jul 15 05:07:07.909236 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 Jul 15 05:07:07.919136 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 15 05:07:07.952848 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 15 05:07:07.953176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:07:07.974190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 05:07:07.982374 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 15 05:07:07.983695 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 15 05:07:07.984801 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jul 15 05:07:08.217168 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 15 05:07:08.217260 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 15 05:07:08.217275 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 15 05:07:08.218796 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 15 05:07:08.218825 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 15 05:07:08.220116 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 15 05:07:08.220139 kernel: ata3.00: applying bridge limits Jul 15 05:07:08.221164 kernel: ata3.00: configured for UDMA/100 Jul 15 05:07:08.221789 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 15 05:07:08.225807 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 15 05:07:08.278820 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 15 05:07:08.279112 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 05:07:08.292788 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 15 05:07:08.491603 disk-uuid[624]: Primary Header is updated. Jul 15 05:07:08.491603 disk-uuid[624]: Secondary Entries is updated. Jul 15 05:07:08.491603 disk-uuid[624]: Secondary Header is updated. Jul 15 05:07:08.495792 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:07:08.500793 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:07:08.719639 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 05:07:08.721712 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:07:08.723145 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:07:08.724477 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:07:08.728097 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 05:07:08.757948 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:07:09.500822 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:07:09.504058 disk-uuid[635]: The operation has completed successfully. Jul 15 05:07:09.538124 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 05:07:09.538358 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 05:07:09.580665 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 05:07:09.611538 sh[661]: Success Jul 15 05:07:09.634749 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 05:07:09.634851 kernel: device-mapper: uevent: version 1.0.3 Jul 15 05:07:09.634869 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 05:07:09.644791 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 15 05:07:09.685196 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 05:07:09.688753 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 05:07:09.704661 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 15 05:07:09.712743 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 05:07:09.712817 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (673) Jul 15 05:07:09.715076 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b Jul 15 05:07:09.715107 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:07:09.715123 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 05:07:09.721638 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 05:07:09.723216 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:07:09.724679 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 05:07:09.725725 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 05:07:09.729234 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 05:07:09.754742 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (704) Jul 15 05:07:09.754872 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:07:09.754902 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:07:09.756228 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:07:09.763798 kernel: BTRFS info (device vda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:07:09.765261 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 05:07:09.767579 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 05:07:10.033626 ignition[744]: Ignition 2.21.0 Jul 15 05:07:10.033644 ignition[744]: Stage: fetch-offline Jul 15 05:07:10.033681 ignition[744]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:10.033690 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:10.033798 ignition[744]: parsed url from cmdline: "" Jul 15 05:07:10.033802 ignition[744]: no config URL provided Jul 15 05:07:10.033808 ignition[744]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 05:07:10.033817 ignition[744]: no config at "/usr/lib/ignition/user.ign" Jul 15 05:07:10.033848 ignition[744]: op(1): [started] loading QEMU firmware config module Jul 15 05:07:10.033853 ignition[744]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 05:07:10.043242 ignition[744]: op(1): [finished] loading QEMU firmware config module Jul 15 05:07:10.082964 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:07:10.088266 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:07:10.094464 ignition[744]: parsing config with SHA512: 856ae6aa060e0cea7c234f9e408e71e8ef43050de89bbc694ddc707ce2616f8afe8cc1bccad755c306a232d0600d57fd12bd569c0443763b9a687d3244b4bb46 Jul 15 05:07:10.098925 unknown[744]: fetched base config from "system" Jul 15 05:07:10.098942 unknown[744]: fetched user config from "qemu" Jul 15 05:07:10.099352 ignition[744]: fetch-offline: fetch-offline passed Jul 15 05:07:10.099417 ignition[744]: Ignition finished successfully Jul 15 05:07:10.103328 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
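(Editor's note, not part of the captured log: in the fetch-offline stage above, Ignition logs the SHA512 of the config it parsed after fetching the user config from the "qemu" provider. A sketch of computing the same kind of digest over a local copy of a config; the file name here is hypothetical, since on QEMU the user config is delivered via fw_cfg rather than a file on disk.)

    import hashlib

    # Compute the SHA512 digest of an Ignition config blob, the same kind of
    # value logged as 'parsing config with SHA512: ...' above.
    def sha512_of(path: str) -> str:
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha512_of("config.ign"))  # hypothetical local copy of the user config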
Jul 15 05:07:10.134600 systemd-networkd[851]: lo: Link UP Jul 15 05:07:10.134611 systemd-networkd[851]: lo: Gained carrier Jul 15 05:07:10.136596 systemd-networkd[851]: Enumeration completed Jul 15 05:07:10.137143 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:07:10.137149 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:07:10.139129 systemd-networkd[851]: eth0: Link UP Jul 15 05:07:10.139135 systemd-networkd[851]: eth0: Gained carrier Jul 15 05:07:10.139146 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:07:10.140255 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:07:10.141673 systemd[1]: Reached target network.target - Network. Jul 15 05:07:10.143702 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 05:07:10.147390 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 05:07:10.165913 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 05:07:10.201793 ignition[855]: Ignition 2.21.0 Jul 15 05:07:10.201809 ignition[855]: Stage: kargs Jul 15 05:07:10.202024 ignition[855]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:10.202038 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:10.233278 ignition[855]: kargs: kargs passed Jul 15 05:07:10.233369 ignition[855]: Ignition finished successfully Jul 15 05:07:10.238272 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 05:07:10.240932 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 05:07:10.288964 ignition[864]: Ignition 2.21.0 Jul 15 05:07:10.288977 ignition[864]: Stage: disks Jul 15 05:07:10.289183 ignition[864]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:10.289195 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:10.295432 ignition[864]: disks: disks passed Jul 15 05:07:10.295500 ignition[864]: Ignition finished successfully Jul 15 05:07:10.299795 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 05:07:10.300166 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 05:07:10.304318 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 05:07:10.305729 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:07:10.306167 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:07:10.306523 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:07:10.312872 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 05:07:10.401065 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 05:07:10.412264 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 05:07:10.414490 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 05:07:10.590791 kernel: EXT4-fs (vda9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none. Jul 15 05:07:10.591151 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jul 15 05:07:10.592602 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 05:07:10.595592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:07:10.596516 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 05:07:10.598283 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 05:07:10.598326 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 05:07:10.598348 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:07:10.614274 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 05:07:10.617136 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 05:07:10.621791 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (882) Jul 15 05:07:10.621820 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:07:10.622791 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:07:10.622812 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:07:10.627339 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:07:10.700958 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 05:07:10.706860 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory Jul 15 05:07:10.712476 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 05:07:10.718042 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 05:07:10.933643 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 05:07:10.936269 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 05:07:10.939240 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 05:07:10.960819 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 05:07:10.962024 kernel: BTRFS info (device vda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:07:10.974974 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 05:07:10.994937 ignition[997]: INFO : Ignition 2.21.0 Jul 15 05:07:10.994937 ignition[997]: INFO : Stage: mount Jul 15 05:07:10.996820 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:10.996820 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:10.999107 ignition[997]: INFO : mount: mount passed Jul 15 05:07:10.999873 ignition[997]: INFO : Ignition finished successfully Jul 15 05:07:11.003439 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 05:07:11.006510 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 05:07:11.037710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:07:11.071349 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009) Jul 15 05:07:11.071406 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:07:11.071420 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:07:11.072381 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:07:11.077406 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 05:07:11.187023 ignition[1026]: INFO : Ignition 2.21.0 Jul 15 05:07:11.187023 ignition[1026]: INFO : Stage: files Jul 15 05:07:11.189272 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:11.189272 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:11.189272 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping Jul 15 05:07:11.193110 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 05:07:11.193110 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 05:07:11.193110 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 05:07:11.197530 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 05:07:11.197530 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 05:07:11.197530 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 05:07:11.197530 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 15 05:07:11.194267 unknown[1026]: wrote ssh authorized keys file for user: core Jul 15 05:07:11.271000 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 05:07:11.519238 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 05:07:11.519238 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 05:07:11.523477 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 15 05:07:11.814067 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 05:07:11.858063 systemd-networkd[851]: eth0: Gained IPv6LL Jul 15 05:07:12.077629 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 05:07:12.077629 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 05:07:12.082170 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 05:07:12.082170 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:07:12.082170 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:07:12.082170 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:07:12.082170 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:07:12.082170 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:07:12.082170 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:07:12.149099 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:07:12.151382 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:07:12.151382 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:07:12.195502 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:07:12.198438 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:07:12.198438 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 15 05:07:12.764540 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 05:07:13.505383 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 05:07:13.505383 ignition[1026]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 15 05:07:13.578880 ignition[1026]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:07:13.737627 ignition[1026]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:07:13.737627 ignition[1026]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 15 05:07:13.737627 ignition[1026]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 15 05:07:13.737627 ignition[1026]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 05:07:13.744659 ignition[1026]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 05:07:13.744659 ignition[1026]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 15 05:07:13.744659 ignition[1026]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 05:07:13.763140 ignition[1026]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 05:07:13.771465 ignition[1026]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 05:07:13.773238 ignition[1026]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 05:07:13.773238 ignition[1026]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 15 05:07:13.773238 ignition[1026]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 05:07:13.777249 ignition[1026]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" 
Jul 15 05:07:13.777249 ignition[1026]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:07:13.777249 ignition[1026]: INFO : files: files passed Jul 15 05:07:13.777249 ignition[1026]: INFO : Ignition finished successfully Jul 15 05:07:13.778717 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 05:07:13.782147 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 05:07:13.786945 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 05:07:13.798409 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 05:07:13.798550 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 05:07:13.800304 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory Jul 15 05:07:13.805524 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:07:13.807449 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:07:13.807449 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:07:13.811913 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:07:13.814064 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 05:07:13.817300 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 05:07:13.889146 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 05:07:13.889330 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 05:07:13.890397 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 05:07:13.892697 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 05:07:13.894727 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 05:07:13.896299 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 05:07:13.939546 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:07:13.944618 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 05:07:13.974110 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:07:13.974295 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:07:13.977553 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 05:07:13.979551 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 05:07:13.979688 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:07:13.982653 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 05:07:13.984888 systemd[1]: Stopped target basic.target - Basic System. Jul 15 05:07:13.986727 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 05:07:13.987647 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:07:13.988159 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 05:07:13.988471 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
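Editor's note: ops (10) through (12) above set unit presets, disabling coreos-metadata.service and enabling prepare-helm.service. As a sketch only, the same policy can be expressed in systemd.preset syntax; the preset file name used here is made up for the example, and this is not a claim about how Ignition records presets internally.

from pathlib import Path

# Illustrative only: a systemd preset file expressing the policy applied above.
PRESET = """\
enable prepare-helm.service
disable coreos-metadata.service
"""

def write_preset(path: str = "/etc/systemd/system-preset/20-ignition.preset"):
    # The file name is hypothetical; only the "enable NAME" / "disable NAME" line
    # format matters for systemd.preset files.
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    Path(path).write_text(PRESET)

if __name__ == "__main__":
    print(PRESET, end="")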
Jul 15 05:07:13.988802 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 05:07:13.989267 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:07:13.989596 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 05:07:13.990257 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 05:07:13.990561 systemd[1]: Stopped target swap.target - Swaps. Jul 15 05:07:13.991080 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 05:07:13.991257 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:07:14.008487 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:07:14.008637 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:07:14.009183 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 05:07:14.012986 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:07:14.013997 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 05:07:14.014150 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 05:07:14.018425 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 05:07:14.018551 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:07:14.019719 systemd[1]: Stopped target paths.target - Path Units. Jul 15 05:07:14.021725 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 05:07:14.026937 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:07:14.027189 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 05:07:14.031144 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 05:07:14.031620 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 05:07:14.031723 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:07:14.034182 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 05:07:14.034280 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:07:14.037168 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 05:07:14.037319 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:07:14.038235 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 05:07:14.038367 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 05:07:14.044037 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 05:07:14.045154 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 05:07:14.045292 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:07:14.048264 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 05:07:14.055140 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 05:07:14.055364 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:07:14.058550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 05:07:14.058671 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:07:14.066014 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 05:07:14.067174 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 15 05:07:14.086616 ignition[1081]: INFO : Ignition 2.21.0 Jul 15 05:07:14.086616 ignition[1081]: INFO : Stage: umount Jul 15 05:07:14.088610 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:14.088610 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:14.090916 ignition[1081]: INFO : umount: umount passed Jul 15 05:07:14.090916 ignition[1081]: INFO : Ignition finished successfully Jul 15 05:07:14.092274 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 05:07:14.094116 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 05:07:14.094242 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 05:07:14.097014 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 05:07:14.097164 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 05:07:14.099280 systemd[1]: Stopped target network.target - Network. Jul 15 05:07:14.099648 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 05:07:14.099727 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 05:07:14.100343 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 05:07:14.100398 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 05:07:14.100652 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 05:07:14.100707 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 05:07:14.101164 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 05:07:14.101214 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 05:07:14.101488 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 05:07:14.101550 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 05:07:14.102272 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 05:07:14.112539 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 05:07:14.124308 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 05:07:14.124501 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 05:07:14.130449 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 05:07:14.130756 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 05:07:14.130911 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 05:07:14.134713 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 05:07:14.135430 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 05:07:14.136159 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 05:07:14.136239 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:07:14.141741 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 05:07:14.141958 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 05:07:14.142025 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:07:14.144943 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 05:07:14.145001 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:07:14.148575 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 05:07:14.148632 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jul 15 05:07:14.149885 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 05:07:14.149953 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:07:14.154740 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:07:14.160496 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 05:07:14.160585 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:07:14.169638 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 05:07:14.169933 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 05:07:14.172580 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 05:07:14.172820 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:07:14.176279 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 05:07:14.176366 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 05:07:14.177358 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 05:07:14.177403 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:07:14.180944 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 05:07:14.181023 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:07:14.185388 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 05:07:14.185446 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 05:07:14.189487 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 05:07:14.189550 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:07:14.194369 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 05:07:14.195594 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 05:07:14.195656 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:07:14.199542 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 05:07:14.199602 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:07:14.203681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:07:14.203734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:07:14.208779 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 05:07:14.208847 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 05:07:14.208904 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:07:14.224952 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 05:07:14.225123 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 05:07:14.229480 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 05:07:14.232315 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 05:07:14.269565 systemd[1]: Switching root. Jul 15 05:07:14.311923 systemd-journald[220]: Journal stopped Jul 15 05:07:15.747138 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
Jul 15 05:07:15.747238 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 05:07:15.747255 kernel: SELinux: policy capability open_perms=1 Jul 15 05:07:15.747273 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 05:07:15.747293 kernel: SELinux: policy capability always_check_network=0 Jul 15 05:07:15.747307 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 05:07:15.747331 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 05:07:15.747343 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 05:07:15.747355 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 05:07:15.747367 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 05:07:15.747378 kernel: audit: type=1403 audit(1752556034.735:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 05:07:15.747398 systemd[1]: Successfully loaded SELinux policy in 65.106ms. Jul 15 05:07:15.747420 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.258ms. Jul 15 05:07:15.747438 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:07:15.747456 systemd[1]: Detected virtualization kvm. Jul 15 05:07:15.747472 systemd[1]: Detected architecture x86-64. Jul 15 05:07:15.747484 systemd[1]: Detected first boot. Jul 15 05:07:15.747497 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:07:15.747510 zram_generator::config[1127]: No configuration found. Jul 15 05:07:15.747524 kernel: Guest personality initialized and is inactive Jul 15 05:07:15.747536 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 15 05:07:15.747549 kernel: Initialized host personality Jul 15 05:07:15.747562 kernel: NET: Registered PF_VSOCK protocol family Jul 15 05:07:15.747581 systemd[1]: Populated /etc with preset unit settings. Jul 15 05:07:15.747599 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 05:07:15.747614 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 05:07:15.747632 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 05:07:15.747645 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 05:07:15.747658 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 05:07:15.747671 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 05:07:15.747683 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 05:07:15.747696 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 05:07:15.747716 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 05:07:15.747731 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 05:07:15.747749 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 05:07:15.747776 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 05:07:15.747791 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
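Editor's note: the systemd 256.8 banner above encodes compile-time features as a +/- list. A small sketch, using the feature string copied verbatim from that entry, splits it into enabled and disabled sets; the helper name is illustrative.

FEATURE_STRING = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
                  "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                  "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
                  "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                  "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

def split_features(s: str):
    """Split systemd's compile-time feature string into enabled and disabled sets."""
    enabled = {t[1:] for t in s.split() if t.startswith("+")}
    disabled = {t[1:] for t in s.split() if t.startswith("-")}
    return enabled, disabled

enabled, disabled = split_features(FEATURE_STRING)
print(f"{len(enabled)} enabled, {len(disabled)} disabled")
print("SELINUX enabled:", "SELINUX" in enabled)   # consistent with the policy load above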
Jul 15 05:07:15.747804 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:07:15.747817 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 05:07:15.747829 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 05:07:15.747861 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 05:07:15.747878 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:07:15.747895 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 05:07:15.747911 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:07:15.747923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:07:15.747936 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 05:07:15.747949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 05:07:15.747961 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 05:07:15.747976 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 05:07:15.748011 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:07:15.748030 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:07:15.748044 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:07:15.748056 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:07:15.748069 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 05:07:15.748083 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 05:07:15.748096 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 05:07:15.748109 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:07:15.748129 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:07:15.748145 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:07:15.748161 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 05:07:15.748175 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 05:07:15.748187 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 05:07:15.748200 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 05:07:15.748212 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:15.748225 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 05:07:15.748237 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 05:07:15.748254 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 05:07:15.748271 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 05:07:15.748286 systemd[1]: Reached target machines.target - Containers. Jul 15 05:07:15.748303 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 15 05:07:15.748316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:07:15.748328 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:07:15.748341 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 05:07:15.748353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:07:15.748369 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:07:15.748386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:07:15.748402 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 05:07:15.748419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:07:15.748434 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 05:07:15.748447 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 05:07:15.748459 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 05:07:15.748471 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 05:07:15.748484 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 05:07:15.748500 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:07:15.748513 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:07:15.748530 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:07:15.748544 kernel: loop: module loaded Jul 15 05:07:15.748561 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:07:15.748574 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 05:07:15.748587 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 05:07:15.748602 kernel: fuse: init (API version 7.41) Jul 15 05:07:15.748614 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:07:15.748626 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 05:07:15.748644 systemd[1]: Stopped verity-setup.service. Jul 15 05:07:15.748658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:15.748675 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 05:07:15.748696 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 05:07:15.748711 kernel: ACPI: bus type drm_connector registered Jul 15 05:07:15.748725 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 05:07:15.748738 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 05:07:15.748750 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 05:07:15.748780 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 05:07:15.748799 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:07:15.748816 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 15 05:07:15.748833 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 05:07:15.748858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:07:15.748872 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:07:15.748909 systemd-journald[1198]: Collecting audit messages is disabled. Jul 15 05:07:15.748933 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:07:15.748948 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:07:15.748963 systemd-journald[1198]: Journal started Jul 15 05:07:15.748993 systemd-journald[1198]: Runtime Journal (/run/log/journal/153d67eb5a2d4116a09231d61fd8a653) is 6M, max 48.6M, 42.5M free. Jul 15 05:07:15.458805 systemd[1]: Queued start job for default target multi-user.target. Jul 15 05:07:15.484544 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 15 05:07:15.485150 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 05:07:15.751794 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:07:15.753413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:07:15.753702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:07:15.758286 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 05:07:15.758586 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 05:07:15.760058 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:07:15.760402 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:07:15.761992 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:07:15.763566 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:07:15.765310 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 05:07:15.767263 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 05:07:15.781400 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:07:15.810087 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 05:07:15.813637 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 05:07:15.815071 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 05:07:15.815128 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:07:15.817688 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 05:07:15.829264 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 05:07:15.830981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:07:15.832790 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 05:07:15.835664 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 05:07:15.837322 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:07:15.839704 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jul 15 05:07:15.841389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:07:15.844932 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:07:15.850185 systemd-journald[1198]: Time spent on flushing to /var/log/journal/153d67eb5a2d4116a09231d61fd8a653 is 18.247ms for 978 entries. Jul 15 05:07:15.850185 systemd-journald[1198]: System Journal (/var/log/journal/153d67eb5a2d4116a09231d61fd8a653) is 8M, max 195.6M, 187.6M free. Jul 15 05:07:16.246107 systemd-journald[1198]: Received client request to flush runtime journal. Jul 15 05:07:16.246161 kernel: loop0: detected capacity change from 0 to 146488 Jul 15 05:07:16.246191 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 05:07:16.246208 kernel: loop1: detected capacity change from 0 to 224512 Jul 15 05:07:16.246359 kernel: loop2: detected capacity change from 0 to 114000 Jul 15 05:07:16.246381 kernel: loop3: detected capacity change from 0 to 146488 Jul 15 05:07:16.246398 kernel: loop4: detected capacity change from 0 to 224512 Jul 15 05:07:16.246415 kernel: loop5: detected capacity change from 0 to 114000 Jul 15 05:07:15.848022 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 05:07:15.852556 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:07:15.857318 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 05:07:15.859338 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 05:07:15.941475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:07:15.959178 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 05:07:15.963392 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 05:07:16.226215 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 05:07:16.229685 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 05:07:16.231547 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 05:07:16.235432 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 05:07:16.240921 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:07:16.250389 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 05:07:16.260316 (sd-merge)[1258]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 15 05:07:16.262691 (sd-merge)[1258]: Merged extensions into '/usr'. Jul 15 05:07:16.268404 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 05:07:16.268442 systemd[1]: Reloading... Jul 15 05:07:16.284320 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jul 15 05:07:16.284345 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jul 15 05:07:16.357795 zram_generator::config[1294]: No configuration found. Jul 15 05:07:16.537574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:07:16.603786 ldconfig[1240]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
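Editor's note: the sd-merge entries above report the sysext images 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' being merged into /usr. The sketch below lists merged extensions by scanning the usual systemd-sysext layout (/usr/lib/extension-release.d/extension-release.<NAME>); that layout is an assumption about the convention, not something the log states, and the function is illustrative.

from pathlib import Path

def list_merged_sysexts(usr: str = "/usr"):
    """List system extensions visible after a merge, assuming the usual
    /usr/lib/extension-release.d/extension-release.<NAME> layout."""
    reldir = Path(usr) / "lib" / "extension-release.d"
    if not reldir.is_dir():
        return []
    prefix = "extension-release."
    return sorted(p.name[len(prefix):] for p in reldir.iterdir()
                  if p.name.startswith(prefix))

if __name__ == "__main__":
    # On the system logged above this would be expected to include
    # containerd-flatcar, docker-flatcar and kubernetes.
    print(list_merged_sysexts())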
Jul 15 05:07:16.628317 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 05:07:16.629042 systemd[1]: Reloading finished in 360 ms. Jul 15 05:07:16.653456 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 05:07:16.655001 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 05:07:16.656802 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 05:07:16.658583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:07:16.676869 systemd[1]: Starting ensure-sysext.service... Jul 15 05:07:16.679239 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:07:16.698576 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)... Jul 15 05:07:16.698598 systemd[1]: Reloading... Jul 15 05:07:16.712020 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 05:07:16.712580 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 05:07:16.713152 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 05:07:16.713540 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 05:07:16.714790 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 05:07:16.715250 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Jul 15 05:07:16.715440 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Jul 15 05:07:16.721250 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:07:16.721269 systemd-tmpfiles[1336]: Skipping /boot Jul 15 05:07:16.733384 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:07:16.733404 systemd-tmpfiles[1336]: Skipping /boot Jul 15 05:07:16.778820 zram_generator::config[1372]: No configuration found. Jul 15 05:07:16.910639 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:07:17.023385 systemd[1]: Reloading finished in 324 ms. Jul 15 05:07:17.039676 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 05:07:17.071278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:07:17.083686 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:07:17.088712 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 05:07:17.091728 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 05:07:17.105639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:07:17.109161 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:07:17.113838 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 05:07:17.122855 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
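Editor's note: systemd-tmpfiles above warns about duplicate lines for /var/lib/nfs/sm, /root, /var/log/journal and /var/lib/systemd and ignores the repeats. A simplified sketch of how one might surface such duplicates across tmpfiles.d directories follows; real systemd-tmpfiles also applies override rules between directories, which this helper (an illustration, not a reimplementation) does not.

from collections import defaultdict
from pathlib import Path

def find_duplicate_tmpfiles_paths(dirs=("/usr/lib/tmpfiles.d", "/etc/tmpfiles.d")):
    """Collect tmpfiles.d paths declared more than once, the situation the
    'Duplicate line for path ..., ignoring' warnings above report."""
    seen = defaultdict(list)
    for d in dirs:
        base = Path(d)
        if not base.is_dir():
            continue
        for conf in sorted(base.glob("*.conf")):
            for lineno, raw in enumerate(conf.read_text().splitlines(), start=1):
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:   # format: type path mode user group age argument
                    seen[fields[1]].append((conf.name, lineno))
    return {path: places for path, places in seen.items() if len(places) > 1}

if __name__ == "__main__":
    for path, places in find_duplicate_tmpfiles_paths().items():
        print(path, "->", places)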
Jul 15 05:07:17.123105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:07:17.126718 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:07:17.131358 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:07:17.137099 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:07:17.140967 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:07:17.141129 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:07:17.148509 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 05:07:17.149849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:17.152513 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 05:07:17.155311 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:07:17.156285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:07:17.158541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:07:17.159243 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:07:17.161555 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:07:17.162265 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:07:17.177880 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:17.178418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:07:17.181151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:07:17.181456 augenrules[1435]: No rules Jul 15 05:07:17.184383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:07:17.187843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:07:17.189516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:07:17.189747 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:07:17.192280 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 05:07:17.193583 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:17.195797 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:07:17.196336 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:07:17.201071 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jul 15 05:07:17.207477 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:17.209274 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:07:17.210709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:07:17.213447 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:07:17.214843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:07:17.215006 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:07:17.215194 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:17.216415 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 05:07:17.221343 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 05:07:17.223590 systemd[1]: Finished ensure-sysext.service. Jul 15 05:07:17.227735 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 05:07:17.373644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:07:17.374564 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:07:17.377233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:07:17.378215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:07:17.380385 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:07:17.387235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:07:17.389170 systemd-udevd[1406]: Using default interface naming scheme 'v255'. Jul 15 05:07:17.392579 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:07:17.392940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:07:17.398256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:07:17.398406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:07:17.415167 augenrules[1446]: /sbin/augenrules: No change Jul 15 05:07:17.420356 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 05:07:17.429264 augenrules[1472]: No rules Jul 15 05:07:17.431482 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:07:17.432026 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:07:17.434029 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:07:17.442175 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:07:17.453070 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 05:07:17.577996 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jul 15 05:07:17.703470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 05:07:17.710002 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 05:07:17.722804 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 05:07:17.724746 systemd-networkd[1484]: lo: Link UP Jul 15 05:07:17.725713 systemd-networkd[1484]: lo: Gained carrier Jul 15 05:07:17.731900 systemd-networkd[1484]: Enumeration completed Jul 15 05:07:17.732845 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:07:17.737054 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:07:17.737284 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 05:07:17.740420 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:07:17.740714 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 05:07:17.743357 systemd-networkd[1484]: eth0: Link UP Jul 15 05:07:17.745960 systemd-networkd[1484]: eth0: Gained carrier Jul 15 05:07:17.746246 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:07:17.755843 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 15 05:07:17.754376 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 05:07:17.765623 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 05:07:17.766037 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 05:07:17.763430 systemd-resolved[1405]: Positive Trust Anchors: Jul 15 05:07:17.763451 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:07:17.763480 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:07:17.767116 systemd-networkd[1484]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 05:07:17.779292 systemd-resolved[1405]: Defaulting to hostname 'linux'. Jul 15 05:07:17.782607 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 05:07:17.784524 kernel: ACPI: button: Power Button [PWRF] Jul 15 05:07:17.786744 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 05:07:17.788326 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 05:07:17.790092 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:07:17.791688 systemd[1]: Reached target network.target - Network. Jul 15 05:07:18.269410 systemd-resolved[1405]: Clock change detected. Flushing caches. 
Jul 15 05:07:18.269524 systemd-timesyncd[1450]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 05:07:18.269763 systemd-timesyncd[1450]: Initial clock synchronization to Tue 2025-07-15 05:07:18.269369 UTC. Jul 15 05:07:18.270258 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:07:18.271562 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:07:18.272919 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 05:07:18.274470 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 05:07:18.277415 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 05:07:18.279150 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 05:07:18.282594 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 05:07:18.284181 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 05:07:18.285875 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 05:07:18.285911 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:07:18.288435 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:07:18.291289 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 05:07:18.295083 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 05:07:18.304267 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 05:07:18.308484 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 05:07:18.310105 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 05:07:18.326434 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 05:07:18.328095 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 05:07:18.330191 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 05:07:18.336262 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:07:18.337549 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:07:18.338775 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:07:18.338863 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:07:18.341498 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 05:07:18.344162 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 05:07:18.349029 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 05:07:18.386873 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 05:07:18.389885 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 05:07:18.391360 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 05:07:18.394706 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 05:07:18.397721 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
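Editor's note: the timesyncd entries at the start of this stretch explain why the journal timestamps jump from 05:07:17.79x to 05:07:18.26x: the clock was stepped at initial synchronization and resolved flushed its caches. A worked estimate of the step size, taken from the last pre-step entry and the synchronization target logged above (both treated as UTC), is below; it is approximate, since the log does not record the offset directly.

from datetime import datetime, timezone

# Last journal timestamp written before the step, and the time timesyncd stepped to.
before_step = datetime(2025, 7, 15, 5, 7, 17, 791688, tzinfo=timezone.utc)
synced_to = datetime(2025, 7, 15, 5, 7, 18, 269369, tzinfo=timezone.utc)

delta = synced_to - before_step
print(f"clock stepped forward by roughly {delta.total_seconds():.3f} s")
# -> roughly 0.478 s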
Jul 15 05:07:18.406376 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 05:07:18.409731 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 05:07:18.414956 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 05:07:18.419207 jq[1552]: false Jul 15 05:07:18.424868 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 05:07:18.435276 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing passwd entry cache Jul 15 05:07:18.437375 oslogin_cache_refresh[1554]: Refreshing passwd entry cache Jul 15 05:07:18.462641 extend-filesystems[1553]: Found /dev/vda6 Jul 15 05:07:18.464402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:07:18.465423 oslogin_cache_refresh[1554]: Failure getting users, quitting Jul 15 05:07:18.477744 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting users, quitting Jul 15 05:07:18.477744 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:07:18.477901 extend-filesystems[1553]: Found /dev/vda9 Jul 15 05:07:18.477901 extend-filesystems[1553]: Checking size of /dev/vda9 Jul 15 05:07:18.469417 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 05:07:18.465445 oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:07:18.470303 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 05:07:18.472941 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 05:07:18.486747 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing group entry cache Jul 15 05:07:18.486729 oslogin_cache_refresh[1554]: Refreshing group entry cache Jul 15 05:07:18.489821 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 05:07:18.492759 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting groups, quitting Jul 15 05:07:18.492759 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:07:18.492722 oslogin_cache_refresh[1554]: Failure getting groups, quitting Jul 15 05:07:18.492740 oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:07:18.500070 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 05:07:18.503510 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 05:07:18.509277 jq[1576]: true Jul 15 05:07:18.510702 extend-filesystems[1553]: Resized partition /dev/vda9 Jul 15 05:07:18.503868 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 05:07:18.514930 extend-filesystems[1581]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 05:07:18.516977 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 05:07:18.504519 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 05:07:18.504843 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 05:07:18.507287 systemd[1]: motdgen.service: Deactivated successfully. 
Jul 15 05:07:18.507614 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 05:07:18.528283 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 05:07:18.528771 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 05:07:18.561677 update_engine[1574]: I20250715 05:07:18.561558 1574 main.cc:92] Flatcar Update Engine starting Jul 15 05:07:18.565760 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 05:07:18.581118 kernel: kvm_amd: TSC scaling supported Jul 15 05:07:18.581209 kernel: kvm_amd: Nested Virtualization enabled Jul 15 05:07:18.581229 kernel: kvm_amd: Nested Paging enabled Jul 15 05:07:18.589430 kernel: kvm_amd: LBR virtualization supported Jul 15 05:07:18.589455 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 15 05:07:18.589469 kernel: kvm_amd: Virtual GIF supported Jul 15 05:07:18.590541 extend-filesystems[1581]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 05:07:18.590541 extend-filesystems[1581]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 05:07:18.590541 extend-filesystems[1581]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 05:07:18.601368 extend-filesystems[1553]: Resized filesystem in /dev/vda9 Jul 15 05:07:18.595193 (ntainerd)[1587]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 05:07:18.605679 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 05:07:18.605975 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 05:07:18.607219 jq[1586]: true Jul 15 05:07:18.613694 systemd-logind[1560]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 05:07:18.614028 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 05:07:18.614343 systemd-logind[1560]: New seat seat0. Jul 15 05:07:18.635398 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 05:07:18.648763 dbus-daemon[1550]: [system] SELinux support is enabled Jul 15 05:07:18.655433 update_engine[1574]: I20250715 05:07:18.655241 1574 update_check_scheduler.cc:74] Next update check in 11m9s Jul 15 05:07:18.740371 kernel: EDAC MC: Ver: 3.0.0 Jul 15 05:07:18.818419 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 05:07:18.886236 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 05:07:18.890686 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 05:07:18.892266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:07:18.905154 tar[1583]: linux-amd64/LICENSE Jul 15 05:07:18.905555 tar[1583]: linux-amd64/helm Jul 15 05:07:18.911767 dbus-daemon[1550]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 05:07:18.922911 systemd[1]: Started update-engine.service - Update Engine. Jul 15 05:07:18.930588 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 05:07:18.932668 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 05:07:18.933145 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 15 05:07:18.934891 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 05:07:18.935102 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 05:07:18.965280 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 05:07:18.980554 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 05:07:18.980823 bash[1635]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:07:18.983239 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 05:07:18.985264 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 05:07:18.988832 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 05:07:18.991629 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 05:07:19.091555 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 05:07:19.117661 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 05:07:19.120886 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 05:07:19.122456 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 05:07:19.127124 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 05:07:19.139358 containerd[1587]: time="2025-07-15T05:07:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 05:07:19.141672 containerd[1587]: time="2025-07-15T05:07:19.141618173Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 05:07:19.157013 containerd[1587]: time="2025-07-15T05:07:19.156958300Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.854µs" Jul 15 05:07:19.157013 containerd[1587]: time="2025-07-15T05:07:19.156991743Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 05:07:19.157013 containerd[1587]: time="2025-07-15T05:07:19.157012151Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 05:07:19.157501 containerd[1587]: time="2025-07-15T05:07:19.157217296Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 05:07:19.157501 containerd[1587]: time="2025-07-15T05:07:19.157238726Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 05:07:19.157501 containerd[1587]: time="2025-07-15T05:07:19.157270085Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:07:19.157501 containerd[1587]: time="2025-07-15T05:07:19.157366986Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:07:19.157501 containerd[1587]: time="2025-07-15T05:07:19.157379780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:07:19.157716 containerd[1587]: time="2025-07-15T05:07:19.157683991Z" level=info msg="skip loading plugin" 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:07:19.157716 containerd[1587]: time="2025-07-15T05:07:19.157703507Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:07:19.157716 containerd[1587]: time="2025-07-15T05:07:19.157714418Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:07:19.157792 containerd[1587]: time="2025-07-15T05:07:19.157722453Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 05:07:19.157984 containerd[1587]: time="2025-07-15T05:07:19.157952764Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 05:07:19.158250 containerd[1587]: time="2025-07-15T05:07:19.158223682Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:07:19.158281 containerd[1587]: time="2025-07-15T05:07:19.158263186Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:07:19.158281 containerd[1587]: time="2025-07-15T05:07:19.158273536Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 05:07:19.158388 containerd[1587]: time="2025-07-15T05:07:19.158315765Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 05:07:19.158629 containerd[1587]: time="2025-07-15T05:07:19.158608454Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 05:07:19.158704 containerd[1587]: time="2025-07-15T05:07:19.158687201Z" level=info msg="metadata content store policy set" policy=shared Jul 15 05:07:19.168314 containerd[1587]: time="2025-07-15T05:07:19.168242213Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 05:07:19.168471 containerd[1587]: time="2025-07-15T05:07:19.168377427Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 05:07:19.168471 containerd[1587]: time="2025-07-15T05:07:19.168442288Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 05:07:19.168471 containerd[1587]: time="2025-07-15T05:07:19.168462757Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 05:07:19.168534 containerd[1587]: time="2025-07-15T05:07:19.168480991Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 05:07:19.168534 containerd[1587]: time="2025-07-15T05:07:19.168496139Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 05:07:19.168534 containerd[1587]: time="2025-07-15T05:07:19.168516908Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 05:07:19.168608 containerd[1587]: time="2025-07-15T05:07:19.168534131Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 05:07:19.168608 containerd[1587]: time="2025-07-15T05:07:19.168549900Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 05:07:19.168608 containerd[1587]: time="2025-07-15T05:07:19.168562414Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 05:07:19.168608 containerd[1587]: time="2025-07-15T05:07:19.168571972Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 05:07:19.168608 containerd[1587]: time="2025-07-15T05:07:19.168584916Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 05:07:19.168810 containerd[1587]: time="2025-07-15T05:07:19.168777667Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 05:07:19.168842 containerd[1587]: time="2025-07-15T05:07:19.168815568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 05:07:19.168842 containerd[1587]: time="2025-07-15T05:07:19.168830476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 05:07:19.168879 containerd[1587]: time="2025-07-15T05:07:19.168843661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 05:07:19.168879 containerd[1587]: time="2025-07-15T05:07:19.168868507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 05:07:19.168915 containerd[1587]: time="2025-07-15T05:07:19.168881482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 05:07:19.168915 containerd[1587]: time="2025-07-15T05:07:19.168895298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 05:07:19.168915 containerd[1587]: time="2025-07-15T05:07:19.168908132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 05:07:19.168986 containerd[1587]: time="2025-07-15T05:07:19.168921286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 05:07:19.168986 containerd[1587]: time="2025-07-15T05:07:19.168933339Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 05:07:19.168986 containerd[1587]: time="2025-07-15T05:07:19.168954990Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 05:07:19.169065 containerd[1587]: time="2025-07-15T05:07:19.169044658Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 05:07:19.169090 containerd[1587]: time="2025-07-15T05:07:19.169071909Z" level=info msg="Start snapshots syncer" Jul 15 05:07:19.169231 containerd[1587]: time="2025-07-15T05:07:19.169110782Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 05:07:19.169492 containerd[1587]: time="2025-07-15T05:07:19.169425782Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169503969Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169611440Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169735232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169753977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169765900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169775368Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169792069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169802328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169814471Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169840179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 05:07:19.169954 containerd[1587]: 
time="2025-07-15T05:07:19.169850769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169861409Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169917193Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:07:19.169954 containerd[1587]: time="2025-07-15T05:07:19.169933965Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:07:19.170380 containerd[1587]: time="2025-07-15T05:07:19.169943192Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:07:19.170380 containerd[1587]: time="2025-07-15T05:07:19.169952439Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:07:19.170380 containerd[1587]: time="2025-07-15T05:07:19.169960615Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 05:07:19.170380 containerd[1587]: time="2025-07-15T05:07:19.169971425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 05:07:19.170380 containerd[1587]: time="2025-07-15T05:07:19.169982586Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 05:07:19.170380 containerd[1587]: time="2025-07-15T05:07:19.170001542Z" level=info msg="runtime interface created" Jul 15 05:07:19.170380 containerd[1587]: time="2025-07-15T05:07:19.170006751Z" level=info msg="created NRI interface" Jul 15 05:07:19.170380 containerd[1587]: time="2025-07-15T05:07:19.170025817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 05:07:19.170380 containerd[1587]: time="2025-07-15T05:07:19.170037439Z" level=info msg="Connect containerd service" Jul 15 05:07:19.170380 containerd[1587]: time="2025-07-15T05:07:19.170061815Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 05:07:19.171084 containerd[1587]: time="2025-07-15T05:07:19.171037223Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:07:19.500971 tar[1583]: linux-amd64/README.md Jul 15 05:07:19.594172 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 15 05:07:19.602175 containerd[1587]: time="2025-07-15T05:07:19.602076588Z" level=info msg="Start subscribing containerd event" Jul 15 05:07:19.602304 containerd[1587]: time="2025-07-15T05:07:19.602195761Z" level=info msg="Start recovering state" Jul 15 05:07:19.602535 containerd[1587]: time="2025-07-15T05:07:19.602496515Z" level=info msg="Start event monitor" Jul 15 05:07:19.602784 containerd[1587]: time="2025-07-15T05:07:19.602545176Z" level=info msg="Start cni network conf syncer for default" Jul 15 05:07:19.602784 containerd[1587]: time="2025-07-15T05:07:19.602583438Z" level=info msg="Start streaming server" Jul 15 05:07:19.602784 containerd[1587]: time="2025-07-15T05:07:19.602599308Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 05:07:19.602784 containerd[1587]: time="2025-07-15T05:07:19.602611230Z" level=info msg="runtime interface starting up..." Jul 15 05:07:19.602784 containerd[1587]: time="2025-07-15T05:07:19.602619375Z" level=info msg="starting plugins..." Jul 15 05:07:19.602784 containerd[1587]: time="2025-07-15T05:07:19.602493990Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 05:07:19.602893 containerd[1587]: time="2025-07-15T05:07:19.602641447Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 05:07:19.602924 containerd[1587]: time="2025-07-15T05:07:19.602892177Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 05:07:19.603204 containerd[1587]: time="2025-07-15T05:07:19.603176961Z" level=info msg="containerd successfully booted in 0.464511s" Jul 15 05:07:19.603496 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 05:07:19.759703 systemd-networkd[1484]: eth0: Gained IPv6LL Jul 15 05:07:19.763643 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 05:07:19.765544 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 05:07:19.768180 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 05:07:19.771257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:19.773620 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 05:07:19.849732 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 05:07:19.852733 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 15 05:07:19.853114 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 05:07:19.856202 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 05:07:20.716928 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 05:07:20.719426 systemd[1]: Started sshd@0-10.0.0.20:22-10.0.0.1:42910.service - OpenSSH per-connection server daemon (10.0.0.1:42910). Jul 15 05:07:20.807097 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 42910 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:20.809569 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:20.817856 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 05:07:20.820688 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 05:07:20.847069 systemd-logind[1560]: New session 1 of user core. Jul 15 05:07:20.878184 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jul 15 05:07:20.884553 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 05:07:20.909825 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 05:07:20.914079 systemd-logind[1560]: New session c1 of user core. Jul 15 05:07:21.127627 systemd[1694]: Queued start job for default target default.target. Jul 15 05:07:21.166944 systemd[1694]: Created slice app.slice - User Application Slice. Jul 15 05:07:21.166975 systemd[1694]: Reached target paths.target - Paths. Jul 15 05:07:21.167023 systemd[1694]: Reached target timers.target - Timers. Jul 15 05:07:21.168790 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 05:07:21.216215 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 05:07:21.216395 systemd[1694]: Reached target sockets.target - Sockets. Jul 15 05:07:21.216443 systemd[1694]: Reached target basic.target - Basic System. Jul 15 05:07:21.216484 systemd[1694]: Reached target default.target - Main User Target. Jul 15 05:07:21.216517 systemd[1694]: Startup finished in 277ms. Jul 15 05:07:21.217215 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 05:07:21.226530 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 05:07:21.292010 systemd[1]: Started sshd@1-10.0.0.20:22-10.0.0.1:42918.service - OpenSSH per-connection server daemon (10.0.0.1:42918). Jul 15 05:07:21.356014 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 42918 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:21.358231 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:21.363504 systemd-logind[1560]: New session 2 of user core. Jul 15 05:07:21.370501 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 05:07:21.471998 sshd[1708]: Connection closed by 10.0.0.1 port 42918 Jul 15 05:07:21.473977 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:21.481321 systemd[1]: sshd@1-10.0.0.20:22-10.0.0.1:42918.service: Deactivated successfully. Jul 15 05:07:21.483358 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 05:07:21.484093 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Jul 15 05:07:21.487002 systemd[1]: Started sshd@2-10.0.0.20:22-10.0.0.1:42920.service - OpenSSH per-connection server daemon (10.0.0.1:42920). Jul 15 05:07:21.489520 systemd-logind[1560]: Removed session 2. Jul 15 05:07:21.577571 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 42920 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:21.579307 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:21.584808 systemd-logind[1560]: New session 3 of user core. Jul 15 05:07:21.600544 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 05:07:21.603272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:21.606142 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 05:07:21.607647 systemd[1]: Startup finished in 3.425s (kernel) + 8.095s (initrd) + 6.457s (userspace) = 17.979s. 
Jul 15 05:07:21.630977 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:07:21.664621 sshd[1723]: Connection closed by 10.0.0.1 port 42920 Jul 15 05:07:21.664974 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:21.668316 systemd[1]: sshd@2-10.0.0.20:22-10.0.0.1:42920.service: Deactivated successfully. Jul 15 05:07:21.670612 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 05:07:21.672142 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Jul 15 05:07:21.673790 systemd-logind[1560]: Removed session 3. Jul 15 05:07:22.884928 kubelet[1721]: E0715 05:07:22.884830 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:07:22.889192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:07:22.889428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:07:22.889903 systemd[1]: kubelet.service: Consumed 2.518s CPU time, 266.6M memory peak. Jul 15 05:07:31.691464 systemd[1]: Started sshd@3-10.0.0.20:22-10.0.0.1:57628.service - OpenSSH per-connection server daemon (10.0.0.1:57628). Jul 15 05:07:31.752787 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 57628 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:31.754478 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:31.760061 systemd-logind[1560]: New session 4 of user core. Jul 15 05:07:31.770609 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 05:07:31.828036 sshd[1743]: Connection closed by 10.0.0.1 port 57628 Jul 15 05:07:31.828443 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:31.841465 systemd[1]: sshd@3-10.0.0.20:22-10.0.0.1:57628.service: Deactivated successfully. Jul 15 05:07:31.843438 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 05:07:31.844179 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Jul 15 05:07:31.847038 systemd[1]: Started sshd@4-10.0.0.20:22-10.0.0.1:57630.service - OpenSSH per-connection server daemon (10.0.0.1:57630). Jul 15 05:07:31.847571 systemd-logind[1560]: Removed session 4. Jul 15 05:07:31.912933 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 57630 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:31.914890 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:31.920054 systemd-logind[1560]: New session 5 of user core. Jul 15 05:07:31.929584 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 05:07:31.980239 sshd[1753]: Connection closed by 10.0.0.1 port 57630 Jul 15 05:07:31.980564 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:31.994839 systemd[1]: sshd@4-10.0.0.20:22-10.0.0.1:57630.service: Deactivated successfully. Jul 15 05:07:31.997280 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 05:07:31.998321 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. 
Jul 15 05:07:32.001099 systemd[1]: Started sshd@5-10.0.0.20:22-10.0.0.1:57640.service - OpenSSH per-connection server daemon (10.0.0.1:57640). Jul 15 05:07:32.001801 systemd-logind[1560]: Removed session 5. Jul 15 05:07:32.063199 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 57640 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:32.065130 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:32.070889 systemd-logind[1560]: New session 6 of user core. Jul 15 05:07:32.080587 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 05:07:32.137904 sshd[1762]: Connection closed by 10.0.0.1 port 57640 Jul 15 05:07:32.138381 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:32.153457 systemd[1]: sshd@5-10.0.0.20:22-10.0.0.1:57640.service: Deactivated successfully. Jul 15 05:07:32.156213 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 05:07:32.157264 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Jul 15 05:07:32.160701 systemd[1]: Started sshd@6-10.0.0.20:22-10.0.0.1:57654.service - OpenSSH per-connection server daemon (10.0.0.1:57654). Jul 15 05:07:32.161479 systemd-logind[1560]: Removed session 6. Jul 15 05:07:32.229827 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 57654 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:32.232170 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:32.238709 systemd-logind[1560]: New session 7 of user core. Jul 15 05:07:32.248605 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 05:07:32.313151 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 05:07:32.313559 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:07:32.333214 sudo[1772]: pam_unix(sudo:session): session closed for user root Jul 15 05:07:32.335847 sshd[1771]: Connection closed by 10.0.0.1 port 57654 Jul 15 05:07:32.336508 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:32.348870 systemd[1]: sshd@6-10.0.0.20:22-10.0.0.1:57654.service: Deactivated successfully. Jul 15 05:07:32.351768 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 05:07:32.354388 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Jul 15 05:07:32.356703 systemd[1]: Started sshd@7-10.0.0.20:22-10.0.0.1:57660.service - OpenSSH per-connection server daemon (10.0.0.1:57660). Jul 15 05:07:32.358091 systemd-logind[1560]: Removed session 7. Jul 15 05:07:32.424709 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 57660 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:32.427076 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:32.432983 systemd-logind[1560]: New session 8 of user core. Jul 15 05:07:32.450700 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 15 05:07:32.510638 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 05:07:32.511048 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:07:32.518776 sudo[1783]: pam_unix(sudo:session): session closed for user root Jul 15 05:07:32.527159 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 05:07:32.527582 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:07:32.540699 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:07:32.602150 augenrules[1805]: No rules Jul 15 05:07:32.604521 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:07:32.604857 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:07:32.606314 sudo[1782]: pam_unix(sudo:session): session closed for user root Jul 15 05:07:32.608587 sshd[1781]: Connection closed by 10.0.0.1 port 57660 Jul 15 05:07:32.609019 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:32.624979 systemd[1]: sshd@7-10.0.0.20:22-10.0.0.1:57660.service: Deactivated successfully. Jul 15 05:07:32.627380 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 05:07:32.628248 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Jul 15 05:07:32.631728 systemd[1]: Started sshd@8-10.0.0.20:22-10.0.0.1:57672.service - OpenSSH per-connection server daemon (10.0.0.1:57672). Jul 15 05:07:32.632357 systemd-logind[1560]: Removed session 8. Jul 15 05:07:32.693274 sshd[1814]: Accepted publickey for core from 10.0.0.1 port 57672 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:32.695881 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:32.701144 systemd-logind[1560]: New session 9 of user core. Jul 15 05:07:32.711752 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 05:07:32.769099 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 05:07:32.769511 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:07:33.140003 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 05:07:33.141935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:33.530958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:33.548291 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:07:33.549446 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 15 05:07:33.555045 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 05:07:33.656188 kubelet[1846]: E0715 05:07:33.656065 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:07:33.663600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:07:33.663814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:07:33.664236 systemd[1]: kubelet.service: Consumed 447ms CPU time, 110.8M memory peak. Jul 15 05:07:34.257583 dockerd[1847]: time="2025-07-15T05:07:34.257351885Z" level=info msg="Starting up" Jul 15 05:07:34.259184 dockerd[1847]: time="2025-07-15T05:07:34.259137823Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 05:07:34.296641 dockerd[1847]: time="2025-07-15T05:07:34.296555989Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 05:07:35.950947 dockerd[1847]: time="2025-07-15T05:07:35.950847414Z" level=info msg="Loading containers: start." Jul 15 05:07:35.984375 kernel: Initializing XFRM netlink socket Jul 15 05:07:36.331177 systemd-networkd[1484]: docker0: Link UP Jul 15 05:07:36.339462 dockerd[1847]: time="2025-07-15T05:07:36.339422804Z" level=info msg="Loading containers: done." Jul 15 05:07:36.356778 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck416221953-merged.mount: Deactivated successfully. Jul 15 05:07:36.360072 dockerd[1847]: time="2025-07-15T05:07:36.360028553Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 05:07:36.360147 dockerd[1847]: time="2025-07-15T05:07:36.360138188Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 05:07:36.360260 dockerd[1847]: time="2025-07-15T05:07:36.360238897Z" level=info msg="Initializing buildkit" Jul 15 05:07:36.395303 dockerd[1847]: time="2025-07-15T05:07:36.395269186Z" level=info msg="Completed buildkit initialization" Jul 15 05:07:36.399866 dockerd[1847]: time="2025-07-15T05:07:36.399822532Z" level=info msg="Daemon has completed initialization" Jul 15 05:07:36.400081 dockerd[1847]: time="2025-07-15T05:07:36.399986520Z" level=info msg="API listen on /run/docker.sock" Jul 15 05:07:36.400075 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 05:07:37.685026 containerd[1587]: time="2025-07-15T05:07:37.684966789Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 15 05:07:38.362621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2689551102.mount: Deactivated successfully. 
Jul 15 05:07:40.376929 containerd[1587]: time="2025-07-15T05:07:40.376832185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:40.381571 containerd[1587]: time="2025-07-15T05:07:40.381481441Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 15 05:07:40.385043 containerd[1587]: time="2025-07-15T05:07:40.384958980Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:40.389484 containerd[1587]: time="2025-07-15T05:07:40.389421555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:40.390654 containerd[1587]: time="2025-07-15T05:07:40.390575178Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.70555544s" Jul 15 05:07:40.390654 containerd[1587]: time="2025-07-15T05:07:40.390629199Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 15 05:07:40.391644 containerd[1587]: time="2025-07-15T05:07:40.391601452Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 15 05:07:42.419945 containerd[1587]: time="2025-07-15T05:07:42.419835216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:42.493274 containerd[1587]: time="2025-07-15T05:07:42.493116091Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 15 05:07:42.561698 containerd[1587]: time="2025-07-15T05:07:42.561598291Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:42.629124 containerd[1587]: time="2025-07-15T05:07:42.629031695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:42.630197 containerd[1587]: time="2025-07-15T05:07:42.630154790Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 2.238505268s" Jul 15 05:07:42.630197 containerd[1587]: time="2025-07-15T05:07:42.630193873Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 15 05:07:42.631083 
containerd[1587]: time="2025-07-15T05:07:42.630963706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 15 05:07:43.914572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 05:07:43.916953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:44.210718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:44.231824 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:07:44.284558 kubelet[2138]: E0715 05:07:44.284464 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:07:44.288681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:07:44.288903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:07:44.289386 systemd[1]: kubelet.service: Consumed 283ms CPU time, 110.6M memory peak. Jul 15 05:07:44.870662 containerd[1587]: time="2025-07-15T05:07:44.870568816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:44.871287 containerd[1587]: time="2025-07-15T05:07:44.871254131Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 15 05:07:44.872430 containerd[1587]: time="2025-07-15T05:07:44.872389589Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:44.875294 containerd[1587]: time="2025-07-15T05:07:44.875252316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:44.876975 containerd[1587]: time="2025-07-15T05:07:44.876830815Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 2.24581986s" Jul 15 05:07:44.876975 containerd[1587]: time="2025-07-15T05:07:44.876978552Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 15 05:07:44.877662 containerd[1587]: time="2025-07-15T05:07:44.877640693Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 15 05:07:46.357996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289266064.mount: Deactivated successfully. 
Jul 15 05:07:47.324449 containerd[1587]: time="2025-07-15T05:07:47.324378753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:47.326150 containerd[1587]: time="2025-07-15T05:07:47.326093127Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 15 05:07:47.327093 containerd[1587]: time="2025-07-15T05:07:47.327017540Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:47.329261 containerd[1587]: time="2025-07-15T05:07:47.329220019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:47.329962 containerd[1587]: time="2025-07-15T05:07:47.329913980Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.452241837s" Jul 15 05:07:47.329962 containerd[1587]: time="2025-07-15T05:07:47.329951801Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 15 05:07:47.330770 containerd[1587]: time="2025-07-15T05:07:47.330583876Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 05:07:47.834530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3207695343.mount: Deactivated successfully. 
Jul 15 05:07:49.223708 containerd[1587]: time="2025-07-15T05:07:49.223646582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:49.224696 containerd[1587]: time="2025-07-15T05:07:49.224654271Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 15 05:07:49.226220 containerd[1587]: time="2025-07-15T05:07:49.226171626Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:49.229181 containerd[1587]: time="2025-07-15T05:07:49.229132045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:49.230249 containerd[1587]: time="2025-07-15T05:07:49.230212060Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.899593539s" Jul 15 05:07:49.230249 containerd[1587]: time="2025-07-15T05:07:49.230243118Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 05:07:49.230887 containerd[1587]: time="2025-07-15T05:07:49.230857520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 05:07:49.743431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324627129.mount: Deactivated successfully. 
Jul 15 05:07:49.750988 containerd[1587]: time="2025-07-15T05:07:49.750909692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:07:49.751797 containerd[1587]: time="2025-07-15T05:07:49.751750078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 15 05:07:49.753400 containerd[1587]: time="2025-07-15T05:07:49.753365807Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:07:49.756171 containerd[1587]: time="2025-07-15T05:07:49.756094141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:07:49.756843 containerd[1587]: time="2025-07-15T05:07:49.756776490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 525.869448ms" Jul 15 05:07:49.756843 containerd[1587]: time="2025-07-15T05:07:49.756823819Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 05:07:49.757375 containerd[1587]: time="2025-07-15T05:07:49.757347120Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 15 05:07:52.042000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4009227254.mount: Deactivated successfully. Jul 15 05:07:54.539659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 15 05:07:54.541640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:54.915366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:54.920373 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:07:55.004907 kubelet[2274]: E0715 05:07:55.004689 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:07:55.008916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:07:55.009174 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:07:55.009679 systemd[1]: kubelet.service: Consumed 298ms CPU time, 110.6M memory peak. 
Jul 15 05:07:55.819055 containerd[1587]: time="2025-07-15T05:07:55.818966419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:55.820019 containerd[1587]: time="2025-07-15T05:07:55.819938657Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 15 05:07:55.821308 containerd[1587]: time="2025-07-15T05:07:55.821264462Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:55.824593 containerd[1587]: time="2025-07-15T05:07:55.824533648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:55.825785 containerd[1587]: time="2025-07-15T05:07:55.825727812Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.068347859s" Jul 15 05:07:55.825785 containerd[1587]: time="2025-07-15T05:07:55.825766435Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 15 05:07:58.283532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:58.283737 systemd[1]: kubelet.service: Consumed 298ms CPU time, 110.6M memory peak. Jul 15 05:07:58.286317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:58.316767 systemd[1]: Reload requested from client PID 2316 ('systemctl') (unit session-9.scope)... Jul 15 05:07:58.316790 systemd[1]: Reloading... Jul 15 05:07:58.426634 zram_generator::config[2360]: No configuration found. Jul 15 05:07:58.846521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:07:59.008751 systemd[1]: Reloading finished in 691 ms. Jul 15 05:07:59.078622 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:07:59.078765 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:07:59.079189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:59.079251 systemd[1]: kubelet.service: Consumed 176ms CPU time, 98.2M memory peak. Jul 15 05:07:59.081586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:59.337791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:59.368828 (kubelet)[2406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:07:59.414367 kubelet[2406]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:07:59.414846 kubelet[2406]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 15 05:07:59.414846 kubelet[2406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:07:59.414846 kubelet[2406]: I0715 05:07:59.414478 2406 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:07:59.715608 kubelet[2406]: I0715 05:07:59.715470 2406 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 05:07:59.715608 kubelet[2406]: I0715 05:07:59.715502 2406 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:07:59.715845 kubelet[2406]: I0715 05:07:59.715816 2406 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 05:07:59.753597 kubelet[2406]: E0715 05:07:59.753535 2406 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:07:59.754579 kubelet[2406]: I0715 05:07:59.754512 2406 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:07:59.762045 kubelet[2406]: I0715 05:07:59.762012 2406 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:07:59.771118 kubelet[2406]: I0715 05:07:59.771054 2406 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:07:59.771479 kubelet[2406]: I0715 05:07:59.771407 2406 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:07:59.771789 kubelet[2406]: I0715 05:07:59.771462 2406 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:07:59.771915 kubelet[2406]: I0715 05:07:59.771791 2406 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:07:59.771915 kubelet[2406]: I0715 05:07:59.771805 2406 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 05:07:59.772054 kubelet[2406]: I0715 05:07:59.772018 2406 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:07:59.776472 kubelet[2406]: I0715 05:07:59.776415 2406 kubelet.go:446] "Attempting to sync node with API server" Jul 15 05:07:59.780387 kubelet[2406]: I0715 05:07:59.780349 2406 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:07:59.780435 kubelet[2406]: I0715 05:07:59.780409 2406 kubelet.go:352] "Adding apiserver pod source" Jul 15 05:07:59.780435 kubelet[2406]: I0715 05:07:59.780431 2406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:07:59.784772 kubelet[2406]: I0715 05:07:59.784081 2406 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:07:59.784772 kubelet[2406]: I0715 05:07:59.784598 2406 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:07:59.784772 kubelet[2406]: W0715 05:07:59.784661 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Jul 15 05:07:59.784772 kubelet[2406]: E0715 05:07:59.784733 2406 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:07:59.785357 kubelet[2406]: W0715 05:07:59.785286 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Jul 15 05:07:59.785357 kubelet[2406]: E0715 05:07:59.785350 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:07:59.786350 kubelet[2406]: W0715 05:07:59.786224 2406 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 05:07:59.789357 kubelet[2406]: I0715 05:07:59.789271 2406 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:07:59.789357 kubelet[2406]: I0715 05:07:59.789356 2406 server.go:1287] "Started kubelet" Jul 15 05:07:59.789750 kubelet[2406]: I0715 05:07:59.789685 2406 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:07:59.790126 kubelet[2406]: I0715 05:07:59.790040 2406 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:07:59.790183 kubelet[2406]: I0715 05:07:59.790164 2406 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:07:59.792147 kubelet[2406]: I0715 05:07:59.792045 2406 server.go:479] "Adding debug handlers to kubelet server" Jul 15 05:07:59.798127 kubelet[2406]: I0715 05:07:59.798099 2406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:07:59.798963 kubelet[2406]: I0715 05:07:59.798552 2406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:07:59.798963 kubelet[2406]: E0715 05:07:59.798936 2406 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:07:59.799055 kubelet[2406]: E0715 05:07:59.797393 2406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852547818cadf7f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 05:07:59.789301631 +0000 UTC m=+0.415851918,LastTimestamp:2025-07-15 05:07:59.789301631 +0000 UTC m=+0.415851918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 05:07:59.799514 kubelet[2406]: E0715 05:07:59.799399 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:59.799514 kubelet[2406]: I0715 05:07:59.799467 2406 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:07:59.799640 kubelet[2406]: I0715 05:07:59.799608 2406 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:07:59.799770 kubelet[2406]: I0715 05:07:59.799742 2406 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:07:59.800355 kubelet[2406]: I0715 05:07:59.800306 2406 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:07:59.800423 kubelet[2406]: I0715 05:07:59.800412 2406 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:07:59.800600 kubelet[2406]: W0715 05:07:59.800517 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Jul 15 05:07:59.800661 kubelet[2406]: E0715 05:07:59.800615 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:07:59.801437 kubelet[2406]: E0715 05:07:59.800914 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" Jul 15 05:07:59.801985 kubelet[2406]: I0715 05:07:59.801958 2406 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:08:00.073652 kubelet[2406]: E0715 05:08:00.073604 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:00.076288 kubelet[2406]: E0715 05:08:00.076211 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" 
interval="400ms" Jul 15 05:08:00.088084 kubelet[2406]: I0715 05:08:00.087955 2406 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:08:00.088084 kubelet[2406]: I0715 05:08:00.088064 2406 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:08:00.088084 kubelet[2406]: I0715 05:08:00.088093 2406 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:08:00.096870 kubelet[2406]: I0715 05:08:00.096796 2406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:08:00.098792 kubelet[2406]: I0715 05:08:00.098748 2406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 05:08:00.098870 kubelet[2406]: I0715 05:08:00.098803 2406 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 05:08:00.098870 kubelet[2406]: I0715 05:08:00.098841 2406 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 05:08:00.098870 kubelet[2406]: I0715 05:08:00.098853 2406 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 05:08:00.098972 kubelet[2406]: E0715 05:08:00.098918 2406 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:08:00.100501 kubelet[2406]: W0715 05:08:00.100471 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Jul 15 05:08:00.100661 kubelet[2406]: E0715 05:08:00.100627 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:08:00.174515 kubelet[2406]: E0715 05:08:00.174431 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:00.199800 kubelet[2406]: E0715 05:08:00.199711 2406 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:08:00.274714 kubelet[2406]: E0715 05:08:00.274636 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:00.375119 kubelet[2406]: E0715 05:08:00.374914 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:00.400504 kubelet[2406]: E0715 05:08:00.400416 2406 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:08:00.475817 kubelet[2406]: E0715 05:08:00.475738 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:00.477283 kubelet[2406]: E0715 05:08:00.477247 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" Jul 15 05:08:00.494874 kubelet[2406]: E0715 05:08:00.494756 2406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852547818cadf7f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 05:07:59.789301631 +0000 UTC m=+0.415851918,LastTimestamp:2025-07-15 05:07:59.789301631 +0000 UTC m=+0.415851918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 05:08:00.576385 kubelet[2406]: E0715 05:08:00.576318 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:00.677184 kubelet[2406]: E0715 05:08:00.677012 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:00.778246 kubelet[2406]: E0715 05:08:00.778143 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:00.800820 kubelet[2406]: E0715 05:08:00.800697 2406 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:08:00.879092 kubelet[2406]: E0715 05:08:00.879006 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:00.942721 kubelet[2406]: I0715 05:08:00.942542 2406 policy_none.go:49] "None policy: Start" Jul 15 05:08:00.942721 kubelet[2406]: I0715 05:08:00.942613 2406 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:08:00.942721 kubelet[2406]: I0715 05:08:00.942650 2406 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:08:00.953801 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 05:08:00.965694 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:08:00.969622 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 15 05:08:00.979743 kubelet[2406]: E0715 05:08:00.979673 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:00.990000 kubelet[2406]: I0715 05:08:00.989851 2406 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:08:00.990154 kubelet[2406]: I0715 05:08:00.990123 2406 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:08:00.990197 kubelet[2406]: I0715 05:08:00.990156 2406 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:08:00.990271 kubelet[2406]: W0715 05:08:00.990199 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Jul 15 05:08:00.990371 kubelet[2406]: E0715 05:08:00.990278 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:08:00.990979 kubelet[2406]: I0715 05:08:00.990449 2406 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:08:00.991318 kubelet[2406]: E0715 05:08:00.991300 2406 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 05:08:00.991451 kubelet[2406]: E0715 05:08:00.991363 2406 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 05:08:01.006803 kubelet[2406]: W0715 05:08:01.006705 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Jul 15 05:08:01.006803 kubelet[2406]: E0715 05:08:01.006797 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:08:01.035049 kubelet[2406]: W0715 05:08:01.034988 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Jul 15 05:08:01.035049 kubelet[2406]: E0715 05:08:01.035058 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:08:01.093784 kubelet[2406]: I0715 05:08:01.092143 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:08:01.093784 kubelet[2406]: E0715 05:08:01.092716 2406 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Jul 15 05:08:01.278429 kubelet[2406]: E0715 05:08:01.278238 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="1.6s" Jul 15 05:08:01.294289 kubelet[2406]: I0715 05:08:01.294215 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:08:01.294685 kubelet[2406]: E0715 05:08:01.294632 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Jul 15 05:08:01.358703 kubelet[2406]: W0715 05:08:01.358632 2406 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Jul 15 05:08:01.358703 kubelet[2406]: E0715 05:08:01.358692 2406 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:08:01.611674 systemd[1]: Created slice kubepods-burstable-pod52b009fa7edf0a535e402254437cd652.slice - libcontainer container kubepods-burstable-pod52b009fa7edf0a535e402254437cd652.slice. Jul 15 05:08:01.640591 kubelet[2406]: E0715 05:08:01.640522 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:08:01.643040 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 15 05:08:01.653144 kubelet[2406]: E0715 05:08:01.653111 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:08:01.656488 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
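The lease controller's retry interval can be seen doubling across these "Failed to ensure lease exists, will retry" entries: 200ms, then 400ms, 800ms, and now 1.6s, while the API server at 10.0.0.20:6443 keeps refusing connections. A minimal sketch of that doubling backoff; the cap below is illustrative, since the log does not show where the real controller stops growing the interval:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling retry interval, as reflected by the interval="..." values above.
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // illustrative cap, not taken from the log

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: next retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```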
Jul 15 05:08:01.658694 kubelet[2406]: E0715 05:08:01.658654 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:08:01.696539 kubelet[2406]: I0715 05:08:01.696488 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:08:01.697015 kubelet[2406]: E0715 05:08:01.696964 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Jul 15 05:08:01.786523 kubelet[2406]: I0715 05:08:01.786445 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52b009fa7edf0a535e402254437cd652-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"52b009fa7edf0a535e402254437cd652\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:01.786523 kubelet[2406]: I0715 05:08:01.786490 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.786523 kubelet[2406]: I0715 05:08:01.786511 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.786523 kubelet[2406]: I0715 05:08:01.786533 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:01.786523 kubelet[2406]: I0715 05:08:01.786553 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52b009fa7edf0a535e402254437cd652-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"52b009fa7edf0a535e402254437cd652\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:01.786881 kubelet[2406]: I0715 05:08:01.786578 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52b009fa7edf0a535e402254437cd652-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"52b009fa7edf0a535e402254437cd652\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:01.786881 kubelet[2406]: I0715 05:08:01.786594 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.786881 kubelet[2406]: I0715 05:08:01.786636 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.786881 kubelet[2406]: I0715 05:08:01.786689 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.883363 kubelet[2406]: E0715 05:08:01.883158 2406 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:08:01.941838 kubelet[2406]: E0715 05:08:01.941673 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:01.942969 containerd[1587]: time="2025-07-15T05:08:01.942857006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:52b009fa7edf0a535e402254437cd652,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:01.954238 kubelet[2406]: E0715 05:08:01.954140 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:01.954853 containerd[1587]: time="2025-07-15T05:08:01.954796276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:01.959216 kubelet[2406]: E0715 05:08:01.959134 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:01.959801 containerd[1587]: time="2025-07-15T05:08:01.959735437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:02.013393 containerd[1587]: time="2025-07-15T05:08:02.013309618Z" level=info msg="connecting to shim ecadd1e14c433c400fd482fd73849c780fa69b8ff68848ca4b3bccbebc355012" address="unix:///run/containerd/s/362018b168d67fa239fa6efa68b3d6bc96990e0b1cdacbe6ed78a91c8f8fc5a3" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:02.017050 containerd[1587]: time="2025-07-15T05:08:02.016338734Z" level=info msg="connecting to shim 9321a6f8d4975c12d086b65be2969755e12456ce9f996bd62d8f2a1b011c4c07" address="unix:///run/containerd/s/4f059e5875ab045a48f88a3bda3b0aba9c451a92105e88cd5b88518dec70b2a8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:02.017448 containerd[1587]: time="2025-07-15T05:08:02.017394204Z" level=info msg="connecting to shim 2263658e3cc80b5cf98cb4365967630dbe139e575dba225e8f8f68030b784c45" address="unix:///run/containerd/s/b1bee4b9bcce227a0999449e9adba5f98c82182676600516ab4a3ebe62cab471" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:02.057592 systemd[1]: Started 
cri-containerd-ecadd1e14c433c400fd482fd73849c780fa69b8ff68848ca4b3bccbebc355012.scope - libcontainer container ecadd1e14c433c400fd482fd73849c780fa69b8ff68848ca4b3bccbebc355012. Jul 15 05:08:02.063936 systemd[1]: Started cri-containerd-2263658e3cc80b5cf98cb4365967630dbe139e575dba225e8f8f68030b784c45.scope - libcontainer container 2263658e3cc80b5cf98cb4365967630dbe139e575dba225e8f8f68030b784c45. Jul 15 05:08:02.069991 systemd[1]: Started cri-containerd-9321a6f8d4975c12d086b65be2969755e12456ce9f996bd62d8f2a1b011c4c07.scope - libcontainer container 9321a6f8d4975c12d086b65be2969755e12456ce9f996bd62d8f2a1b011c4c07. Jul 15 05:08:02.139078 containerd[1587]: time="2025-07-15T05:08:02.138787869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:52b009fa7edf0a535e402254437cd652,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecadd1e14c433c400fd482fd73849c780fa69b8ff68848ca4b3bccbebc355012\"" Jul 15 05:08:02.142295 kubelet[2406]: E0715 05:08:02.142186 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:02.152498 containerd[1587]: time="2025-07-15T05:08:02.152383648Z" level=info msg="CreateContainer within sandbox \"ecadd1e14c433c400fd482fd73849c780fa69b8ff68848ca4b3bccbebc355012\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 05:08:02.163346 containerd[1587]: time="2025-07-15T05:08:02.163284207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"9321a6f8d4975c12d086b65be2969755e12456ce9f996bd62d8f2a1b011c4c07\"" Jul 15 05:08:02.164464 kubelet[2406]: E0715 05:08:02.164435 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:02.166459 containerd[1587]: time="2025-07-15T05:08:02.166435567Z" level=info msg="CreateContainer within sandbox \"9321a6f8d4975c12d086b65be2969755e12456ce9f996bd62d8f2a1b011c4c07\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 05:08:02.168518 containerd[1587]: time="2025-07-15T05:08:02.168484116Z" level=info msg="Container 2933fa0047698dc93cd1e5cd600f0e5393e8c18b089b7c7df2a59524bcc35fdc: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:02.169039 containerd[1587]: time="2025-07-15T05:08:02.168995690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2263658e3cc80b5cf98cb4365967630dbe139e575dba225e8f8f68030b784c45\"" Jul 15 05:08:02.170052 kubelet[2406]: E0715 05:08:02.170028 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:02.171724 containerd[1587]: time="2025-07-15T05:08:02.171698715Z" level=info msg="CreateContainer within sandbox \"2263658e3cc80b5cf98cb4365967630dbe139e575dba225e8f8f68030b784c45\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 05:08:02.176562 containerd[1587]: time="2025-07-15T05:08:02.176516465Z" level=info msg="Container bfcd330ea38788ede811923c6bfe140c0a8b5ccced1fdf63e91e426ef49fd78e: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:02.178282 
containerd[1587]: time="2025-07-15T05:08:02.178246268Z" level=info msg="CreateContainer within sandbox \"ecadd1e14c433c400fd482fd73849c780fa69b8ff68848ca4b3bccbebc355012\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2933fa0047698dc93cd1e5cd600f0e5393e8c18b089b7c7df2a59524bcc35fdc\"" Jul 15 05:08:02.178849 containerd[1587]: time="2025-07-15T05:08:02.178816203Z" level=info msg="StartContainer for \"2933fa0047698dc93cd1e5cd600f0e5393e8c18b089b7c7df2a59524bcc35fdc\"" Jul 15 05:08:02.180679 containerd[1587]: time="2025-07-15T05:08:02.180657077Z" level=info msg="connecting to shim 2933fa0047698dc93cd1e5cd600f0e5393e8c18b089b7c7df2a59524bcc35fdc" address="unix:///run/containerd/s/362018b168d67fa239fa6efa68b3d6bc96990e0b1cdacbe6ed78a91c8f8fc5a3" protocol=ttrpc version=3 Jul 15 05:08:02.187864 containerd[1587]: time="2025-07-15T05:08:02.187783593Z" level=info msg="CreateContainer within sandbox \"9321a6f8d4975c12d086b65be2969755e12456ce9f996bd62d8f2a1b011c4c07\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bfcd330ea38788ede811923c6bfe140c0a8b5ccced1fdf63e91e426ef49fd78e\"" Jul 15 05:08:02.188507 containerd[1587]: time="2025-07-15T05:08:02.188479527Z" level=info msg="StartContainer for \"bfcd330ea38788ede811923c6bfe140c0a8b5ccced1fdf63e91e426ef49fd78e\"" Jul 15 05:08:02.189829 containerd[1587]: time="2025-07-15T05:08:02.189795702Z" level=info msg="connecting to shim bfcd330ea38788ede811923c6bfe140c0a8b5ccced1fdf63e91e426ef49fd78e" address="unix:///run/containerd/s/4f059e5875ab045a48f88a3bda3b0aba9c451a92105e88cd5b88518dec70b2a8" protocol=ttrpc version=3 Jul 15 05:08:02.192301 containerd[1587]: time="2025-07-15T05:08:02.191484387Z" level=info msg="Container 8e103bf249f877c57ccf8c3f932b306066be4f30097af82c89947f2f8a4f42c3: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:02.199535 systemd[1]: Started cri-containerd-2933fa0047698dc93cd1e5cd600f0e5393e8c18b089b7c7df2a59524bcc35fdc.scope - libcontainer container 2933fa0047698dc93cd1e5cd600f0e5393e8c18b089b7c7df2a59524bcc35fdc. Jul 15 05:08:02.204183 containerd[1587]: time="2025-07-15T05:08:02.204129668Z" level=info msg="CreateContainer within sandbox \"2263658e3cc80b5cf98cb4365967630dbe139e575dba225e8f8f68030b784c45\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8e103bf249f877c57ccf8c3f932b306066be4f30097af82c89947f2f8a4f42c3\"" Jul 15 05:08:02.204971 containerd[1587]: time="2025-07-15T05:08:02.204927777Z" level=info msg="StartContainer for \"8e103bf249f877c57ccf8c3f932b306066be4f30097af82c89947f2f8a4f42c3\"" Jul 15 05:08:02.207987 containerd[1587]: time="2025-07-15T05:08:02.207911667Z" level=info msg="connecting to shim 8e103bf249f877c57ccf8c3f932b306066be4f30097af82c89947f2f8a4f42c3" address="unix:///run/containerd/s/b1bee4b9bcce227a0999449e9adba5f98c82182676600516ab4a3ebe62cab471" protocol=ttrpc version=3 Jul 15 05:08:02.218679 systemd[1]: Started cri-containerd-bfcd330ea38788ede811923c6bfe140c0a8b5ccced1fdf63e91e426ef49fd78e.scope - libcontainer container bfcd330ea38788ede811923c6bfe140c0a8b5ccced1fdf63e91e426ef49fd78e. Jul 15 05:08:02.260531 systemd[1]: Started cri-containerd-8e103bf249f877c57ccf8c3f932b306066be4f30097af82c89947f2f8a4f42c3.scope - libcontainer container 8e103bf249f877c57ccf8c3f932b306066be4f30097af82c89947f2f8a4f42c3. 
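The "connecting to shim" entries let you tie each container back to its pod sandbox through the shared shim socket: the kube-apiserver container (2933fa00…) above reuses the /run/containerd/s/362018b1… socket that the ecadd1e1… sandbox was created on, and systemd then starts the matching cri-containerd-<id>.scope unit. A quick, illustrative way to pull those two fields out of such a line (the parsing code is mine, not containerd's):

```go
package main

import (
	"fmt"
	"regexp"
)

// A line copied from the journal above.
const line = `time="2025-07-15T05:08:02.180657077Z" level=info msg="connecting to shim 2933fa0047698dc93cd1e5cd600f0e5393e8c18b089b7c7df2a59524bcc35fdc" address="unix:///run/containerd/s/362018b168d67fa239fa6efa68b3d6bc96990e0b1cdacbe6ed78a91c8f8fc5a3" protocol=ttrpc version=3`

var shimRe = regexp.MustCompile(`msg="connecting to shim ([0-9a-f]+)" address="([^"]+)"`)

func main() {
	m := shimRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no shim connection found in line")
		return
	}
	fmt.Printf("container:   %s\n", m[1])
	fmt.Printf("shim socket: %s\n", m[2])
	// The transient unit started next in the journal is named
	// "cri-containerd-" + containerID + ".scope".
}
```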
Jul 15 05:08:02.291173 containerd[1587]: time="2025-07-15T05:08:02.291094808Z" level=info msg="StartContainer for \"2933fa0047698dc93cd1e5cd600f0e5393e8c18b089b7c7df2a59524bcc35fdc\" returns successfully" Jul 15 05:08:02.326836 containerd[1587]: time="2025-07-15T05:08:02.326781899Z" level=info msg="StartContainer for \"bfcd330ea38788ede811923c6bfe140c0a8b5ccced1fdf63e91e426ef49fd78e\" returns successfully" Jul 15 05:08:02.347083 containerd[1587]: time="2025-07-15T05:08:02.346809021Z" level=info msg="StartContainer for \"8e103bf249f877c57ccf8c3f932b306066be4f30097af82c89947f2f8a4f42c3\" returns successfully" Jul 15 05:08:02.500031 kubelet[2406]: I0715 05:08:02.499360 2406 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:08:03.114743 kubelet[2406]: E0715 05:08:03.114684 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:08:03.115279 kubelet[2406]: E0715 05:08:03.114853 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:03.120166 kubelet[2406]: E0715 05:08:03.120118 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:08:03.120469 kubelet[2406]: E0715 05:08:03.120443 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:03.128866 kubelet[2406]: E0715 05:08:03.128817 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:08:03.128999 kubelet[2406]: E0715 05:08:03.128973 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:03.629891 update_engine[1574]: I20250715 05:08:03.629742 1574 update_attempter.cc:509] Updating boot flags... 
Jul 15 05:08:04.152395 kubelet[2406]: E0715 05:08:04.152318 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:08:04.152996 kubelet[2406]: E0715 05:08:04.152530 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:04.152996 kubelet[2406]: E0715 05:08:04.152838 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:08:04.156356 kubelet[2406]: E0715 05:08:04.156276 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:04.176917 kubelet[2406]: E0715 05:08:04.176874 2406 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 05:08:04.236091 kubelet[2406]: E0715 05:08:04.236051 2406 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:08:04.236371 kubelet[2406]: E0715 05:08:04.236224 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:04.270709 kubelet[2406]: I0715 05:08:04.270648 2406 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 05:08:04.301214 kubelet[2406]: I0715 05:08:04.301136 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:04.673265 kubelet[2406]: E0715 05:08:04.673213 2406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:04.673265 kubelet[2406]: I0715 05:08:04.673251 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:04.675150 kubelet[2406]: E0715 05:08:04.675122 2406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:04.675150 kubelet[2406]: I0715 05:08:04.675143 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:04.676628 kubelet[2406]: E0715 05:08:04.676599 2406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:04.790749 kubelet[2406]: I0715 05:08:04.790694 2406 apiserver.go:52] "Watching apiserver" Jul 15 05:08:04.800025 kubelet[2406]: I0715 05:08:04.799959 2406 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:08:07.325599 kubelet[2406]: I0715 05:08:07.325536 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:07.361845 kubelet[2406]: E0715 05:08:07.361804 2406 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:08.151476 kubelet[2406]: E0715 05:08:08.151436 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:09.396073 kubelet[2406]: I0715 05:08:09.396026 2406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:09.404065 kubelet[2406]: E0715 05:08:09.404022 2406 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:09.506596 systemd[1]: Reload requested from client PID 2698 ('systemctl') (unit session-9.scope)... Jul 15 05:08:09.506617 systemd[1]: Reloading... Jul 15 05:08:09.604394 zram_generator::config[2744]: No configuration found. Jul 15 05:08:09.712733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:08:09.871901 systemd[1]: Reloading finished in 364 ms. Jul 15 05:08:09.908869 kubelet[2406]: I0715 05:08:09.908779 2406 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:08:09.909100 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:08:09.932124 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 05:08:09.932513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:08:09.932592 systemd[1]: kubelet.service: Consumed 2.281s CPU time, 136M memory peak. Jul 15 05:08:09.934931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:08:10.193987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:08:10.212810 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:08:10.255037 kubelet[2786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:08:10.255037 kubelet[2786]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 05:08:10.255037 kubelet[2786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 05:08:10.255621 kubelet[2786]: I0715 05:08:10.255136 2786 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:08:10.263586 kubelet[2786]: I0715 05:08:10.263504 2786 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 05:08:10.263586 kubelet[2786]: I0715 05:08:10.263561 2786 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:08:10.264029 kubelet[2786]: I0715 05:08:10.263997 2786 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 05:08:10.265564 kubelet[2786]: I0715 05:08:10.265408 2786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 05:08:10.269104 kubelet[2786]: I0715 05:08:10.269015 2786 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:08:10.277639 kubelet[2786]: I0715 05:08:10.277593 2786 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:08:10.283305 kubelet[2786]: I0715 05:08:10.283232 2786 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 05:08:10.283609 kubelet[2786]: I0715 05:08:10.283559 2786 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:08:10.283831 kubelet[2786]: I0715 05:08:10.283601 2786 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:08:10.283943 kubelet[2786]: I0715 05:08:10.283837 2786 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:08:10.283943 kubelet[2786]: I0715 05:08:10.283848 2786 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 05:08:10.283943 kubelet[2786]: I0715 05:08:10.283919 2786 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:08:10.284164 kubelet[2786]: I0715 
05:08:10.284135 2786 kubelet.go:446] "Attempting to sync node with API server" Jul 15 05:08:10.284210 kubelet[2786]: I0715 05:08:10.284173 2786 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:08:10.284210 kubelet[2786]: I0715 05:08:10.284204 2786 kubelet.go:352] "Adding apiserver pod source" Jul 15 05:08:10.284264 kubelet[2786]: I0715 05:08:10.284218 2786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:08:10.285579 kubelet[2786]: I0715 05:08:10.285552 2786 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:08:10.286015 kubelet[2786]: I0715 05:08:10.285988 2786 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:08:10.286791 kubelet[2786]: I0715 05:08:10.286749 2786 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:08:10.286791 kubelet[2786]: I0715 05:08:10.286794 2786 server.go:1287] "Started kubelet" Jul 15 05:08:10.289158 kubelet[2786]: I0715 05:08:10.289106 2786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:08:10.293596 kubelet[2786]: I0715 05:08:10.293153 2786 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:08:10.293819 kubelet[2786]: I0715 05:08:10.293758 2786 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:08:10.294010 kubelet[2786]: I0715 05:08:10.293994 2786 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:08:10.294123 kubelet[2786]: I0715 05:08:10.293225 2786 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:08:10.296762 kubelet[2786]: I0715 05:08:10.296090 2786 server.go:479] "Adding debug handlers to kubelet server" Jul 15 05:08:10.298188 kubelet[2786]: I0715 05:08:10.298086 2786 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:08:10.298636 kubelet[2786]: I0715 05:08:10.293283 2786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:08:10.299009 kubelet[2786]: E0715 05:08:10.298975 2786 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:10.299283 kubelet[2786]: I0715 05:08:10.299262 2786 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:08:10.299434 kubelet[2786]: I0715 05:08:10.298649 2786 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:08:10.299699 kubelet[2786]: I0715 05:08:10.299643 2786 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:08:10.302488 kubelet[2786]: E0715 05:08:10.302449 2786 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:08:10.306686 kubelet[2786]: I0715 05:08:10.306620 2786 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:08:10.314539 kubelet[2786]: I0715 05:08:10.314453 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:08:10.316260 kubelet[2786]: I0715 05:08:10.316232 2786 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 05:08:10.316344 kubelet[2786]: I0715 05:08:10.316275 2786 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 05:08:10.316344 kubelet[2786]: I0715 05:08:10.316302 2786 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 05:08:10.316344 kubelet[2786]: I0715 05:08:10.316311 2786 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 05:08:10.316450 kubelet[2786]: E0715 05:08:10.316389 2786 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:08:10.362760 kubelet[2786]: I0715 05:08:10.362712 2786 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:08:10.362760 kubelet[2786]: I0715 05:08:10.362735 2786 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:08:10.362760 kubelet[2786]: I0715 05:08:10.362774 2786 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:08:10.363060 kubelet[2786]: I0715 05:08:10.362993 2786 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 05:08:10.363060 kubelet[2786]: I0715 05:08:10.363006 2786 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 05:08:10.363060 kubelet[2786]: I0715 05:08:10.363027 2786 policy_none.go:49] "None policy: Start" Jul 15 05:08:10.363060 kubelet[2786]: I0715 05:08:10.363037 2786 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:08:10.363060 kubelet[2786]: I0715 05:08:10.363051 2786 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:08:10.363225 kubelet[2786]: I0715 05:08:10.363175 2786 state_mem.go:75] "Updated machine memory state" Jul 15 05:08:10.369595 kubelet[2786]: I0715 05:08:10.369552 2786 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:08:10.369838 kubelet[2786]: I0715 05:08:10.369820 2786 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:08:10.369903 kubelet[2786]: I0715 05:08:10.369840 2786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:08:10.370146 kubelet[2786]: I0715 05:08:10.370120 2786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:08:10.371777 kubelet[2786]: E0715 05:08:10.371753 2786 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 05:08:10.417918 kubelet[2786]: I0715 05:08:10.417863 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:10.418388 kubelet[2786]: I0715 05:08:10.417974 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:10.418724 kubelet[2786]: I0715 05:08:10.417983 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:10.429352 kubelet[2786]: E0715 05:08:10.429259 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:10.430137 kubelet[2786]: E0715 05:08:10.430096 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:10.476038 kubelet[2786]: I0715 05:08:10.475877 2786 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:08:10.486973 kubelet[2786]: I0715 05:08:10.486106 2786 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 15 05:08:10.486973 kubelet[2786]: I0715 05:08:10.486231 2786 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 05:08:10.499555 kubelet[2786]: I0715 05:08:10.499491 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:10.499555 kubelet[2786]: I0715 05:08:10.499546 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:10.499758 kubelet[2786]: I0715 05:08:10.499579 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:10.499758 kubelet[2786]: I0715 05:08:10.499604 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52b009fa7edf0a535e402254437cd652-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"52b009fa7edf0a535e402254437cd652\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:10.499758 kubelet[2786]: I0715 05:08:10.499629 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52b009fa7edf0a535e402254437cd652-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"52b009fa7edf0a535e402254437cd652\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:10.499758 kubelet[2786]: I0715 05:08:10.499651 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:10.499758 kubelet[2786]: I0715 05:08:10.499673 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52b009fa7edf0a535e402254437cd652-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"52b009fa7edf0a535e402254437cd652\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:10.499943 kubelet[2786]: I0715 05:08:10.499693 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:10.499943 kubelet[2786]: I0715 05:08:10.499789 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:10.507690 sudo[2823]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 05:08:10.508177 sudo[2823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 05:08:10.731151 kubelet[2786]: E0715 05:08:10.730725 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:10.731151 kubelet[2786]: E0715 05:08:10.730927 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:10.731780 kubelet[2786]: E0715 05:08:10.731718 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:10.942647 sudo[2823]: pam_unix(sudo:session): session closed for user root Jul 15 05:08:11.285388 kubelet[2786]: I0715 05:08:11.285268 2786 apiserver.go:52] "Watching apiserver" Jul 15 05:08:11.294921 kubelet[2786]: I0715 05:08:11.294864 2786 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:08:11.333359 kubelet[2786]: I0715 05:08:11.333300 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:11.333693 kubelet[2786]: E0715 05:08:11.333404 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:11.333870 kubelet[2786]: E0715 05:08:11.333458 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:11.342997 kubelet[2786]: E0715 05:08:11.342642 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:11.343378 kubelet[2786]: E0715 05:08:11.343320 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:11.363821 kubelet[2786]: I0715 05:08:11.363713 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.363684259 podStartE2EDuration="4.363684259s" podCreationTimestamp="2025-07-15 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:08:11.363586034 +0000 UTC m=+1.146576990" watchObservedRunningTime="2025-07-15 05:08:11.363684259 +0000 UTC m=+1.146675215" Jul 15 05:08:11.364023 kubelet[2786]: I0715 05:08:11.363880 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.363874639 podStartE2EDuration="2.363874639s" podCreationTimestamp="2025-07-15 05:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:08:11.353609419 +0000 UTC m=+1.136600365" watchObservedRunningTime="2025-07-15 05:08:11.363874639 +0000 UTC m=+1.146865585" Jul 15 05:08:11.372353 kubelet[2786]: I0715 05:08:11.372253 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.372229317 podStartE2EDuration="1.372229317s" podCreationTimestamp="2025-07-15 05:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:08:11.372084272 +0000 UTC m=+1.155075238" watchObservedRunningTime="2025-07-15 05:08:11.372229317 +0000 UTC m=+1.155220273" Jul 15 05:08:12.334894 kubelet[2786]: E0715 05:08:12.334849 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:12.335466 kubelet[2786]: E0715 05:08:12.335008 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:13.137964 sudo[1818]: pam_unix(sudo:session): session closed for user root Jul 15 05:08:13.139622 sshd[1817]: Connection closed by 10.0.0.1 port 57672 Jul 15 05:08:13.140374 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Jul 15 05:08:13.145186 systemd[1]: sshd@8-10.0.0.20:22-10.0.0.1:57672.service: Deactivated successfully. Jul 15 05:08:13.147831 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 05:08:13.148116 systemd[1]: session-9.scope: Consumed 4.952s CPU time, 265.2M memory peak. Jul 15 05:08:13.149820 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit. Jul 15 05:08:13.151390 systemd-logind[1560]: Removed session 9. 
Jul 15 05:08:13.315231 kubelet[2786]: E0715 05:08:13.315150 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:13.337409 kubelet[2786]: E0715 05:08:13.337361 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:13.337895 kubelet[2786]: E0715 05:08:13.337639 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:13.337895 kubelet[2786]: E0715 05:08:13.337702 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:15.751879 kubelet[2786]: I0715 05:08:15.751835 2786 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 05:08:15.752605 kubelet[2786]: I0715 05:08:15.752465 2786 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 05:08:15.752664 containerd[1587]: time="2025-07-15T05:08:15.752179818Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 05:08:17.794612 systemd[1]: Created slice kubepods-besteffort-podad32abfe_5d74_46b4_9537_f66401d8cb58.slice - libcontainer container kubepods-besteffort-podad32abfe_5d74_46b4_9537_f66401d8cb58.slice. Jul 15 05:08:17.814212 systemd[1]: Created slice kubepods-burstable-pod9b854359_adb1_4ddb_8c79_050d6ac3fd9a.slice - libcontainer container kubepods-burstable-pod9b854359_adb1_4ddb_8c79_050d6ac3fd9a.slice. 
Jul 15 05:08:17.851176 kubelet[2786]: I0715 05:08:17.851085 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-config-path\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.851176 kubelet[2786]: I0715 05:08:17.851159 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-xtables-lock\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.851176 kubelet[2786]: I0715 05:08:17.851186 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-clustermesh-secrets\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.851831 kubelet[2786]: I0715 05:08:17.851211 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htqpm\" (UniqueName: \"kubernetes.io/projected/ad32abfe-5d74-46b4-9537-f66401d8cb58-kube-api-access-htqpm\") pod \"kube-proxy-h99rn\" (UID: \"ad32abfe-5d74-46b4-9537-f66401d8cb58\") " pod="kube-system/kube-proxy-h99rn" Jul 15 05:08:17.851831 kubelet[2786]: I0715 05:08:17.851236 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-bpf-maps\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.851831 kubelet[2786]: I0715 05:08:17.851255 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad32abfe-5d74-46b4-9537-f66401d8cb58-kube-proxy\") pod \"kube-proxy-h99rn\" (UID: \"ad32abfe-5d74-46b4-9537-f66401d8cb58\") " pod="kube-system/kube-proxy-h99rn" Jul 15 05:08:17.851831 kubelet[2786]: I0715 05:08:17.851273 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-lib-modules\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.851831 kubelet[2786]: I0715 05:08:17.851293 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc9z9\" (UniqueName: \"kubernetes.io/projected/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-kube-api-access-kc9z9\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.851947 kubelet[2786]: I0715 05:08:17.851313 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad32abfe-5d74-46b4-9537-f66401d8cb58-lib-modules\") pod \"kube-proxy-h99rn\" (UID: \"ad32abfe-5d74-46b4-9537-f66401d8cb58\") " pod="kube-system/kube-proxy-h99rn" Jul 15 05:08:17.851947 kubelet[2786]: I0715 05:08:17.851372 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-host-proc-sys-net\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.851947 kubelet[2786]: I0715 05:08:17.851400 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-cgroup\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.851947 kubelet[2786]: I0715 05:08:17.851435 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-etc-cni-netd\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.851947 kubelet[2786]: I0715 05:08:17.851456 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-host-proc-sys-kernel\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.851947 kubelet[2786]: I0715 05:08:17.851477 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-hubble-tls\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.852084 kubelet[2786]: I0715 05:08:17.851498 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad32abfe-5d74-46b4-9537-f66401d8cb58-xtables-lock\") pod \"kube-proxy-h99rn\" (UID: \"ad32abfe-5d74-46b4-9537-f66401d8cb58\") " pod="kube-system/kube-proxy-h99rn" Jul 15 05:08:17.852084 kubelet[2786]: I0715 05:08:17.851522 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-run\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.852084 kubelet[2786]: I0715 05:08:17.851545 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-hostproc\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:17.852084 kubelet[2786]: I0715 05:08:17.851567 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cni-path\") pod \"cilium-h4pnf\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " pod="kube-system/cilium-h4pnf" Jul 15 05:08:18.411782 kubelet[2786]: E0715 05:08:18.411684 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:18.412715 containerd[1587]: time="2025-07-15T05:08:18.412652284Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-h99rn,Uid:ad32abfe-5d74-46b4-9537-f66401d8cb58,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:18.417058 kubelet[2786]: E0715 05:08:18.417012 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:18.417692 containerd[1587]: time="2025-07-15T05:08:18.417544104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h4pnf,Uid:9b854359-adb1-4ddb-8c79-050d6ac3fd9a,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:18.971985 systemd[1]: Created slice kubepods-besteffort-podd9d2d2e2_d9aa_43a8_ac74_ce76ad414a47.slice - libcontainer container kubepods-besteffort-podd9d2d2e2_d9aa_43a8_ac74_ce76ad414a47.slice. Jul 15 05:08:19.060495 kubelet[2786]: I0715 05:08:19.060413 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6w8v\" (UniqueName: \"kubernetes.io/projected/d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47-kube-api-access-d6w8v\") pod \"cilium-operator-6c4d7847fc-kvtbb\" (UID: \"d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47\") " pod="kube-system/cilium-operator-6c4d7847fc-kvtbb" Jul 15 05:08:19.060495 kubelet[2786]: I0715 05:08:19.060465 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kvtbb\" (UID: \"d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47\") " pod="kube-system/cilium-operator-6c4d7847fc-kvtbb" Jul 15 05:08:19.575954 kubelet[2786]: E0715 05:08:19.575896 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:19.576830 containerd[1587]: time="2025-07-15T05:08:19.576625927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kvtbb,Uid:d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:20.728231 containerd[1587]: time="2025-07-15T05:08:20.728154273Z" level=info msg="connecting to shim fd619e44e40ad61b37e249f0640939669db633d1ddd48fa347adaf95f34e8c8e" address="unix:///run/containerd/s/a8e9363dc0215f44ccdcc45dcaa97528a96d5018e16c3ef72b9789328eb634ef" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:20.764554 systemd[1]: Started cri-containerd-fd619e44e40ad61b37e249f0640939669db633d1ddd48fa347adaf95f34e8c8e.scope - libcontainer container fd619e44e40ad61b37e249f0640939669db633d1ddd48fa347adaf95f34e8c8e. 
Jul 15 05:08:20.950215 containerd[1587]: time="2025-07-15T05:08:20.950126779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h99rn,Uid:ad32abfe-5d74-46b4-9537-f66401d8cb58,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd619e44e40ad61b37e249f0640939669db633d1ddd48fa347adaf95f34e8c8e\"" Jul 15 05:08:20.951730 kubelet[2786]: E0715 05:08:20.951672 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:20.954562 containerd[1587]: time="2025-07-15T05:08:20.954510196Z" level=info msg="CreateContainer within sandbox \"fd619e44e40ad61b37e249f0640939669db633d1ddd48fa347adaf95f34e8c8e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 05:08:21.006514 containerd[1587]: time="2025-07-15T05:08:21.005865540Z" level=info msg="connecting to shim 351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102" address="unix:///run/containerd/s/e6d2f5f4970a0ff38394aab26a388f161a7fd356c08354da386e8ba51565768e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:21.017982 containerd[1587]: time="2025-07-15T05:08:21.017918196Z" level=info msg="Container 195ab60c1d250d1c38a348b02bfad5962bfa349c117da425ba4d7d2e09232ad5: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:21.019809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526783785.mount: Deactivated successfully. Jul 15 05:08:21.033358 containerd[1587]: time="2025-07-15T05:08:21.033249226Z" level=info msg="connecting to shim 98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829" address="unix:///run/containerd/s/7a74696e73ecfc8f610fc14d8c1bd17539a7b961b806ba10b0be6e465de09e4d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:21.035631 containerd[1587]: time="2025-07-15T05:08:21.035594924Z" level=info msg="CreateContainer within sandbox \"fd619e44e40ad61b37e249f0640939669db633d1ddd48fa347adaf95f34e8c8e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"195ab60c1d250d1c38a348b02bfad5962bfa349c117da425ba4d7d2e09232ad5\"" Jul 15 05:08:21.040051 containerd[1587]: time="2025-07-15T05:08:21.039239427Z" level=info msg="StartContainer for \"195ab60c1d250d1c38a348b02bfad5962bfa349c117da425ba4d7d2e09232ad5\"" Jul 15 05:08:21.043086 containerd[1587]: time="2025-07-15T05:08:21.043050544Z" level=info msg="connecting to shim 195ab60c1d250d1c38a348b02bfad5962bfa349c117da425ba4d7d2e09232ad5" address="unix:///run/containerd/s/a8e9363dc0215f44ccdcc45dcaa97528a96d5018e16c3ef72b9789328eb634ef" protocol=ttrpc version=3 Jul 15 05:08:21.045566 systemd[1]: Started cri-containerd-351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102.scope - libcontainer container 351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102. Jul 15 05:08:21.065721 systemd[1]: Started cri-containerd-98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829.scope - libcontainer container 98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829. Jul 15 05:08:21.073862 systemd[1]: Started cri-containerd-195ab60c1d250d1c38a348b02bfad5962bfa349c117da425ba4d7d2e09232ad5.scope - libcontainer container 195ab60c1d250d1c38a348b02bfad5962bfa349c117da425ba4d7d2e09232ad5. 
Jul 15 05:08:21.341189 containerd[1587]: time="2025-07-15T05:08:21.341113611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h4pnf,Uid:9b854359-adb1-4ddb-8c79-050d6ac3fd9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\"" Jul 15 05:08:21.342213 kubelet[2786]: E0715 05:08:21.342172 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:21.343423 containerd[1587]: time="2025-07-15T05:08:21.343388785Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 05:08:21.343610 containerd[1587]: time="2025-07-15T05:08:21.343388905Z" level=info msg="StartContainer for \"195ab60c1d250d1c38a348b02bfad5962bfa349c117da425ba4d7d2e09232ad5\" returns successfully" Jul 15 05:08:21.359634 kubelet[2786]: E0715 05:08:21.359593 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:21.379871 containerd[1587]: time="2025-07-15T05:08:21.379800042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kvtbb,Uid:d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47,Namespace:kube-system,Attempt:0,} returns sandbox id \"98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829\"" Jul 15 05:08:21.381989 kubelet[2786]: E0715 05:08:21.381936 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:21.447807 kubelet[2786]: I0715 05:08:21.447702 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h99rn" podStartSLOduration=4.447671702 podStartE2EDuration="4.447671702s" podCreationTimestamp="2025-07-15 05:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:08:21.446821792 +0000 UTC m=+11.229812738" watchObservedRunningTime="2025-07-15 05:08:21.447671702 +0000 UTC m=+11.230662658" Jul 15 05:08:21.570543 kubelet[2786]: E0715 05:08:21.570505 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:21.889835 kubelet[2786]: E0715 05:08:21.832600 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:22.364437 kubelet[2786]: E0715 05:08:22.364397 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:30.883023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733324383.mount: Deactivated successfully. 
Jul 15 05:08:38.562444 containerd[1587]: time="2025-07-15T05:08:38.562240801Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:38.563711 containerd[1587]: time="2025-07-15T05:08:38.563658412Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 15 05:08:38.565741 containerd[1587]: time="2025-07-15T05:08:38.565689476Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:38.567182 containerd[1587]: time="2025-07-15T05:08:38.567108851Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.223579491s" Jul 15 05:08:38.567182 containerd[1587]: time="2025-07-15T05:08:38.567181607Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 15 05:08:38.575373 containerd[1587]: time="2025-07-15T05:08:38.575281314Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 05:08:38.580784 containerd[1587]: time="2025-07-15T05:08:38.580719634Z" level=info msg="CreateContainer within sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 05:08:38.595612 containerd[1587]: time="2025-07-15T05:08:38.595413171Z" level=info msg="Container 30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:38.608974 containerd[1587]: time="2025-07-15T05:08:38.608717329Z" level=info msg="CreateContainer within sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\"" Jul 15 05:08:38.610403 containerd[1587]: time="2025-07-15T05:08:38.610370392Z" level=info msg="StartContainer for \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\"" Jul 15 05:08:38.611802 containerd[1587]: time="2025-07-15T05:08:38.611685371Z" level=info msg="connecting to shim 30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03" address="unix:///run/containerd/s/e6d2f5f4970a0ff38394aab26a388f161a7fd356c08354da386e8ba51565768e" protocol=ttrpc version=3 Jul 15 05:08:38.652645 systemd[1]: Started cri-containerd-30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03.scope - libcontainer container 30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03. 
Jul 15 05:08:38.696699 containerd[1587]: time="2025-07-15T05:08:38.696639801Z" level=info msg="StartContainer for \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\" returns successfully" Jul 15 05:08:38.707651 systemd[1]: cri-containerd-30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03.scope: Deactivated successfully. Jul 15 05:08:38.709210 containerd[1587]: time="2025-07-15T05:08:38.709171609Z" level=info msg="received exit event container_id:\"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\" id:\"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\" pid:3213 exited_at:{seconds:1752556118 nanos:708631736}" Jul 15 05:08:38.709354 containerd[1587]: time="2025-07-15T05:08:38.709299950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\" id:\"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\" pid:3213 exited_at:{seconds:1752556118 nanos:708631736}" Jul 15 05:08:38.738793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03-rootfs.mount: Deactivated successfully. Jul 15 05:08:39.403803 kubelet[2786]: E0715 05:08:39.403723 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:39.406290 containerd[1587]: time="2025-07-15T05:08:39.406133683Z" level=info msg="CreateContainer within sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 05:08:39.431428 containerd[1587]: time="2025-07-15T05:08:39.431311980Z" level=info msg="Container 1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:39.441920 containerd[1587]: time="2025-07-15T05:08:39.441826760Z" level=info msg="CreateContainer within sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\"" Jul 15 05:08:39.443462 containerd[1587]: time="2025-07-15T05:08:39.443378142Z" level=info msg="StartContainer for \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\"" Jul 15 05:08:39.444795 containerd[1587]: time="2025-07-15T05:08:39.444701958Z" level=info msg="connecting to shim 1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1" address="unix:///run/containerd/s/e6d2f5f4970a0ff38394aab26a388f161a7fd356c08354da386e8ba51565768e" protocol=ttrpc version=3 Jul 15 05:08:39.475767 systemd[1]: Started cri-containerd-1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1.scope - libcontainer container 1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1. Jul 15 05:08:39.582597 containerd[1587]: time="2025-07-15T05:08:39.582533989Z" level=info msg="StartContainer for \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\" returns successfully" Jul 15 05:08:39.602195 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 05:08:39.602955 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:08:39.605356 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Jul 15 05:08:39.608904 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:08:39.612293 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 05:08:39.613414 systemd[1]: cri-containerd-1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1.scope: Deactivated successfully. Jul 15 05:08:39.614011 containerd[1587]: time="2025-07-15T05:08:39.613941169Z" level=info msg="received exit event container_id:\"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\" id:\"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\" pid:3258 exited_at:{seconds:1752556119 nanos:613509819}" Jul 15 05:08:39.614577 containerd[1587]: time="2025-07-15T05:08:39.614016030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\" id:\"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\" pid:3258 exited_at:{seconds:1752556119 nanos:613509819}" Jul 15 05:08:39.614720 systemd[1]: cri-containerd-1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1.scope: Consumed 37ms CPU time, 7.7M memory peak, 4K read from disk, 2.2M written to disk. Jul 15 05:08:39.644790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1-rootfs.mount: Deactivated successfully. Jul 15 05:08:39.660689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:08:40.407392 kubelet[2786]: E0715 05:08:40.407321 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:40.409571 containerd[1587]: time="2025-07-15T05:08:40.409388010Z" level=info msg="CreateContainer within sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 05:08:40.654484 containerd[1587]: time="2025-07-15T05:08:40.654408274Z" level=info msg="Container 829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:40.659020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993876508.mount: Deactivated successfully. Jul 15 05:08:40.669091 containerd[1587]: time="2025-07-15T05:08:40.669032646Z" level=info msg="CreateContainer within sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\"" Jul 15 05:08:40.669656 containerd[1587]: time="2025-07-15T05:08:40.669632562Z" level=info msg="StartContainer for \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\"" Jul 15 05:08:40.671600 containerd[1587]: time="2025-07-15T05:08:40.671571923Z" level=info msg="connecting to shim 829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb" address="unix:///run/containerd/s/e6d2f5f4970a0ff38394aab26a388f161a7fd356c08354da386e8ba51565768e" protocol=ttrpc version=3 Jul 15 05:08:40.697566 systemd[1]: Started cri-containerd-829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb.scope - libcontainer container 829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb. Jul 15 05:08:40.743309 systemd[1]: cri-containerd-829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb.scope: Deactivated successfully. 
Jul 15 05:08:40.744495 containerd[1587]: time="2025-07-15T05:08:40.744296286Z" level=info msg="received exit event container_id:\"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\" id:\"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\" pid:3305 exited_at:{seconds:1752556120 nanos:744072215}" Jul 15 05:08:40.744788 containerd[1587]: time="2025-07-15T05:08:40.744619342Z" level=info msg="StartContainer for \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\" returns successfully" Jul 15 05:08:40.745461 containerd[1587]: time="2025-07-15T05:08:40.745414655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\" id:\"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\" pid:3305 exited_at:{seconds:1752556120 nanos:744072215}" Jul 15 05:08:40.770122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb-rootfs.mount: Deactivated successfully. Jul 15 05:08:41.413146 kubelet[2786]: E0715 05:08:41.413091 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:41.416535 containerd[1587]: time="2025-07-15T05:08:41.416474907Z" level=info msg="CreateContainer within sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 05:08:41.712829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994759547.mount: Deactivated successfully. Jul 15 05:08:41.722000 containerd[1587]: time="2025-07-15T05:08:41.721277633Z" level=info msg="Container c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:41.724968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592008122.mount: Deactivated successfully. Jul 15 05:08:41.734150 containerd[1587]: time="2025-07-15T05:08:41.734102295Z" level=info msg="CreateContainer within sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\"" Jul 15 05:08:41.735149 containerd[1587]: time="2025-07-15T05:08:41.735115857Z" level=info msg="StartContainer for \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\"" Jul 15 05:08:41.736289 containerd[1587]: time="2025-07-15T05:08:41.736026546Z" level=info msg="connecting to shim c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b" address="unix:///run/containerd/s/e6d2f5f4970a0ff38394aab26a388f161a7fd356c08354da386e8ba51565768e" protocol=ttrpc version=3 Jul 15 05:08:41.765532 systemd[1]: Started cri-containerd-c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b.scope - libcontainer container c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b. Jul 15 05:08:41.805604 systemd[1]: cri-containerd-c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b.scope: Deactivated successfully. 
Jul 15 05:08:41.807620 containerd[1587]: time="2025-07-15T05:08:41.807567377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\" id:\"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\" pid:3352 exited_at:{seconds:1752556121 nanos:806878855}" Jul 15 05:08:41.808052 containerd[1587]: time="2025-07-15T05:08:41.807802057Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b854359_adb1_4ddb_8c79_050d6ac3fd9a.slice/cri-containerd-c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b.scope/memory.events\": no such file or directory" Jul 15 05:08:41.810982 containerd[1587]: time="2025-07-15T05:08:41.810935779Z" level=info msg="received exit event container_id:\"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\" id:\"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\" pid:3352 exited_at:{seconds:1752556121 nanos:806878855}" Jul 15 05:08:41.813265 containerd[1587]: time="2025-07-15T05:08:41.813228562Z" level=info msg="StartContainer for \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\" returns successfully" Jul 15 05:08:42.145789 containerd[1587]: time="2025-07-15T05:08:42.145711403Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:42.146440 containerd[1587]: time="2025-07-15T05:08:42.146388053Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 15 05:08:42.147564 containerd[1587]: time="2025-07-15T05:08:42.147524156Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:42.148793 containerd[1587]: time="2025-07-15T05:08:42.148742201Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.573390044s" Jul 15 05:08:42.148793 containerd[1587]: time="2025-07-15T05:08:42.148779080Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 15 05:08:42.150836 containerd[1587]: time="2025-07-15T05:08:42.150806235Z" level=info msg="CreateContainer within sandbox \"98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 05:08:42.162235 containerd[1587]: time="2025-07-15T05:08:42.162178549Z" level=info msg="Container 2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:42.168672 containerd[1587]: time="2025-07-15T05:08:42.168636029Z" level=info msg="CreateContainer within sandbox 
\"98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\"" Jul 15 05:08:42.170353 containerd[1587]: time="2025-07-15T05:08:42.169227899Z" level=info msg="StartContainer for \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\"" Jul 15 05:08:42.170353 containerd[1587]: time="2025-07-15T05:08:42.170113111Z" level=info msg="connecting to shim 2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793" address="unix:///run/containerd/s/7a74696e73ecfc8f610fc14d8c1bd17539a7b961b806ba10b0be6e465de09e4d" protocol=ttrpc version=3 Jul 15 05:08:42.193582 systemd[1]: Started cri-containerd-2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793.scope - libcontainer container 2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793. Jul 15 05:08:42.447915 containerd[1587]: time="2025-07-15T05:08:42.447766408Z" level=info msg="StartContainer for \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" returns successfully" Jul 15 05:08:42.454358 kubelet[2786]: E0715 05:08:42.454285 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:42.458801 kubelet[2786]: E0715 05:08:42.458728 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:42.459010 containerd[1587]: time="2025-07-15T05:08:42.458966669Z" level=info msg="CreateContainer within sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 05:08:42.650130 containerd[1587]: time="2025-07-15T05:08:42.650070458Z" level=info msg="Container 5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:42.709566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b-rootfs.mount: Deactivated successfully. Jul 15 05:08:43.015182 containerd[1587]: time="2025-07-15T05:08:43.014584847Z" level=info msg="CreateContainer within sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\"" Jul 15 05:08:43.016374 containerd[1587]: time="2025-07-15T05:08:43.015602567Z" level=info msg="StartContainer for \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\"" Jul 15 05:08:43.020608 containerd[1587]: time="2025-07-15T05:08:43.020495520Z" level=info msg="connecting to shim 5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f" address="unix:///run/containerd/s/e6d2f5f4970a0ff38394aab26a388f161a7fd356c08354da386e8ba51565768e" protocol=ttrpc version=3 Jul 15 05:08:43.062650 systemd[1]: Started cri-containerd-5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f.scope - libcontainer container 5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f. 
Jul 15 05:08:43.179718 containerd[1587]: time="2025-07-15T05:08:43.179667346Z" level=info msg="StartContainer for \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" returns successfully" Jul 15 05:08:43.433134 containerd[1587]: time="2025-07-15T05:08:43.432936814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" id:\"72c048bb6e232bdaf7d12ec1d26bedd64d84fd39d4b2ad4a5090eaa8ef1d6cbb\" pid:3463 exited_at:{seconds:1752556123 nanos:283476013}" Jul 15 05:08:43.441354 kubelet[2786]: I0715 05:08:43.441259 2786 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 05:08:43.467467 kubelet[2786]: E0715 05:08:43.467409 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:43.467932 kubelet[2786]: E0715 05:08:43.467698 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:43.468978 kubelet[2786]: I0715 05:08:43.468910 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kvtbb" podStartSLOduration=5.702165631 podStartE2EDuration="26.468868848s" podCreationTimestamp="2025-07-15 05:08:17 +0000 UTC" firstStartedPulling="2025-07-15 05:08:21.38297486 +0000 UTC m=+11.165965816" lastFinishedPulling="2025-07-15 05:08:42.149678077 +0000 UTC m=+31.932669033" observedRunningTime="2025-07-15 05:08:43.016944576 +0000 UTC m=+32.799935532" watchObservedRunningTime="2025-07-15 05:08:43.468868848 +0000 UTC m=+33.251859804" Jul 15 05:08:43.482686 systemd[1]: Created slice kubepods-burstable-podd6e1218b_027b_429d_af99_488710b464e6.slice - libcontainer container kubepods-burstable-podd6e1218b_027b_429d_af99_488710b464e6.slice. Jul 15 05:08:43.489532 systemd[1]: Created slice kubepods-burstable-pod559d1e25_e63d_40ab_a2c9_d6328e187ac2.slice - libcontainer container kubepods-burstable-pod559d1e25_e63d_40ab_a2c9_d6328e187ac2.slice. 
Jul 15 05:08:43.508310 kubelet[2786]: I0715 05:08:43.508167 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h4pnf" podStartSLOduration=9.276032995 podStartE2EDuration="26.508146881s" podCreationTimestamp="2025-07-15 05:08:17 +0000 UTC" firstStartedPulling="2025-07-15 05:08:21.342911577 +0000 UTC m=+11.125902533" lastFinishedPulling="2025-07-15 05:08:38.575025463 +0000 UTC m=+28.358016419" observedRunningTime="2025-07-15 05:08:43.495100497 +0000 UTC m=+33.278091453" watchObservedRunningTime="2025-07-15 05:08:43.508146881 +0000 UTC m=+33.291137837" Jul 15 05:08:43.563767 kubelet[2786]: I0715 05:08:43.563663 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22hq6\" (UniqueName: \"kubernetes.io/projected/d6e1218b-027b-429d-af99-488710b464e6-kube-api-access-22hq6\") pod \"coredns-668d6bf9bc-x9g7n\" (UID: \"d6e1218b-027b-429d-af99-488710b464e6\") " pod="kube-system/coredns-668d6bf9bc-x9g7n" Jul 15 05:08:43.564702 kubelet[2786]: I0715 05:08:43.564354 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/559d1e25-e63d-40ab-a2c9-d6328e187ac2-config-volume\") pod \"coredns-668d6bf9bc-v9sj4\" (UID: \"559d1e25-e63d-40ab-a2c9-d6328e187ac2\") " pod="kube-system/coredns-668d6bf9bc-v9sj4" Jul 15 05:08:43.564702 kubelet[2786]: I0715 05:08:43.564383 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6e1218b-027b-429d-af99-488710b464e6-config-volume\") pod \"coredns-668d6bf9bc-x9g7n\" (UID: \"d6e1218b-027b-429d-af99-488710b464e6\") " pod="kube-system/coredns-668d6bf9bc-x9g7n" Jul 15 05:08:43.564702 kubelet[2786]: I0715 05:08:43.564398 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5j9n\" (UniqueName: \"kubernetes.io/projected/559d1e25-e63d-40ab-a2c9-d6328e187ac2-kube-api-access-s5j9n\") pod \"coredns-668d6bf9bc-v9sj4\" (UID: \"559d1e25-e63d-40ab-a2c9-d6328e187ac2\") " pod="kube-system/coredns-668d6bf9bc-v9sj4" Jul 15 05:08:43.788372 kubelet[2786]: E0715 05:08:43.787844 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:43.795891 kubelet[2786]: E0715 05:08:43.795834 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:43.800749 containerd[1587]: time="2025-07-15T05:08:43.800676869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x9g7n,Uid:d6e1218b-027b-429d-af99-488710b464e6,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:43.800749 containerd[1587]: time="2025-07-15T05:08:43.800705793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9sj4,Uid:559d1e25-e63d-40ab-a2c9-d6328e187ac2,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:44.472265 kubelet[2786]: E0715 05:08:44.472150 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:45.474258 kubelet[2786]: E0715 05:08:45.474199 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:46.238190 systemd-networkd[1484]: cilium_host: Link UP Jul 15 05:08:46.238573 systemd-networkd[1484]: cilium_net: Link UP Jul 15 05:08:46.238973 systemd-networkd[1484]: cilium_net: Gained carrier Jul 15 05:08:46.239252 systemd-networkd[1484]: cilium_host: Gained carrier Jul 15 05:08:46.336586 systemd-networkd[1484]: cilium_net: Gained IPv6LL Jul 15 05:08:46.379613 systemd-networkd[1484]: cilium_vxlan: Link UP Jul 15 05:08:46.379625 systemd-networkd[1484]: cilium_vxlan: Gained carrier Jul 15 05:08:46.471563 systemd-networkd[1484]: cilium_host: Gained IPv6LL Jul 15 05:08:46.762372 kernel: NET: Registered PF_ALG protocol family Jul 15 05:08:47.520212 systemd-networkd[1484]: lxc_health: Link UP Jul 15 05:08:47.522971 systemd-networkd[1484]: lxc_health: Gained carrier Jul 15 05:08:47.631580 systemd-networkd[1484]: cilium_vxlan: Gained IPv6LL Jul 15 05:08:47.680530 systemd-networkd[1484]: lxc7cf0643c4ff9: Link UP Jul 15 05:08:47.775374 kernel: eth0: renamed from tmpf51b7 Jul 15 05:08:47.778292 systemd-networkd[1484]: lxc7cf0643c4ff9: Gained carrier Jul 15 05:08:47.819382 systemd-networkd[1484]: lxc7fcf187dd461: Link UP Jul 15 05:08:47.836503 kernel: eth0: renamed from tmpe82dc Jul 15 05:08:47.836612 systemd-networkd[1484]: lxc7fcf187dd461: Gained carrier Jul 15 05:08:48.419615 kubelet[2786]: E0715 05:08:48.419540 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:48.479316 kubelet[2786]: E0715 05:08:48.479279 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:48.978419 systemd-networkd[1484]: lxc7cf0643c4ff9: Gained IPv6LL Jul 15 05:08:49.039513 systemd-networkd[1484]: lxc_health: Gained IPv6LL Jul 15 05:08:49.481612 kubelet[2786]: E0715 05:08:49.481552 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:49.615602 systemd-networkd[1484]: lxc7fcf187dd461: Gained IPv6LL Jul 15 05:08:52.187320 containerd[1587]: time="2025-07-15T05:08:52.187186442Z" level=info msg="connecting to shim f51b7241a232c31ca793409a59f0f76e754c988eb1d93c4d80ea7f88ff8ec260" address="unix:///run/containerd/s/72c524df91b30fbdd5d6e2868ff2c491b57e62ab0629150e7cd8bd4cabc85a31" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:52.188550 containerd[1587]: time="2025-07-15T05:08:52.188513321Z" level=info msg="connecting to shim e82dcb2eba245b29d1bbf067419dea94da59b011cf5cc4e67ee1f5f059a18bc3" address="unix:///run/containerd/s/e482f5cf4181f5157c874b8f8c7eed01171329e0110c31efea6db516c84e62af" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:52.228920 systemd[1]: Started cri-containerd-e82dcb2eba245b29d1bbf067419dea94da59b011cf5cc4e67ee1f5f059a18bc3.scope - libcontainer container e82dcb2eba245b29d1bbf067419dea94da59b011cf5cc4e67ee1f5f059a18bc3. Jul 15 05:08:52.230901 systemd[1]: Started cri-containerd-f51b7241a232c31ca793409a59f0f76e754c988eb1d93c4d80ea7f88ff8ec260.scope - libcontainer container f51b7241a232c31ca793409a59f0f76e754c988eb1d93c4d80ea7f88ff8ec260. 
Jul 15 05:08:52.250320 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:08:52.253754 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:08:52.298799 containerd[1587]: time="2025-07-15T05:08:52.298747827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9sj4,Uid:559d1e25-e63d-40ab-a2c9-d6328e187ac2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e82dcb2eba245b29d1bbf067419dea94da59b011cf5cc4e67ee1f5f059a18bc3\"" Jul 15 05:08:52.299591 kubelet[2786]: E0715 05:08:52.299542 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:52.300065 containerd[1587]: time="2025-07-15T05:08:52.300035683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x9g7n,Uid:d6e1218b-027b-429d-af99-488710b464e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f51b7241a232c31ca793409a59f0f76e754c988eb1d93c4d80ea7f88ff8ec260\"" Jul 15 05:08:52.301427 kubelet[2786]: E0715 05:08:52.301396 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:52.302205 containerd[1587]: time="2025-07-15T05:08:52.302079888Z" level=info msg="CreateContainer within sandbox \"e82dcb2eba245b29d1bbf067419dea94da59b011cf5cc4e67ee1f5f059a18bc3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:08:52.305024 containerd[1587]: time="2025-07-15T05:08:52.304978364Z" level=info msg="CreateContainer within sandbox \"f51b7241a232c31ca793409a59f0f76e754c988eb1d93c4d80ea7f88ff8ec260\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:08:52.327026 containerd[1587]: time="2025-07-15T05:08:52.326965238Z" level=info msg="Container ea72a062ae8fba41da4c31a9c7bb2ac4a71953b4204e79042e7b39e92788a897: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:52.333129 containerd[1587]: time="2025-07-15T05:08:52.333052867Z" level=info msg="Container db1263ffbf194367ff17d3fca67f9b377f4b263465b94a52f16c8d25b7dbff22: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:52.343986 containerd[1587]: time="2025-07-15T05:08:52.343902989Z" level=info msg="CreateContainer within sandbox \"e82dcb2eba245b29d1bbf067419dea94da59b011cf5cc4e67ee1f5f059a18bc3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea72a062ae8fba41da4c31a9c7bb2ac4a71953b4204e79042e7b39e92788a897\"" Jul 15 05:08:52.344817 containerd[1587]: time="2025-07-15T05:08:52.344750068Z" level=info msg="StartContainer for \"ea72a062ae8fba41da4c31a9c7bb2ac4a71953b4204e79042e7b39e92788a897\"" Jul 15 05:08:52.348763 containerd[1587]: time="2025-07-15T05:08:52.348710978Z" level=info msg="connecting to shim ea72a062ae8fba41da4c31a9c7bb2ac4a71953b4204e79042e7b39e92788a897" address="unix:///run/containerd/s/e482f5cf4181f5157c874b8f8c7eed01171329e0110c31efea6db516c84e62af" protocol=ttrpc version=3 Jul 15 05:08:52.352054 containerd[1587]: time="2025-07-15T05:08:52.351991801Z" level=info msg="CreateContainer within sandbox \"f51b7241a232c31ca793409a59f0f76e754c988eb1d93c4d80ea7f88ff8ec260\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db1263ffbf194367ff17d3fca67f9b377f4b263465b94a52f16c8d25b7dbff22\"" Jul 15 05:08:52.353651 containerd[1587]: 
time="2025-07-15T05:08:52.352698678Z" level=info msg="StartContainer for \"db1263ffbf194367ff17d3fca67f9b377f4b263465b94a52f16c8d25b7dbff22\"" Jul 15 05:08:52.353847 containerd[1587]: time="2025-07-15T05:08:52.353807098Z" level=info msg="connecting to shim db1263ffbf194367ff17d3fca67f9b377f4b263465b94a52f16c8d25b7dbff22" address="unix:///run/containerd/s/72c524df91b30fbdd5d6e2868ff2c491b57e62ab0629150e7cd8bd4cabc85a31" protocol=ttrpc version=3 Jul 15 05:08:52.369487 systemd[1]: Started cri-containerd-ea72a062ae8fba41da4c31a9c7bb2ac4a71953b4204e79042e7b39e92788a897.scope - libcontainer container ea72a062ae8fba41da4c31a9c7bb2ac4a71953b4204e79042e7b39e92788a897. Jul 15 05:08:52.381828 systemd[1]: Started cri-containerd-db1263ffbf194367ff17d3fca67f9b377f4b263465b94a52f16c8d25b7dbff22.scope - libcontainer container db1263ffbf194367ff17d3fca67f9b377f4b263465b94a52f16c8d25b7dbff22. Jul 15 05:08:52.417778 containerd[1587]: time="2025-07-15T05:08:52.417697412Z" level=info msg="StartContainer for \"ea72a062ae8fba41da4c31a9c7bb2ac4a71953b4204e79042e7b39e92788a897\" returns successfully" Jul 15 05:08:52.425727 containerd[1587]: time="2025-07-15T05:08:52.425673453Z" level=info msg="StartContainer for \"db1263ffbf194367ff17d3fca67f9b377f4b263465b94a52f16c8d25b7dbff22\" returns successfully" Jul 15 05:08:52.490836 kubelet[2786]: E0715 05:08:52.490291 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:52.493613 kubelet[2786]: E0715 05:08:52.493508 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:52.543875 kubelet[2786]: I0715 05:08:52.543735 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-x9g7n" podStartSLOduration=35.543708522 podStartE2EDuration="35.543708522s" podCreationTimestamp="2025-07-15 05:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:08:52.524654431 +0000 UTC m=+42.307645407" watchObservedRunningTime="2025-07-15 05:08:52.543708522 +0000 UTC m=+42.326699478" Jul 15 05:08:53.160225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3586732414.mount: Deactivated successfully. 
Jul 15 05:08:53.496487 kubelet[2786]: E0715 05:08:53.495985 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:53.496487 kubelet[2786]: E0715 05:08:53.496205 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:53.869374 kubelet[2786]: I0715 05:08:53.869190 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v9sj4" podStartSLOduration=35.869166549 podStartE2EDuration="35.869166549s" podCreationTimestamp="2025-07-15 05:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:08:52.544799378 +0000 UTC m=+42.327790334" watchObservedRunningTime="2025-07-15 05:08:53.869166549 +0000 UTC m=+43.652157535" Jul 15 05:08:54.498204 kubelet[2786]: E0715 05:08:54.498165 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:54.498747 kubelet[2786]: E0715 05:08:54.498260 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:56.862947 systemd[1]: Started sshd@9-10.0.0.20:22-10.0.0.1:38628.service - OpenSSH per-connection server daemon (10.0.0.1:38628). Jul 15 05:08:56.950017 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 38628 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:08:56.951890 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:08:56.957493 systemd-logind[1560]: New session 10 of user core. Jul 15 05:08:56.966519 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 05:08:57.304370 sshd[4116]: Connection closed by 10.0.0.1 port 38628 Jul 15 05:08:57.304660 sshd-session[4113]: pam_unix(sshd:session): session closed for user core Jul 15 05:08:57.308806 systemd[1]: sshd@9-10.0.0.20:22-10.0.0.1:38628.service: Deactivated successfully. Jul 15 05:08:57.310985 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 05:08:57.311921 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Jul 15 05:08:57.313308 systemd-logind[1560]: Removed session 10. Jul 15 05:09:02.322004 systemd[1]: Started sshd@10-10.0.0.20:22-10.0.0.1:58324.service - OpenSSH per-connection server daemon (10.0.0.1:58324). Jul 15 05:09:02.380626 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 58324 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:02.382434 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:02.387967 systemd-logind[1560]: New session 11 of user core. Jul 15 05:09:02.397523 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 05:09:02.625237 sshd[4135]: Connection closed by 10.0.0.1 port 58324 Jul 15 05:09:02.625524 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:02.630548 systemd[1]: sshd@10-10.0.0.20:22-10.0.0.1:58324.service: Deactivated successfully. Jul 15 05:09:02.632902 systemd[1]: session-11.scope: Deactivated successfully. 
Jul 15 05:09:02.634323 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Jul 15 05:09:02.636223 systemd-logind[1560]: Removed session 11. Jul 15 05:09:07.652527 systemd[1]: Started sshd@11-10.0.0.20:22-10.0.0.1:58338.service - OpenSSH per-connection server daemon (10.0.0.1:58338). Jul 15 05:09:07.717260 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 58338 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:07.719650 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:07.725664 systemd-logind[1560]: New session 12 of user core. Jul 15 05:09:07.736637 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 05:09:07.878793 sshd[4152]: Connection closed by 10.0.0.1 port 58338 Jul 15 05:09:07.878628 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:07.884576 systemd[1]: sshd@11-10.0.0.20:22-10.0.0.1:58338.service: Deactivated successfully. Jul 15 05:09:07.887179 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 05:09:07.888463 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Jul 15 05:09:07.890344 systemd-logind[1560]: Removed session 12. Jul 15 05:09:12.905669 systemd[1]: Started sshd@12-10.0.0.20:22-10.0.0.1:55080.service - OpenSSH per-connection server daemon (10.0.0.1:55080). Jul 15 05:09:12.992768 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 55080 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:12.995151 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:13.001672 systemd-logind[1560]: New session 13 of user core. Jul 15 05:09:13.010609 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 05:09:13.143084 sshd[4171]: Connection closed by 10.0.0.1 port 55080 Jul 15 05:09:13.143531 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:13.148908 systemd[1]: sshd@12-10.0.0.20:22-10.0.0.1:55080.service: Deactivated successfully. Jul 15 05:09:13.151579 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 05:09:13.152374 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Jul 15 05:09:13.153607 systemd-logind[1560]: Removed session 13. Jul 15 05:09:18.159706 systemd[1]: Started sshd@13-10.0.0.20:22-10.0.0.1:59230.service - OpenSSH per-connection server daemon (10.0.0.1:59230). Jul 15 05:09:18.232813 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 59230 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:18.235042 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:18.242005 systemd-logind[1560]: New session 14 of user core. Jul 15 05:09:18.248639 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 05:09:18.365688 sshd[4188]: Connection closed by 10.0.0.1 port 59230 Jul 15 05:09:18.366053 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:18.370711 systemd[1]: sshd@13-10.0.0.20:22-10.0.0.1:59230.service: Deactivated successfully. Jul 15 05:09:18.372704 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 05:09:18.373618 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. Jul 15 05:09:18.375372 systemd-logind[1560]: Removed session 14. 
Jul 15 05:09:23.380573 systemd[1]: Started sshd@14-10.0.0.20:22-10.0.0.1:59240.service - OpenSSH per-connection server daemon (10.0.0.1:59240). Jul 15 05:09:23.445423 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 59240 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:23.448034 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:23.455258 systemd-logind[1560]: New session 15 of user core. Jul 15 05:09:23.463015 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 05:09:23.600844 sshd[4208]: Connection closed by 10.0.0.1 port 59240 Jul 15 05:09:23.601310 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:23.612597 systemd[1]: sshd@14-10.0.0.20:22-10.0.0.1:59240.service: Deactivated successfully. Jul 15 05:09:23.615293 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 05:09:23.616651 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Jul 15 05:09:23.620232 systemd[1]: Started sshd@15-10.0.0.20:22-10.0.0.1:59246.service - OpenSSH per-connection server daemon (10.0.0.1:59246). Jul 15 05:09:23.622031 systemd-logind[1560]: Removed session 15. Jul 15 05:09:23.689785 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 59246 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:23.692061 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:23.698645 systemd-logind[1560]: New session 16 of user core. Jul 15 05:09:23.709586 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 05:09:23.903200 sshd[4225]: Connection closed by 10.0.0.1 port 59246 Jul 15 05:09:23.903735 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:23.917059 systemd[1]: sshd@15-10.0.0.20:22-10.0.0.1:59246.service: Deactivated successfully. Jul 15 05:09:23.922389 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 05:09:23.923612 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Jul 15 05:09:23.931098 systemd[1]: Started sshd@16-10.0.0.20:22-10.0.0.1:59248.service - OpenSSH per-connection server daemon (10.0.0.1:59248). Jul 15 05:09:23.932734 systemd-logind[1560]: Removed session 16. Jul 15 05:09:23.996649 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 59248 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:23.999205 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:24.004480 systemd-logind[1560]: New session 17 of user core. Jul 15 05:09:24.012608 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 05:09:24.138477 sshd[4240]: Connection closed by 10.0.0.1 port 59248 Jul 15 05:09:24.138931 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:24.144873 systemd[1]: sshd@16-10.0.0.20:22-10.0.0.1:59248.service: Deactivated successfully. Jul 15 05:09:24.147925 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 05:09:24.149007 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Jul 15 05:09:24.151123 systemd-logind[1560]: Removed session 17. Jul 15 05:09:29.164756 systemd[1]: Started sshd@17-10.0.0.20:22-10.0.0.1:50578.service - OpenSSH per-connection server daemon (10.0.0.1:50578). 
Jul 15 05:09:29.227076 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 50578 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:29.229725 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:29.234943 systemd-logind[1560]: New session 18 of user core. Jul 15 05:09:29.241610 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 05:09:29.333528 kubelet[2786]: E0715 05:09:29.333476 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:29.402064 sshd[4258]: Connection closed by 10.0.0.1 port 50578 Jul 15 05:09:29.402540 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:29.407400 systemd[1]: sshd@17-10.0.0.20:22-10.0.0.1:50578.service: Deactivated successfully. Jul 15 05:09:29.410298 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 05:09:29.411462 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Jul 15 05:09:29.413966 systemd-logind[1560]: Removed session 18. Jul 15 05:09:34.421740 systemd[1]: Started sshd@18-10.0.0.20:22-10.0.0.1:50592.service - OpenSSH per-connection server daemon (10.0.0.1:50592). Jul 15 05:09:34.489092 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 50592 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:34.491901 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:34.499791 systemd-logind[1560]: New session 19 of user core. Jul 15 05:09:34.514736 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 05:09:34.651643 sshd[4275]: Connection closed by 10.0.0.1 port 50592 Jul 15 05:09:34.652134 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:34.658178 systemd[1]: sshd@18-10.0.0.20:22-10.0.0.1:50592.service: Deactivated successfully. Jul 15 05:09:34.661920 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 05:09:34.663555 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Jul 15 05:09:34.665754 systemd-logind[1560]: Removed session 19. Jul 15 05:09:37.317673 kubelet[2786]: E0715 05:09:37.317592 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:39.673118 systemd[1]: Started sshd@19-10.0.0.20:22-10.0.0.1:51378.service - OpenSSH per-connection server daemon (10.0.0.1:51378). Jul 15 05:09:39.744079 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 51378 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:39.746632 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:39.752247 systemd-logind[1560]: New session 20 of user core. Jul 15 05:09:39.766663 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 15 05:09:39.948993 sshd[4291]: Connection closed by 10.0.0.1 port 51378 Jul 15 05:09:39.949387 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:39.968819 systemd[1]: sshd@19-10.0.0.20:22-10.0.0.1:51378.service: Deactivated successfully. Jul 15 05:09:39.971664 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 05:09:39.972708 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. 
Jul 15 05:09:39.977229 systemd[1]: Started sshd@20-10.0.0.20:22-10.0.0.1:51392.service - OpenSSH per-connection server daemon (10.0.0.1:51392). Jul 15 05:09:39.978006 systemd-logind[1560]: Removed session 20. Jul 15 05:09:40.045505 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 51392 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:40.047711 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:40.053488 systemd-logind[1560]: New session 21 of user core. Jul 15 05:09:40.070668 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 05:09:40.922907 sshd[4307]: Connection closed by 10.0.0.1 port 51392 Jul 15 05:09:40.923806 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:40.938115 systemd[1]: sshd@20-10.0.0.20:22-10.0.0.1:51392.service: Deactivated successfully. Jul 15 05:09:40.940712 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 05:09:40.941751 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit. Jul 15 05:09:40.946228 systemd[1]: Started sshd@21-10.0.0.20:22-10.0.0.1:51396.service - OpenSSH per-connection server daemon (10.0.0.1:51396). Jul 15 05:09:40.947364 systemd-logind[1560]: Removed session 21. Jul 15 05:09:41.029522 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 51396 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:41.031825 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:41.040173 systemd-logind[1560]: New session 22 of user core. Jul 15 05:09:41.057708 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 15 05:09:42.043178 sshd[4322]: Connection closed by 10.0.0.1 port 51396 Jul 15 05:09:42.042823 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:42.056391 systemd[1]: sshd@21-10.0.0.20:22-10.0.0.1:51396.service: Deactivated successfully. Jul 15 05:09:42.058698 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 05:09:42.059795 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit. Jul 15 05:09:42.063263 systemd[1]: Started sshd@22-10.0.0.20:22-10.0.0.1:51410.service - OpenSSH per-connection server daemon (10.0.0.1:51410). Jul 15 05:09:42.064443 systemd-logind[1560]: Removed session 22. Jul 15 05:09:42.127059 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 51410 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:42.129160 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:42.134910 systemd-logind[1560]: New session 23 of user core. Jul 15 05:09:42.148620 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 15 05:09:42.722024 sshd[4362]: Connection closed by 10.0.0.1 port 51410 Jul 15 05:09:42.722792 sshd-session[4359]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:42.734209 systemd[1]: sshd@22-10.0.0.20:22-10.0.0.1:51410.service: Deactivated successfully. Jul 15 05:09:42.737780 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 05:09:42.740550 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit. Jul 15 05:09:42.743572 systemd[1]: Started sshd@23-10.0.0.20:22-10.0.0.1:51420.service - OpenSSH per-connection server daemon (10.0.0.1:51420). Jul 15 05:09:42.744703 systemd-logind[1560]: Removed session 23. 
Jul 15 05:09:42.816736 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 51420 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:42.818840 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:42.824814 systemd-logind[1560]: New session 24 of user core. Jul 15 05:09:42.839638 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 05:09:42.976771 sshd[4377]: Connection closed by 10.0.0.1 port 51420 Jul 15 05:09:42.977062 sshd-session[4374]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:42.982148 systemd[1]: sshd@23-10.0.0.20:22-10.0.0.1:51420.service: Deactivated successfully. Jul 15 05:09:42.984554 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 05:09:42.985563 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit. Jul 15 05:09:42.986988 systemd-logind[1560]: Removed session 24. Jul 15 05:09:44.317402 kubelet[2786]: E0715 05:09:44.317314 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:47.995062 systemd[1]: Started sshd@24-10.0.0.20:22-10.0.0.1:48070.service - OpenSSH per-connection server daemon (10.0.0.1:48070). Jul 15 05:09:48.059534 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 48070 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:48.061251 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:48.065886 systemd-logind[1560]: New session 25 of user core. Jul 15 05:09:48.076485 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 15 05:09:48.231911 sshd[4393]: Connection closed by 10.0.0.1 port 48070 Jul 15 05:09:48.232299 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:48.237533 systemd[1]: sshd@24-10.0.0.20:22-10.0.0.1:48070.service: Deactivated successfully. Jul 15 05:09:48.240141 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 05:09:48.241059 systemd-logind[1560]: Session 25 logged out. Waiting for processes to exit. Jul 15 05:09:48.242983 systemd-logind[1560]: Removed session 25. Jul 15 05:09:50.318060 kubelet[2786]: E0715 05:09:50.317956 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:51.317959 kubelet[2786]: E0715 05:09:51.317894 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:53.249754 systemd[1]: Started sshd@25-10.0.0.20:22-10.0.0.1:48076.service - OpenSSH per-connection server daemon (10.0.0.1:48076). Jul 15 05:09:53.312831 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 48076 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:53.314881 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:53.317772 kubelet[2786]: E0715 05:09:53.317739 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:53.320417 systemd-logind[1560]: New session 26 of user core. 
Jul 15 05:09:53.331512 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 15 05:09:53.465177 sshd[4414]: Connection closed by 10.0.0.1 port 48076 Jul 15 05:09:53.464962 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:53.470567 systemd[1]: sshd@25-10.0.0.20:22-10.0.0.1:48076.service: Deactivated successfully. Jul 15 05:09:53.472842 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 05:09:53.473747 systemd-logind[1560]: Session 26 logged out. Waiting for processes to exit. Jul 15 05:09:53.475188 systemd-logind[1560]: Removed session 26. Jul 15 05:09:58.482783 systemd[1]: Started sshd@26-10.0.0.20:22-10.0.0.1:34102.service - OpenSSH per-connection server daemon (10.0.0.1:34102). Jul 15 05:09:58.570431 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 34102 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:58.573583 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:58.579776 systemd-logind[1560]: New session 27 of user core. Jul 15 05:09:58.591352 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 15 05:09:58.713587 sshd[4430]: Connection closed by 10.0.0.1 port 34102 Jul 15 05:09:58.714034 sshd-session[4427]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:58.719290 systemd[1]: sshd@26-10.0.0.20:22-10.0.0.1:34102.service: Deactivated successfully. Jul 15 05:09:58.722224 systemd[1]: session-27.scope: Deactivated successfully. Jul 15 05:09:58.723106 systemd-logind[1560]: Session 27 logged out. Waiting for processes to exit. Jul 15 05:09:58.724819 systemd-logind[1560]: Removed session 27. Jul 15 05:10:03.730566 systemd[1]: Started sshd@27-10.0.0.20:22-10.0.0.1:34116.service - OpenSSH per-connection server daemon (10.0.0.1:34116). Jul 15 05:10:03.794922 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 34116 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:10:03.797647 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:10:03.804970 systemd-logind[1560]: New session 28 of user core. Jul 15 05:10:03.814645 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 15 05:10:03.938553 sshd[4446]: Connection closed by 10.0.0.1 port 34116 Jul 15 05:10:03.938840 sshd-session[4443]: pam_unix(sshd:session): session closed for user core Jul 15 05:10:03.943637 systemd[1]: sshd@27-10.0.0.20:22-10.0.0.1:34116.service: Deactivated successfully. Jul 15 05:10:03.946077 systemd[1]: session-28.scope: Deactivated successfully. Jul 15 05:10:03.947156 systemd-logind[1560]: Session 28 logged out. Waiting for processes to exit. Jul 15 05:10:03.948690 systemd-logind[1560]: Removed session 28. Jul 15 05:10:08.953188 systemd[1]: Started sshd@28-10.0.0.20:22-10.0.0.1:51566.service - OpenSSH per-connection server daemon (10.0.0.1:51566). Jul 15 05:10:09.017477 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 51566 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:10:09.019689 sshd-session[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:10:09.025449 systemd-logind[1560]: New session 29 of user core. Jul 15 05:10:09.036680 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jul 15 05:10:09.168882 sshd[4462]: Connection closed by 10.0.0.1 port 51566 Jul 15 05:10:09.169411 sshd-session[4459]: pam_unix(sshd:session): session closed for user core Jul 15 05:10:09.181990 systemd[1]: sshd@28-10.0.0.20:22-10.0.0.1:51566.service: Deactivated successfully. Jul 15 05:10:09.185121 systemd[1]: session-29.scope: Deactivated successfully. Jul 15 05:10:09.186176 systemd-logind[1560]: Session 29 logged out. Waiting for processes to exit. Jul 15 05:10:09.190241 systemd[1]: Started sshd@29-10.0.0.20:22-10.0.0.1:51578.service - OpenSSH per-connection server daemon (10.0.0.1:51578). Jul 15 05:10:09.191702 systemd-logind[1560]: Removed session 29. Jul 15 05:10:09.258220 sshd[4475]: Accepted publickey for core from 10.0.0.1 port 51578 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:10:09.261363 sshd-session[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:10:09.267555 systemd-logind[1560]: New session 30 of user core. Jul 15 05:10:09.278695 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 15 05:10:09.318009 kubelet[2786]: E0715 05:10:09.317927 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:11.510471 containerd[1587]: time="2025-07-15T05:10:11.510395706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" id:\"b1a20467bb2046d2367b94fa22023e53a5c0748505d347b67f690d850dd88828\" pid:4500 exited_at:{seconds:1752556211 nanos:509873060}" Jul 15 05:10:11.511201 containerd[1587]: time="2025-07-15T05:10:11.510562961Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:10:11.513365 containerd[1587]: time="2025-07-15T05:10:11.513292622Z" level=info msg="StopContainer for \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" with timeout 2 (s)" Jul 15 05:10:11.513749 containerd[1587]: time="2025-07-15T05:10:11.513712645Z" level=info msg="Stop container \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" with signal terminated" Jul 15 05:10:11.523506 systemd-networkd[1484]: lxc_health: Link DOWN Jul 15 05:10:11.523761 systemd-networkd[1484]: lxc_health: Lost carrier Jul 15 05:10:11.534052 containerd[1587]: time="2025-07-15T05:10:11.533817074Z" level=info msg="StopContainer for \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" with timeout 30 (s)" Jul 15 05:10:11.535753 containerd[1587]: time="2025-07-15T05:10:11.535720837Z" level=info msg="Stop container \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" with signal terminated" Jul 15 05:10:11.549036 systemd[1]: cri-containerd-2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793.scope: Deactivated successfully. Jul 15 05:10:11.550614 systemd[1]: cri-containerd-5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f.scope: Deactivated successfully. Jul 15 05:10:11.551118 systemd[1]: cri-containerd-5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f.scope: Consumed 7.141s CPU time, 126.1M memory peak, 320K read from disk, 13.3M written to disk. 
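The StopContainer entries above show the runtime first delivering SIGTERM ("with signal terminated") under a grace period (2 s for the cilium agent container, 30 s for the operator) before a forced kill would be needed. Below is a rough sketch of the same terminate-then-escalate pattern against the containerd Go client; the socket path and the k8s.io namespace match the log, but the import path assumes the 1.x client and this is an illustration, not the code path containerd or the kubelet actually run.

    package main

    import (
        "context"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    // stopWithTimeout sends SIGTERM and escalates to SIGKILL if the task has not
    // exited within the grace period, mirroring "StopContainer ... with timeout".
    func stopWithTimeout(ctx context.Context, client *containerd.Client, id string, grace time.Duration) error {
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            return err
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            return err
        }
        exitCh, err := task.Wait(ctx)
        if err != nil {
            return err
        }
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            return err
        }
        select {
        case <-exitCh: // exited within the grace period
            return nil
        case <-time.After(grace):
            return task.Kill(ctx, syscall.SIGKILL) // escalate after the timeout
        }
    }

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        _ = stopWithTimeout(ctx, client,
            "5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f", 2*time.Second)
    }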
Jul 15 05:10:11.552206 containerd[1587]: time="2025-07-15T05:10:11.552128058Z" level=info msg="received exit event container_id:\"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" id:\"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" pid:3433 exited_at:{seconds:1752556211 nanos:551036889}" Jul 15 05:10:11.552479 containerd[1587]: time="2025-07-15T05:10:11.552429016Z" level=info msg="received exit event container_id:\"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" id:\"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" pid:3397 exited_at:{seconds:1752556211 nanos:551261342}" Jul 15 05:10:11.553314 containerd[1587]: time="2025-07-15T05:10:11.553285131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" id:\"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" pid:3433 exited_at:{seconds:1752556211 nanos:551036889}" Jul 15 05:10:11.554659 containerd[1587]: time="2025-07-15T05:10:11.554618778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" id:\"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" pid:3397 exited_at:{seconds:1752556211 nanos:551261342}" Jul 15 05:10:11.577891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793-rootfs.mount: Deactivated successfully. Jul 15 05:10:11.582097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f-rootfs.mount: Deactivated successfully. Jul 15 05:10:11.820812 containerd[1587]: time="2025-07-15T05:10:11.820744247Z" level=info msg="StopContainer for \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" returns successfully" Jul 15 05:10:11.824032 containerd[1587]: time="2025-07-15T05:10:11.823959535Z" level=info msg="StopPodSandbox for \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\"" Jul 15 05:10:11.831424 containerd[1587]: time="2025-07-15T05:10:11.831346374Z" level=info msg="Container to stop \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:10:11.831424 containerd[1587]: time="2025-07-15T05:10:11.831377323Z" level=info msg="Container to stop \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:10:11.831424 containerd[1587]: time="2025-07-15T05:10:11.831394405Z" level=info msg="Container to stop \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:10:11.831424 containerd[1587]: time="2025-07-15T05:10:11.831413932Z" level=info msg="Container to stop \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:10:11.831424 containerd[1587]: time="2025-07-15T05:10:11.831426565Z" level=info msg="Container to stop \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:10:11.835881 containerd[1587]: time="2025-07-15T05:10:11.835752058Z" level=info msg="StopContainer for 
\"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" returns successfully" Jul 15 05:10:11.837032 containerd[1587]: time="2025-07-15T05:10:11.836879385Z" level=info msg="StopPodSandbox for \"98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829\"" Jul 15 05:10:11.837271 containerd[1587]: time="2025-07-15T05:10:11.837247450Z" level=info msg="Container to stop \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:10:11.841411 systemd[1]: cri-containerd-351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102.scope: Deactivated successfully. Jul 15 05:10:11.844553 containerd[1587]: time="2025-07-15T05:10:11.844491100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" id:\"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" pid:2977 exit_status:137 exited_at:{seconds:1752556211 nanos:844009992}" Jul 15 05:10:11.849678 systemd[1]: cri-containerd-98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829.scope: Deactivated successfully. Jul 15 05:10:11.876774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102-rootfs.mount: Deactivated successfully. Jul 15 05:10:11.880450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829-rootfs.mount: Deactivated successfully. Jul 15 05:10:12.028246 containerd[1587]: time="2025-07-15T05:10:12.028074562Z" level=info msg="shim disconnected" id=351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102 namespace=k8s.io Jul 15 05:10:12.028246 containerd[1587]: time="2025-07-15T05:10:12.028116811Z" level=warning msg="cleaning up after shim disconnected" id=351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102 namespace=k8s.io Jul 15 05:10:12.039055 containerd[1587]: time="2025-07-15T05:10:12.028127923Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 05:10:12.039196 containerd[1587]: time="2025-07-15T05:10:12.028131289Z" level=info msg="shim disconnected" id=98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829 namespace=k8s.io Jul 15 05:10:12.039196 containerd[1587]: time="2025-07-15T05:10:12.039156502Z" level=warning msg="cleaning up after shim disconnected" id=98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829 namespace=k8s.io Jul 15 05:10:12.039196 containerd[1587]: time="2025-07-15T05:10:12.039167172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 05:10:12.092094 containerd[1587]: time="2025-07-15T05:10:12.091883343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829\" id:\"98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829\" pid:3004 exit_status:137 exited_at:{seconds:1752556211 nanos:851048715}" Jul 15 05:10:12.093355 containerd[1587]: time="2025-07-15T05:10:12.093265842Z" level=info msg="TearDown network for sandbox \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" successfully" Jul 15 05:10:12.093355 containerd[1587]: time="2025-07-15T05:10:12.093292773Z" level=info msg="StopPodSandbox for \"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" returns successfully" Jul 15 05:10:12.094855 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829-shm.mount: Deactivated successfully. Jul 15 05:10:12.095144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102-shm.mount: Deactivated successfully. Jul 15 05:10:12.102653 containerd[1587]: time="2025-07-15T05:10:12.102601477Z" level=info msg="received exit event sandbox_id:\"98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829\" exit_status:137 exited_at:{seconds:1752556211 nanos:851048715}" Jul 15 05:10:12.103136 containerd[1587]: time="2025-07-15T05:10:12.102680687Z" level=info msg="received exit event sandbox_id:\"351cd8bc70737033256866678a7a8a7328aca36ea5f29006882418cd9ef56102\" exit_status:137 exited_at:{seconds:1752556211 nanos:844009992}" Jul 15 05:10:12.104042 containerd[1587]: time="2025-07-15T05:10:12.103962005Z" level=info msg="TearDown network for sandbox \"98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829\" successfully" Jul 15 05:10:12.104042 containerd[1587]: time="2025-07-15T05:10:12.104028791Z" level=info msg="StopPodSandbox for \"98ffb7f51634eaae48ce35697b793d15c9e82c054e3fb61540e9c19244894829\" returns successfully" Jul 15 05:10:12.246267 kubelet[2786]: I0715 05:10:12.246161 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-lib-modules\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.246267 kubelet[2786]: I0715 05:10:12.246225 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-xtables-lock\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.246267 kubelet[2786]: I0715 05:10:12.246263 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cni-path\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.246267 kubelet[2786]: I0715 05:10:12.246285 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-run\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.246937 kubelet[2786]: I0715 05:10:12.246303 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-hostproc\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.246937 kubelet[2786]: I0715 05:10:12.246362 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-etc-cni-netd\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.246937 kubelet[2786]: I0715 05:10:12.246396 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-hubble-tls\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.246937 kubelet[2786]: I0715 05:10:12.246418 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6w8v\" (UniqueName: \"kubernetes.io/projected/d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47-kube-api-access-d6w8v\") pod \"d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47\" (UID: \"d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47\") " Jul 15 05:10:12.246937 kubelet[2786]: I0715 05:10:12.246445 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-clustermesh-secrets\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.246937 kubelet[2786]: I0715 05:10:12.246430 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 05:10:12.247094 kubelet[2786]: I0715 05:10:12.246476 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-host-proc-sys-net\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.247094 kubelet[2786]: I0715 05:10:12.246499 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-cgroup\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.247094 kubelet[2786]: I0715 05:10:12.246536 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47-cilium-config-path\") pod \"d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47\" (UID: \"d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47\") " Jul 15 05:10:12.247094 kubelet[2786]: I0715 05:10:12.246562 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-host-proc-sys-kernel\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.247094 kubelet[2786]: I0715 05:10:12.246589 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-config-path\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.247094 kubelet[2786]: I0715 05:10:12.246474 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 05:10:12.247228 kubelet[2786]: I0715 05:10:12.246430 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 05:10:12.247228 kubelet[2786]: I0715 05:10:12.246615 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-bpf-maps\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.247228 kubelet[2786]: I0715 05:10:12.246644 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc9z9\" (UniqueName: \"kubernetes.io/projected/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-kube-api-access-kc9z9\") pod \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\" (UID: \"9b854359-adb1-4ddb-8c79-050d6ac3fd9a\") " Jul 15 05:10:12.247228 kubelet[2786]: I0715 05:10:12.246687 2786 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.247228 kubelet[2786]: I0715 05:10:12.246698 2786 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.253346 kubelet[2786]: I0715 05:10:12.246545 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cni-path" (OuterVolumeSpecName: "cni-path") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 05:10:12.253411 kubelet[2786]: I0715 05:10:12.246570 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-hostproc" (OuterVolumeSpecName: "hostproc") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 05:10:12.253411 kubelet[2786]: I0715 05:10:12.246592 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 05:10:12.253411 kubelet[2786]: I0715 05:10:12.246904 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 05:10:12.253411 kubelet[2786]: I0715 05:10:12.246969 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 05:10:12.253411 kubelet[2786]: I0715 05:10:12.251855 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47" (UID: "d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 05:10:12.253555 kubelet[2786]: I0715 05:10:12.251890 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 05:10:12.253555 kubelet[2786]: I0715 05:10:12.253490 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 05:10:12.253605 kubelet[2786]: I0715 05:10:12.253551 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-kube-api-access-kc9z9" (OuterVolumeSpecName: "kube-api-access-kc9z9") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "kube-api-access-kc9z9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 05:10:12.253961 kubelet[2786]: I0715 05:10:12.253903 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 05:10:12.254516 kubelet[2786]: I0715 05:10:12.254486 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47-kube-api-access-d6w8v" (OuterVolumeSpecName: "kube-api-access-d6w8v") pod "d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47" (UID: "d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47"). InnerVolumeSpecName "kube-api-access-d6w8v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 05:10:12.255519 kubelet[2786]: I0715 05:10:12.255477 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 05:10:12.257036 kubelet[2786]: I0715 05:10:12.256979 2786 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9b854359-adb1-4ddb-8c79-050d6ac3fd9a" (UID: "9b854359-adb1-4ddb-8c79-050d6ac3fd9a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 05:10:12.322355 kubelet[2786]: E0715 05:10:12.321559 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:12.330560 systemd[1]: Removed slice kubepods-besteffort-podd9d2d2e2_d9aa_43a8_ac74_ce76ad414a47.slice - libcontainer container kubepods-besteffort-podd9d2d2e2_d9aa_43a8_ac74_ce76ad414a47.slice. Jul 15 05:10:12.332247 systemd[1]: Removed slice kubepods-burstable-pod9b854359_adb1_4ddb_8c79_050d6ac3fd9a.slice - libcontainer container kubepods-burstable-pod9b854359_adb1_4ddb_8c79_050d6ac3fd9a.slice. Jul 15 05:10:12.332389 systemd[1]: kubepods-burstable-pod9b854359_adb1_4ddb_8c79_050d6ac3fd9a.slice: Consumed 7.278s CPU time, 126.5M memory peak, 328K read from disk, 15.6M written to disk. Jul 15 05:10:12.347845 kubelet[2786]: I0715 05:10:12.347652 2786 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.347845 kubelet[2786]: I0715 05:10:12.347709 2786 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.347845 kubelet[2786]: I0715 05:10:12.347724 2786 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.347845 kubelet[2786]: I0715 05:10:12.347738 2786 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kc9z9\" (UniqueName: \"kubernetes.io/projected/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-kube-api-access-kc9z9\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.347845 kubelet[2786]: I0715 05:10:12.347751 2786 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.347845 kubelet[2786]: I0715 05:10:12.347761 2786 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.347845 kubelet[2786]: I0715 05:10:12.347771 2786 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.347845 kubelet[2786]: I0715 05:10:12.347780 2786 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.348296 kubelet[2786]: I0715 
05:10:12.347791 2786 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.348296 kubelet[2786]: I0715 05:10:12.347802 2786 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d6w8v\" (UniqueName: \"kubernetes.io/projected/d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47-kube-api-access-d6w8v\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.348296 kubelet[2786]: I0715 05:10:12.347817 2786 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.348296 kubelet[2786]: I0715 05:10:12.347833 2786 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.348296 kubelet[2786]: I0715 05:10:12.347845 2786 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b854359-adb1-4ddb-8c79-050d6ac3fd9a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.348296 kubelet[2786]: I0715 05:10:12.347857 2786 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 05:10:12.578045 systemd[1]: var-lib-kubelet-pods-d9d2d2e2\x2dd9aa\x2d43a8\x2dac74\x2dce76ad414a47-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd6w8v.mount: Deactivated successfully. Jul 15 05:10:12.578205 systemd[1]: var-lib-kubelet-pods-9b854359\x2dadb1\x2d4ddb\x2d8c79\x2d050d6ac3fd9a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkc9z9.mount: Deactivated successfully. Jul 15 05:10:12.578320 systemd[1]: var-lib-kubelet-pods-9b854359\x2dadb1\x2d4ddb\x2d8c79\x2d050d6ac3fd9a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 05:10:12.578475 systemd[1]: var-lib-kubelet-pods-9b854359\x2dadb1\x2d4ddb\x2d8c79\x2d050d6ac3fd9a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
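The mount units deactivated above carry systemd-escaped versions of the kubelet volume paths: '/' separators become '-', while other special bytes are hex-escaped, hence \x2d for '-' and \x7e for '~' in names like var-lib-kubelet-pods-...-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount. A small Go sketch that approximates this escaping is shown below; it follows the rules visible in the names above and is an approximation of systemd-escape, not the systemd source.

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeUnitPath approximates systemd's unit-name escaping for paths:
    // '/' turns into '-', and bytes outside [A-Za-z0-9:_.] become \xNN.
    func escapeUnitPath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        // Path reconstructed from the hubble-tls mount unit name in the log above.
        path := "/var/lib/kubelet/pods/9b854359-adb1-4ddb-8c79-050d6ac3fd9a/volumes/kubernetes.io~projected/hubble-tls"
        fmt.Println(escapeUnitPath(path) + ".mount")
        // -> var-lib-kubelet-pods-9b854359\x2dadb1\x2d...-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount
    }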
Jul 15 05:10:12.693993 kubelet[2786]: I0715 05:10:12.693766 2786 scope.go:117] "RemoveContainer" containerID="5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f" Jul 15 05:10:12.697004 containerd[1587]: time="2025-07-15T05:10:12.696937890Z" level=info msg="RemoveContainer for \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\"" Jul 15 05:10:12.875740 containerd[1587]: time="2025-07-15T05:10:12.875650876Z" level=info msg="RemoveContainer for \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" returns successfully" Jul 15 05:10:12.876115 kubelet[2786]: I0715 05:10:12.876049 2786 scope.go:117] "RemoveContainer" containerID="c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b" Jul 15 05:10:12.878105 containerd[1587]: time="2025-07-15T05:10:12.878053550Z" level=info msg="RemoveContainer for \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\"" Jul 15 05:10:13.053233 containerd[1587]: time="2025-07-15T05:10:13.053076074Z" level=info msg="RemoveContainer for \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\" returns successfully" Jul 15 05:10:13.053524 kubelet[2786]: I0715 05:10:13.053445 2786 scope.go:117] "RemoveContainer" containerID="829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb" Jul 15 05:10:13.056650 containerd[1587]: time="2025-07-15T05:10:13.056587539Z" level=info msg="RemoveContainer for \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\"" Jul 15 05:10:13.145153 sshd[4478]: Connection closed by 10.0.0.1 port 51578 Jul 15 05:10:13.145628 sshd-session[4475]: pam_unix(sshd:session): session closed for user core Jul 15 05:10:13.150225 containerd[1587]: time="2025-07-15T05:10:13.150085687Z" level=info msg="RemoveContainer for \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\" returns successfully" Jul 15 05:10:13.150490 kubelet[2786]: I0715 05:10:13.150448 2786 scope.go:117] "RemoveContainer" containerID="1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1" Jul 15 05:10:13.152320 containerd[1587]: time="2025-07-15T05:10:13.152289415Z" level=info msg="RemoveContainer for \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\"" Jul 15 05:10:13.156943 systemd[1]: sshd@29-10.0.0.20:22-10.0.0.1:51578.service: Deactivated successfully. Jul 15 05:10:13.159163 systemd[1]: session-30.scope: Deactivated successfully. Jul 15 05:10:13.160077 systemd-logind[1560]: Session 30 logged out. Waiting for processes to exit. Jul 15 05:10:13.163068 systemd-logind[1560]: Removed session 30. Jul 15 05:10:13.164737 systemd[1]: Started sshd@30-10.0.0.20:22-10.0.0.1:51592.service - OpenSSH per-connection server daemon (10.0.0.1:51592). 
Jul 15 05:10:13.273243 containerd[1587]: time="2025-07-15T05:10:13.273190195Z" level=info msg="RemoveContainer for \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\" returns successfully" Jul 15 05:10:13.273890 kubelet[2786]: I0715 05:10:13.273858 2786 scope.go:117] "RemoveContainer" containerID="30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03" Jul 15 05:10:13.276149 containerd[1587]: time="2025-07-15T05:10:13.275673061Z" level=info msg="RemoveContainer for \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\"" Jul 15 05:10:13.355205 sshd[4631]: Accepted publickey for core from 10.0.0.1 port 51592 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:10:13.359438 sshd-session[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:10:13.361449 containerd[1587]: time="2025-07-15T05:10:13.361319254Z" level=info msg="RemoveContainer for \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\" returns successfully" Jul 15 05:10:13.362814 kubelet[2786]: I0715 05:10:13.362723 2786 scope.go:117] "RemoveContainer" containerID="5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f" Jul 15 05:10:13.363485 containerd[1587]: time="2025-07-15T05:10:13.363386915Z" level=error msg="ContainerStatus for \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\": not found" Jul 15 05:10:13.365761 kubelet[2786]: E0715 05:10:13.365696 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\": not found" containerID="5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f" Jul 15 05:10:13.365937 kubelet[2786]: I0715 05:10:13.365768 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f"} err="failed to get container status \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fa3d77b31dff17dc28464e836fff22978d2d92b2f3d76b58c57d77166336a1f\": not found" Jul 15 05:10:13.365937 kubelet[2786]: I0715 05:10:13.365898 2786 scope.go:117] "RemoveContainer" containerID="c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b" Jul 15 05:10:13.366551 containerd[1587]: time="2025-07-15T05:10:13.366451467Z" level=error msg="ContainerStatus for \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\": not found" Jul 15 05:10:13.366819 kubelet[2786]: E0715 05:10:13.366777 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\": not found" containerID="c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b" Jul 15 05:10:13.367077 kubelet[2786]: I0715 05:10:13.366901 2786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b"} err="failed to get container status \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\": rpc error: code = NotFound desc = an error occurred when try to find container \"c44578c73e7390d3aba64bb700660e1ba35971124feececea9e3612d2209b86b\": not found" Jul 15 05:10:13.367077 kubelet[2786]: I0715 05:10:13.367066 2786 scope.go:117] "RemoveContainer" containerID="829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb" Jul 15 05:10:13.367691 containerd[1587]: time="2025-07-15T05:10:13.367590466Z" level=error msg="ContainerStatus for \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\": not found" Jul 15 05:10:13.368996 kubelet[2786]: E0715 05:10:13.368501 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\": not found" containerID="829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb" Jul 15 05:10:13.368996 kubelet[2786]: I0715 05:10:13.368554 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb"} err="failed to get container status \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\": rpc error: code = NotFound desc = an error occurred when try to find container \"829d46670bd75ee26114a08fec7a7c77eb68b9d621cb13e56b77c7605fbe6cfb\": not found" Jul 15 05:10:13.368996 kubelet[2786]: I0715 05:10:13.368584 2786 scope.go:117] "RemoveContainer" containerID="1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1" Jul 15 05:10:13.369321 containerd[1587]: time="2025-07-15T05:10:13.369136122Z" level=error msg="ContainerStatus for \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\": not found" Jul 15 05:10:13.369784 kubelet[2786]: E0715 05:10:13.369741 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\": not found" containerID="1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1" Jul 15 05:10:13.369784 kubelet[2786]: I0715 05:10:13.369776 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1"} err="failed to get container status \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e5ef5f20a1a3bdd4cb841b4682e60c3004808246272618222562aa4fa005dc1\": not found" Jul 15 05:10:13.369999 kubelet[2786]: I0715 05:10:13.369798 2786 scope.go:117] "RemoveContainer" containerID="30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03" Jul 15 05:10:13.370086 containerd[1587]: time="2025-07-15T05:10:13.370028907Z" level=error msg="ContainerStatus for \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\": not found" Jul 15 05:10:13.370721 kubelet[2786]: E0715 05:10:13.370675 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\": not found" containerID="30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03" Jul 15 05:10:13.371007 kubelet[2786]: I0715 05:10:13.370731 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03"} err="failed to get container status \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\": rpc error: code = NotFound desc = an error occurred when try to find container \"30e7feba1a8178ea415a1cb411cfcfe4dedb98bf37f797eeb3a2711719006d03\": not found" Jul 15 05:10:13.371007 kubelet[2786]: I0715 05:10:13.370776 2786 scope.go:117] "RemoveContainer" containerID="2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793" Jul 15 05:10:13.373273 containerd[1587]: time="2025-07-15T05:10:13.373221882Z" level=info msg="RemoveContainer for \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\"" Jul 15 05:10:13.373982 systemd-logind[1560]: New session 31 of user core. Jul 15 05:10:13.379536 containerd[1587]: time="2025-07-15T05:10:13.379455552Z" level=info msg="RemoveContainer for \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" returns successfully" Jul 15 05:10:13.379884 kubelet[2786]: I0715 05:10:13.379830 2786 scope.go:117] "RemoveContainer" containerID="2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793" Jul 15 05:10:13.380207 containerd[1587]: time="2025-07-15T05:10:13.380156686Z" level=error msg="ContainerStatus for \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\": not found" Jul 15 05:10:13.380393 kubelet[2786]: E0715 05:10:13.380321 2786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\": not found" containerID="2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793" Jul 15 05:10:13.380462 kubelet[2786]: I0715 05:10:13.380397 2786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793"} err="failed to get container status \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e3a74bad0b4b4e94737f652b2efcca18094f431fd61a774ee802c7bb5afa793\": not found" Jul 15 05:10:13.381630 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 15 05:10:14.034226 sshd[4634]: Connection closed by 10.0.0.1 port 51592 Jul 15 05:10:14.037016 sshd-session[4631]: pam_unix(sshd:session): session closed for user core Jul 15 05:10:14.052721 systemd[1]: sshd@30-10.0.0.20:22-10.0.0.1:51592.service: Deactivated successfully. Jul 15 05:10:14.057449 systemd[1]: session-31.scope: Deactivated successfully. 
Jul 15 05:10:14.059407 systemd-logind[1560]: Session 31 logged out. Waiting for processes to exit. Jul 15 05:10:14.065706 systemd-logind[1560]: Removed session 31. Jul 15 05:10:14.067711 systemd[1]: Started sshd@31-10.0.0.20:22-10.0.0.1:51602.service - OpenSSH per-connection server daemon (10.0.0.1:51602). Jul 15 05:10:14.075032 kubelet[2786]: I0715 05:10:14.074878 2786 memory_manager.go:355] "RemoveStaleState removing state" podUID="d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47" containerName="cilium-operator" Jul 15 05:10:14.075032 kubelet[2786]: I0715 05:10:14.074921 2786 memory_manager.go:355] "RemoveStaleState removing state" podUID="9b854359-adb1-4ddb-8c79-050d6ac3fd9a" containerName="cilium-agent" Jul 15 05:10:14.096917 systemd[1]: Created slice kubepods-burstable-podf23dc47b_7943_499a_b5fc_095f850d5d2d.slice - libcontainer container kubepods-burstable-podf23dc47b_7943_499a_b5fc_095f850d5d2d.slice. Jul 15 05:10:14.139913 sshd[4646]: Accepted publickey for core from 10.0.0.1 port 51602 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:10:14.142384 sshd-session[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:10:14.149465 systemd-logind[1560]: New session 32 of user core. Jul 15 05:10:14.161371 kubelet[2786]: I0715 05:10:14.161274 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f23dc47b-7943-499a-b5fc-095f850d5d2d-hubble-tls\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161371 kubelet[2786]: I0715 05:10:14.161375 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f23dc47b-7943-499a-b5fc-095f850d5d2d-cilium-run\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161611 kubelet[2786]: I0715 05:10:14.161396 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f23dc47b-7943-499a-b5fc-095f850d5d2d-cni-path\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161611 kubelet[2786]: I0715 05:10:14.161410 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f23dc47b-7943-499a-b5fc-095f850d5d2d-host-proc-sys-kernel\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161611 kubelet[2786]: I0715 05:10:14.161427 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f23dc47b-7943-499a-b5fc-095f850d5d2d-hostproc\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161611 kubelet[2786]: I0715 05:10:14.161440 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f23dc47b-7943-499a-b5fc-095f850d5d2d-host-proc-sys-net\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161611 kubelet[2786]: I0715 05:10:14.161454 2786 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f23dc47b-7943-499a-b5fc-095f850d5d2d-cilium-cgroup\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161611 kubelet[2786]: I0715 05:10:14.161470 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f23dc47b-7943-499a-b5fc-095f850d5d2d-xtables-lock\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161801 kubelet[2786]: I0715 05:10:14.161501 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f23dc47b-7943-499a-b5fc-095f850d5d2d-clustermesh-secrets\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161801 kubelet[2786]: I0715 05:10:14.161614 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb8pt\" (UniqueName: \"kubernetes.io/projected/f23dc47b-7943-499a-b5fc-095f850d5d2d-kube-api-access-hb8pt\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161801 kubelet[2786]: I0715 05:10:14.161682 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f23dc47b-7943-499a-b5fc-095f850d5d2d-bpf-maps\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161801 kubelet[2786]: I0715 05:10:14.161718 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f23dc47b-7943-499a-b5fc-095f850d5d2d-cilium-ipsec-secrets\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161801 kubelet[2786]: I0715 05:10:14.161741 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f23dc47b-7943-499a-b5fc-095f850d5d2d-etc-cni-netd\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161801 kubelet[2786]: I0715 05:10:14.161764 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f23dc47b-7943-499a-b5fc-095f850d5d2d-lib-modules\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.161937 kubelet[2786]: I0715 05:10:14.161789 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f23dc47b-7943-499a-b5fc-095f850d5d2d-cilium-config-path\") pod \"cilium-t2xrt\" (UID: \"f23dc47b-7943-499a-b5fc-095f850d5d2d\") " pod="kube-system/cilium-t2xrt" Jul 15 05:10:14.162115 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jul 15 05:10:14.217714 sshd[4649]: Connection closed by 10.0.0.1 port 51602 Jul 15 05:10:14.218229 sshd-session[4646]: pam_unix(sshd:session): session closed for user core Jul 15 05:10:14.237660 systemd[1]: sshd@31-10.0.0.20:22-10.0.0.1:51602.service: Deactivated successfully. Jul 15 05:10:14.240015 systemd[1]: session-32.scope: Deactivated successfully. Jul 15 05:10:14.241121 systemd-logind[1560]: Session 32 logged out. Waiting for processes to exit. Jul 15 05:10:14.244677 systemd[1]: Started sshd@32-10.0.0.20:22-10.0.0.1:51618.service - OpenSSH per-connection server daemon (10.0.0.1:51618). Jul 15 05:10:14.245615 systemd-logind[1560]: Removed session 32. Jul 15 05:10:14.315337 sshd[4656]: Accepted publickey for core from 10.0.0.1 port 51618 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:10:14.317574 sshd-session[4656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:10:14.320400 kubelet[2786]: I0715 05:10:14.320349 2786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b854359-adb1-4ddb-8c79-050d6ac3fd9a" path="/var/lib/kubelet/pods/9b854359-adb1-4ddb-8c79-050d6ac3fd9a/volumes" Jul 15 05:10:14.321455 kubelet[2786]: I0715 05:10:14.321421 2786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47" path="/var/lib/kubelet/pods/d9d2d2e2-d9aa-43a8-ac74-ce76ad414a47/volumes" Jul 15 05:10:14.323102 systemd-logind[1560]: New session 33 of user core. Jul 15 05:10:14.336755 systemd[1]: Started session-33.scope - Session 33 of User core. Jul 15 05:10:14.402970 kubelet[2786]: E0715 05:10:14.402595 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:14.405017 containerd[1587]: time="2025-07-15T05:10:14.404842858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2xrt,Uid:f23dc47b-7943-499a-b5fc-095f850d5d2d,Namespace:kube-system,Attempt:0,}" Jul 15 05:10:14.462375 containerd[1587]: time="2025-07-15T05:10:14.462242789Z" level=info msg="connecting to shim afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8" address="unix:///run/containerd/s/8b8bad9c5eba37ff2c7848c5d3df9427480a971371de6ab650470080ecae694d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:10:14.509738 systemd[1]: Started cri-containerd-afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8.scope - libcontainer container afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8. 
Jul 15 05:10:14.549660 containerd[1587]: time="2025-07-15T05:10:14.549571234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2xrt,Uid:f23dc47b-7943-499a-b5fc-095f850d5d2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\"" Jul 15 05:10:14.550812 kubelet[2786]: E0715 05:10:14.550738 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:14.553510 containerd[1587]: time="2025-07-15T05:10:14.553458588Z" level=info msg="CreateContainer within sandbox \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 05:10:14.567141 containerd[1587]: time="2025-07-15T05:10:14.566976511Z" level=info msg="Container af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:10:14.579685 containerd[1587]: time="2025-07-15T05:10:14.579611458Z" level=info msg="CreateContainer within sandbox \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4\"" Jul 15 05:10:14.580391 containerd[1587]: time="2025-07-15T05:10:14.580348068Z" level=info msg="StartContainer for \"af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4\"" Jul 15 05:10:14.581460 containerd[1587]: time="2025-07-15T05:10:14.581420441Z" level=info msg="connecting to shim af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4" address="unix:///run/containerd/s/8b8bad9c5eba37ff2c7848c5d3df9427480a971371de6ab650470080ecae694d" protocol=ttrpc version=3 Jul 15 05:10:14.612597 systemd[1]: Started cri-containerd-af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4.scope - libcontainer container af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4. Jul 15 05:10:14.729796 systemd[1]: cri-containerd-af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4.scope: Deactivated successfully. 
Jul 15 05:10:14.731881 containerd[1587]: time="2025-07-15T05:10:14.731830495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4\" id:\"af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4\" pid:4730 exited_at:{seconds:1752556214 nanos:731266992}" Jul 15 05:10:14.753022 containerd[1587]: time="2025-07-15T05:10:14.752905906Z" level=info msg="received exit event container_id:\"af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4\" id:\"af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4\" pid:4730 exited_at:{seconds:1752556214 nanos:731266992}" Jul 15 05:10:14.754586 containerd[1587]: time="2025-07-15T05:10:14.754463014Z" level=info msg="StartContainer for \"af1e2ce5f4aae59c09880b9f122d92fad9288405d259834d820d372341821dc4\" returns successfully" Jul 15 05:10:15.408983 kubelet[2786]: E0715 05:10:15.408906 2786 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 05:10:15.764021 kubelet[2786]: E0715 05:10:15.763379 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:15.766590 containerd[1587]: time="2025-07-15T05:10:15.766538228Z" level=info msg="CreateContainer within sandbox \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 05:10:15.845211 containerd[1587]: time="2025-07-15T05:10:15.844946382Z" level=info msg="Container 6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:10:15.860524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752925732.mount: Deactivated successfully. Jul 15 05:10:15.878100 containerd[1587]: time="2025-07-15T05:10:15.877976532Z" level=info msg="CreateContainer within sandbox \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a\"" Jul 15 05:10:15.887131 containerd[1587]: time="2025-07-15T05:10:15.887064365Z" level=info msg="StartContainer for \"6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a\"" Jul 15 05:10:15.889071 containerd[1587]: time="2025-07-15T05:10:15.888734305Z" level=info msg="connecting to shim 6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a" address="unix:///run/containerd/s/8b8bad9c5eba37ff2c7848c5d3df9427480a971371de6ab650470080ecae694d" protocol=ttrpc version=3 Jul 15 05:10:15.921865 systemd[1]: Started cri-containerd-6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a.scope - libcontainer container 6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a. Jul 15 05:10:15.969390 containerd[1587]: time="2025-07-15T05:10:15.969226962Z" level=info msg="StartContainer for \"6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a\" returns successfully" Jul 15 05:10:15.979909 systemd[1]: cri-containerd-6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a.scope: Deactivated successfully. 
Jul 15 05:10:15.983203 containerd[1587]: time="2025-07-15T05:10:15.983137253Z" level=info msg="received exit event container_id:\"6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a\" id:\"6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a\" pid:4776 exited_at:{seconds:1752556215 nanos:982421873}" Jul 15 05:10:15.983582 containerd[1587]: time="2025-07-15T05:10:15.983439122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a\" id:\"6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a\" pid:4776 exited_at:{seconds:1752556215 nanos:982421873}" Jul 15 05:10:16.273296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6813c778a3156e1c05dd6a12dd75631820232c7aea2273b1da01a3840a03b79a-rootfs.mount: Deactivated successfully. Jul 15 05:10:16.767276 kubelet[2786]: E0715 05:10:16.767233 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:16.769765 containerd[1587]: time="2025-07-15T05:10:16.769728434Z" level=info msg="CreateContainer within sandbox \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 05:10:17.089187 containerd[1587]: time="2025-07-15T05:10:17.089135274Z" level=info msg="Container 1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:10:17.093712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3414167179.mount: Deactivated successfully. Jul 15 05:10:17.558415 containerd[1587]: time="2025-07-15T05:10:17.558233407Z" level=info msg="CreateContainer within sandbox \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129\"" Jul 15 05:10:17.558942 containerd[1587]: time="2025-07-15T05:10:17.558907208Z" level=info msg="StartContainer for \"1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129\"" Jul 15 05:10:17.560505 containerd[1587]: time="2025-07-15T05:10:17.560459886Z" level=info msg="connecting to shim 1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129" address="unix:///run/containerd/s/8b8bad9c5eba37ff2c7848c5d3df9427480a971371de6ab650470080ecae694d" protocol=ttrpc version=3 Jul 15 05:10:17.582648 systemd[1]: Started cri-containerd-1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129.scope - libcontainer container 1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129. Jul 15 05:10:17.638238 systemd[1]: cri-containerd-1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129.scope: Deactivated successfully. 
Jul 15 05:10:17.639404 containerd[1587]: time="2025-07-15T05:10:17.639350929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129\" id:\"1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129\" pid:4821 exited_at:{seconds:1752556217 nanos:638829385}" Jul 15 05:10:17.776860 containerd[1587]: time="2025-07-15T05:10:17.776725371Z" level=info msg="received exit event container_id:\"1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129\" id:\"1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129\" pid:4821 exited_at:{seconds:1752556217 nanos:638829385}" Jul 15 05:10:17.780392 containerd[1587]: time="2025-07-15T05:10:17.780083195Z" level=info msg="StartContainer for \"1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129\" returns successfully" Jul 15 05:10:17.808874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e38c322a11a02f8c25b899978f32103145e19c9f31afbc1b08bbb50b9a23129-rootfs.mount: Deactivated successfully. Jul 15 05:10:18.788921 kubelet[2786]: E0715 05:10:18.788866 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:18.791014 containerd[1587]: time="2025-07-15T05:10:18.790704260Z" level=info msg="CreateContainer within sandbox \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 05:10:19.107055 containerd[1587]: time="2025-07-15T05:10:19.106999392Z" level=info msg="Container 061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:10:19.294698 containerd[1587]: time="2025-07-15T05:10:19.294633434Z" level=info msg="CreateContainer within sandbox \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7\"" Jul 15 05:10:19.295447 containerd[1587]: time="2025-07-15T05:10:19.295386405Z" level=info msg="StartContainer for \"061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7\"" Jul 15 05:10:19.296549 containerd[1587]: time="2025-07-15T05:10:19.296519641Z" level=info msg="connecting to shim 061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7" address="unix:///run/containerd/s/8b8bad9c5eba37ff2c7848c5d3df9427480a971371de6ab650470080ecae694d" protocol=ttrpc version=3 Jul 15 05:10:19.328560 systemd[1]: Started cri-containerd-061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7.scope - libcontainer container 061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7. Jul 15 05:10:19.366965 systemd[1]: cri-containerd-061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7.scope: Deactivated successfully. 
Jul 15 05:10:19.367594 containerd[1587]: time="2025-07-15T05:10:19.367525993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7\" id:\"061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7\" pid:4861 exited_at:{seconds:1752556219 nanos:367209848}" Jul 15 05:10:19.463699 containerd[1587]: time="2025-07-15T05:10:19.463610871Z" level=info msg="received exit event container_id:\"061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7\" id:\"061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7\" pid:4861 exited_at:{seconds:1752556219 nanos:367209848}" Jul 15 05:10:19.472951 containerd[1587]: time="2025-07-15T05:10:19.472913936Z" level=info msg="StartContainer for \"061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7\" returns successfully" Jul 15 05:10:19.488800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-061d330d587691a2c313afa31d7a738857d5f112e572b2f9e6ed5a85c219dab7-rootfs.mount: Deactivated successfully. Jul 15 05:10:19.838119 kubelet[2786]: E0715 05:10:19.838060 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:19.840602 containerd[1587]: time="2025-07-15T05:10:19.840537211Z" level=info msg="CreateContainer within sandbox \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 05:10:19.956208 containerd[1587]: time="2025-07-15T05:10:19.956141581Z" level=info msg="Container 0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:10:20.032009 containerd[1587]: time="2025-07-15T05:10:20.031946659Z" level=info msg="CreateContainer within sandbox \"afe53e66f8b65401740ceaf968534e201a0b56f5c871e67e3259256741a1c9c8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7\"" Jul 15 05:10:20.032733 containerd[1587]: time="2025-07-15T05:10:20.032662549Z" level=info msg="StartContainer for \"0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7\"" Jul 15 05:10:20.034298 containerd[1587]: time="2025-07-15T05:10:20.034255012Z" level=info msg="connecting to shim 0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7" address="unix:///run/containerd/s/8b8bad9c5eba37ff2c7848c5d3df9427480a971371de6ab650470080ecae694d" protocol=ttrpc version=3 Jul 15 05:10:20.058581 systemd[1]: Started cri-containerd-0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7.scope - libcontainer container 0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7. 
Jul 15 05:10:20.160743 containerd[1587]: time="2025-07-15T05:10:20.160566418Z" level=info msg="StartContainer for \"0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7\" returns successfully" Jul 15 05:10:20.303729 containerd[1587]: time="2025-07-15T05:10:20.303666935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7\" id:\"5cfec2af573ce23f2c8f65613149f70cee397dac4a9113304608d99767d3a83e\" pid:4932 exited_at:{seconds:1752556220 nanos:303368904}" Jul 15 05:10:20.648583 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 15 05:10:20.844445 kubelet[2786]: E0715 05:10:20.844407 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:20.859836 kubelet[2786]: I0715 05:10:20.859767 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t2xrt" podStartSLOduration=6.859742045 podStartE2EDuration="6.859742045s" podCreationTimestamp="2025-07-15 05:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:10:20.859253904 +0000 UTC m=+130.642244860" watchObservedRunningTime="2025-07-15 05:10:20.859742045 +0000 UTC m=+130.642733001" Jul 15 05:10:21.846044 kubelet[2786]: E0715 05:10:21.845984 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:22.848153 kubelet[2786]: E0715 05:10:22.848102 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:22.940666 containerd[1587]: time="2025-07-15T05:10:22.940577150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7\" id:\"91f6e442dd55589333ef1b35dda1e644eed61d91cb33625838018dc34ad4ca33\" pid:5207 exit_status:1 exited_at:{seconds:1752556222 nanos:939417603}" Jul 15 05:10:23.917649 systemd-networkd[1484]: lxc_health: Link UP Jul 15 05:10:23.919764 systemd-networkd[1484]: lxc_health: Gained carrier Jul 15 05:10:24.404753 kubelet[2786]: E0715 05:10:24.404381 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:24.851830 kubelet[2786]: E0715 05:10:24.851776 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:25.123053 containerd[1587]: time="2025-07-15T05:10:25.122910311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7\" id:\"39b20eab81d380f082752874084eaf17fba43179fac9cd7d79cc769fb24bdfa6\" pid:5468 exited_at:{seconds:1752556225 nanos:122207426}" Jul 15 05:10:25.857362 kubelet[2786]: E0715 05:10:25.856924 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:25.935791 systemd-networkd[1484]: lxc_health: Gained IPv6LL Jul 15 
05:10:27.451183 containerd[1587]: time="2025-07-15T05:10:27.450983222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7\" id:\"2d25063a14a4ab6ffc410b9b060825209266837820222757f59e61e0e75602b9\" pid:5494 exited_at:{seconds:1752556227 nanos:450452562}" Jul 15 05:10:29.567919 containerd[1587]: time="2025-07-15T05:10:29.567861252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7\" id:\"492f1e3046c18f86a828dd1952c02cd97d50109e9d85db588482d18e2f565cf8\" pid:5526 exited_at:{seconds:1752556229 nanos:567281088}" Jul 15 05:10:31.317525 kubelet[2786]: E0715 05:10:31.317454 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:10:31.709473 containerd[1587]: time="2025-07-15T05:10:31.709398248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a210442a4ca58862f7bd5c174467fae524dd20beaf1674f3c259b707f287ab7\" id:\"5da0983a0bc21af52376f6c26e49c8f1f1ba06776af237e427d0bba8a2346777\" pid:5549 exited_at:{seconds:1752556231 nanos:708940736}" Jul 15 05:10:31.717215 sshd[4663]: Connection closed by 10.0.0.1 port 51618 Jul 15 05:10:31.717826 sshd-session[4656]: pam_unix(sshd:session): session closed for user core Jul 15 05:10:31.723511 systemd[1]: sshd@32-10.0.0.20:22-10.0.0.1:51618.service: Deactivated successfully. Jul 15 05:10:31.725892 systemd[1]: session-33.scope: Deactivated successfully. Jul 15 05:10:31.726712 systemd-logind[1560]: Session 33 logged out. Waiting for processes to exit. Jul 15 05:10:31.728011 systemd-logind[1560]: Removed session 33.