Jul 11 00:16:40.030482 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:46:23 -00 2025
Jul 11 00:16:40.030515 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:16:40.030532 kernel: BIOS-provided physical RAM map:
Jul 11 00:16:40.030541 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 11 00:16:40.030549 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 11 00:16:40.030557 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 11 00:16:40.030567 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 11 00:16:40.030577 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 11 00:16:40.030585 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 11 00:16:40.030598 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 11 00:16:40.030607 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 11 00:16:40.030615 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 11 00:16:40.030629 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 11 00:16:40.030638 kernel: NX (Execute Disable) protection: active
Jul 11 00:16:40.030650 kernel: APIC: Static calls initialized
Jul 11 00:16:40.030667 kernel: SMBIOS 2.8 present.
Jul 11 00:16:40.030677 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 11 00:16:40.030686 kernel: Hypervisor detected: KVM
Jul 11 00:16:40.030695 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 00:16:40.030704 kernel: kvm-clock: using sched offset of 2848710659 cycles
Jul 11 00:16:40.030714 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 00:16:40.030736 kernel: tsc: Detected 2794.748 MHz processor
Jul 11 00:16:40.030746 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 00:16:40.030756 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 00:16:40.030770 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 11 00:16:40.030780 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 11 00:16:40.030790 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 00:16:40.030799 kernel: Using GB pages for direct mapping
Jul 11 00:16:40.030809 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:16:40.030819 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 11 00:16:40.030828 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:40.030838 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:40.030848 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:40.030862 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 11 00:16:40.030871 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:40.030881 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:40.030890 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:40.030900 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:40.030909 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 11 00:16:40.030919 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 11 00:16:40.030934 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 11 00:16:40.030947 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 11 00:16:40.030957 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 11 00:16:40.030967 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 11 00:16:40.030977 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 11 00:16:40.030987 kernel: No NUMA configuration found
Jul 11 00:16:40.030997 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 11 00:16:40.031011 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 11 00:16:40.031021 kernel: Zone ranges:
Jul 11 00:16:40.031032 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 00:16:40.031042 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 11 00:16:40.031052 kernel: Normal empty
Jul 11 00:16:40.031062 kernel: Movable zone start for each node
Jul 11 00:16:40.031072 kernel: Early memory node ranges
Jul 11 00:16:40.031083 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 11 00:16:40.031092 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 11 00:16:40.031102 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 11 00:16:40.031160 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 00:16:40.031175 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 11 00:16:40.031185 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 11 00:16:40.031195 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 00:16:40.031219 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 00:16:40.031241 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 00:16:40.031263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 00:16:40.031285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 00:16:40.031308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 00:16:40.031340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 00:16:40.031362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 00:16:40.031385 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 00:16:40.031404 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 11 00:16:40.031414 kernel: TSC deadline timer available
Jul 11 00:16:40.031425 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 11 00:16:40.031435 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 00:16:40.031445 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 11 00:16:40.031459 kernel: kvm-guest: setup PV sched yield
Jul 11 00:16:40.031474 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 11 00:16:40.031484 kernel: Booting paravirtualized kernel on KVM
Jul 11 00:16:40.031494 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 00:16:40.031503 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 11 00:16:40.031512 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 11 00:16:40.031522 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 11 00:16:40.031531 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 11 00:16:40.031540 kernel: kvm-guest: PV spinlocks enabled
Jul 11 00:16:40.031550 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 11 00:16:40.031565 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:16:40.031575 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:16:40.031584 kernel: random: crng init done
Jul 11 00:16:40.031594 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:16:40.031606 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:16:40.031615 kernel: Fallback order for Node 0: 0
Jul 11 00:16:40.031627 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 11 00:16:40.031637 kernel: Policy zone: DMA32
Jul 11 00:16:40.031653 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:16:40.031663 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22744K rodata, 42872K init, 2320K bss, 136900K reserved, 0K cma-reserved)
Jul 11 00:16:40.031672 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:16:40.031682 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 11 00:16:40.031703 kernel: ftrace: allocated 149 pages with 4 groups
Jul 11 00:16:40.031716 kernel: Dynamic Preempt: voluntary
Jul 11 00:16:40.031747 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:16:40.031765 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:16:40.031775 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:16:40.031789 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:16:40.031799 kernel: Rude variant of Tasks RCU enabled.
Jul 11 00:16:40.031809 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:16:40.031819 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:16:40.031833 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:16:40.031843 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 11 00:16:40.031853 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:16:40.031863 kernel: Console: colour VGA+ 80x25
Jul 11 00:16:40.031872 kernel: printk: console [ttyS0] enabled
Jul 11 00:16:40.031886 kernel: ACPI: Core revision 20230628
Jul 11 00:16:40.031897 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 11 00:16:40.031906 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 00:16:40.031916 kernel: x2apic enabled
Jul 11 00:16:40.031925 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 00:16:40.031935 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 11 00:16:40.031946 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 11 00:16:40.031956 kernel: kvm-guest: setup PV IPIs
Jul 11 00:16:40.031981 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 00:16:40.031991 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 11 00:16:40.032001 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 11 00:16:40.032011 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 11 00:16:40.032025 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 11 00:16:40.032036 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 11 00:16:40.032046 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 00:16:40.032057 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 00:16:40.032068 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 00:16:40.032082 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 11 00:16:40.032093 kernel: RETBleed: Mitigation: untrained return thunk
Jul 11 00:16:40.032126 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 11 00:16:40.032138 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 11 00:16:40.032149 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 11 00:16:40.032160 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 11 00:16:40.032170 kernel: x86/bugs: return thunk changed
Jul 11 00:16:40.032181 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 11 00:16:40.032197 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 11 00:16:40.032208 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 11 00:16:40.032219 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 11 00:16:40.032229 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 11 00:16:40.032240 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 11 00:16:40.032251 kernel: Freeing SMP alternatives memory: 32K
Jul 11 00:16:40.032261 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:16:40.032271 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:16:40.032282 kernel: landlock: Up and running.
Jul 11 00:16:40.032296 kernel: SELinux: Initializing.
Jul 11 00:16:40.032307 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:16:40.032318 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:16:40.032329 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 11 00:16:40.032339 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:16:40.032350 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:16:40.032360 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:16:40.032371 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 11 00:16:40.032386 kernel: ... version: 0
Jul 11 00:16:40.032402 kernel: ... bit width: 48
Jul 11 00:16:40.032413 kernel: ... generic registers: 6
Jul 11 00:16:40.032423 kernel: ... value mask: 0000ffffffffffff
Jul 11 00:16:40.032434 kernel: ... max period: 00007fffffffffff
Jul 11 00:16:40.032444 kernel: ... fixed-purpose events: 0
Jul 11 00:16:40.032455 kernel: ... event mask: 000000000000003f
Jul 11 00:16:40.032465 kernel: signal: max sigframe size: 1776
Jul 11 00:16:40.032476 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:16:40.032487 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:16:40.032502 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:16:40.032512 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 00:16:40.032522 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 00:16:40.032533 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:16:40.032544 kernel: smpboot: Max logical packages: 1
Jul 11 00:16:40.032554 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 11 00:16:40.032565 kernel: devtmpfs: initialized
Jul 11 00:16:40.032576 kernel: x86/mm: Memory block size: 128MB
Jul 11 00:16:40.032587 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:16:40.032604 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:16:40.032615 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:16:40.032625 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:16:40.032636 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:16:40.032647 kernel: audit: type=2000 audit(1752192998.144:1): state=initialized audit_enabled=0 res=1
Jul 11 00:16:40.032658 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:16:40.032668 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 00:16:40.032679 kernel: cpuidle: using governor menu
Jul 11 00:16:40.032689 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:16:40.032704 kernel: dca service started, version 1.12.1
Jul 11 00:16:40.032714 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 11 00:16:40.032734 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 11 00:16:40.032744 kernel: PCI: Using configuration type 1 for base access
Jul 11 00:16:40.032754 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 00:16:40.032764 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:16:40.032774 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:16:40.032784 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:16:40.032794 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:16:40.032808 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:16:40.032818 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:16:40.032828 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:16:40.032838 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:16:40.032849 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 11 00:16:40.032859 kernel: ACPI: Interpreter enabled
Jul 11 00:16:40.032869 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 00:16:40.032879 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 00:16:40.032889 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 00:16:40.032902 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 00:16:40.032913 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 11 00:16:40.032923 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:16:40.033311 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:16:40.033502 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 11 00:16:40.033676 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 11 00:16:40.033692 kernel: PCI host bridge to bus 0000:00
Jul 11 00:16:40.033893 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 00:16:40.034162 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 00:16:40.034343 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 00:16:40.034503 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 11 00:16:40.034658 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 00:16:40.034821 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 11 00:16:40.034975 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:16:40.035239 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 11 00:16:40.035431 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 11 00:16:40.035599 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 11 00:16:40.035781 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 11 00:16:40.035954 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 11 00:16:40.036160 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 00:16:40.036368 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:16:40.036551 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 11 00:16:40.036739 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 11 00:16:40.036913 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 11 00:16:40.037155 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 11 00:16:40.037333 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 11 00:16:40.037499 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 11 00:16:40.037666 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 11 00:16:40.037864 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 11 00:16:40.038019 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 11 00:16:40.038218 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 11 00:16:40.038376 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 11 00:16:40.038537 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 11 00:16:40.038736 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 11 00:16:40.038914 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 11 00:16:40.039139 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 11 00:16:40.039321 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 11 00:16:40.039489 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 11 00:16:40.039689 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 11 00:16:40.039871 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 11 00:16:40.039889 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 00:16:40.039909 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 00:16:40.039920 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 00:16:40.039931 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 00:16:40.039943 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 11 00:16:40.039954 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 11 00:16:40.039965 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 11 00:16:40.039975 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 11 00:16:40.039987 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 00:16:40.039999 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 11 00:16:40.040014 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 11 00:16:40.040026 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 11 00:16:40.040038 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 11 00:16:40.040049 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 11 00:16:40.040060 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 11 00:16:40.040071 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 11 00:16:40.040083 kernel: iommu: Default domain type: Translated
Jul 11 00:16:40.040095 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 00:16:40.040106 kernel: PCI: Using ACPI for IRQ routing
Jul 11 00:16:40.040253 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 00:16:40.040266 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 11 00:16:40.040277 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 11 00:16:40.040464 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 11 00:16:40.040626 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 11 00:16:40.040801 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 00:16:40.040817 kernel: vgaarb: loaded
Jul 11 00:16:40.040828 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 00:16:40.040845 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 11 00:16:40.040856 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 00:16:40.040866 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:16:40.040877 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:16:40.040887 kernel: pnp: PnP ACPI init
Jul 11 00:16:40.041083 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 11 00:16:40.041103 kernel: pnp: PnP ACPI: found 6 devices
Jul 11 00:16:40.041132 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 00:16:40.041151 kernel: NET: Registered PF_INET protocol family
Jul 11 00:16:40.041162 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:16:40.041174 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:16:40.041185 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:16:40.041197 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:16:40.041208 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:16:40.041219 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:16:40.041229 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:16:40.041240 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:16:40.041254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:16:40.041265 kernel: NET: Registered PF_XDP protocol family
Jul 11 00:16:40.041418 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 00:16:40.041558 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 00:16:40.041698 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 00:16:40.041854 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 11 00:16:40.041991 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 11 00:16:40.042217 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 11 00:16:40.042241 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:16:40.042253 kernel: Initialise system trusted keyrings
Jul 11 00:16:40.042265 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:16:40.042276 kernel: Key type asymmetric registered
Jul 11 00:16:40.042287 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:16:40.042299 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 11 00:16:40.042310 kernel: io scheduler mq-deadline registered
Jul 11 00:16:40.042321 kernel: io scheduler kyber registered
Jul 11 00:16:40.042332 kernel: io scheduler bfq registered
Jul 11 00:16:40.042343 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 00:16:40.042360 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 11 00:16:40.042371 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 11 00:16:40.042383 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 11 00:16:40.042394 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:16:40.042406 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 00:16:40.042417 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 00:16:40.042428 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 00:16:40.042439 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 00:16:40.042623 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 00:16:40.042645 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 00:16:40.042799 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 00:16:40.042943 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:16:39 UTC (1752192999)
Jul 11 00:16:40.043086 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 11 00:16:40.043101 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 00:16:40.043127 kernel: hpet: Lost 1 RTC interrupts
Jul 11 00:16:40.043138 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:16:40.043155 kernel: Segment Routing with IPv6
Jul 11 00:16:40.043165 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:16:40.043175 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:16:40.043186 kernel: Key type dns_resolver registered
Jul 11 00:16:40.043197 kernel: IPI shorthand broadcast: enabled
Jul 11 00:16:40.043207 kernel: sched_clock: Marking stable (1209003328, 108950554)->(1393409791, -75455909)
Jul 11 00:16:40.043217 kernel: registered taskstats version 1
Jul 11 00:16:40.043228 kernel: Loading compiled-in X.509 certificates
Jul 11 00:16:40.043239 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 5956f0842928c96096c398e9db55919cd236a39f'
Jul 11 00:16:40.043249 kernel: Key type .fscrypt registered
Jul 11 00:16:40.043263 kernel: Key type fscrypt-provisioning registered
Jul 11 00:16:40.043274 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:16:40.043286 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:16:40.043297 kernel: ima: No architecture policies found
Jul 11 00:16:40.043308 kernel: clk: Disabling unused clocks
Jul 11 00:16:40.043319 kernel: Freeing unused kernel image (initmem) memory: 42872K
Jul 11 00:16:40.043331 kernel: Write protecting the kernel read-only data: 36864k
Jul 11 00:16:40.043342 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Jul 11 00:16:40.043357 kernel: Run /init as init process
Jul 11 00:16:40.043369 kernel: with arguments:
Jul 11 00:16:40.043380 kernel: /init
Jul 11 00:16:40.043391 kernel: with environment:
Jul 11 00:16:40.043402 kernel: HOME=/
Jul 11 00:16:40.043413 kernel: TERM=linux
Jul 11 00:16:40.043425 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:16:40.043438 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:16:40.043456 systemd[1]: Detected virtualization kvm.
Jul 11 00:16:40.043468 systemd[1]: Detected architecture x86-64.
Jul 11 00:16:40.043479 systemd[1]: Running in initrd.
Jul 11 00:16:40.043490 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:16:40.043502 systemd[1]: Hostname set to .
Jul 11 00:16:40.043514 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:16:40.043525 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:16:40.043538 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:16:40.043554 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:16:40.043568 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:16:40.043598 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:16:40.043615 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:16:40.043628 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:16:40.043646 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:16:40.043659 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:16:40.043672 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:16:40.043685 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:16:40.043697 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:16:40.043710 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:16:40.043734 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:16:40.043748 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:16:40.043765 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:16:40.043776 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:16:40.043789 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:16:40.043802 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:16:40.043815 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:16:40.043828 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:16:40.043840 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:16:40.043853 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:16:40.043866 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:16:40.043883 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:16:40.043896 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:16:40.043909 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:16:40.043921 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:16:40.043935 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:16:40.043948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:40.043962 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:16:40.043975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:16:40.043991 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:16:40.044039 systemd-journald[194]: Collecting audit messages is disabled.
Jul 11 00:16:40.044073 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:16:40.044086 systemd-journald[194]: Journal started
Jul 11 00:16:40.044136 systemd-journald[194]: Runtime Journal (/run/log/journal/985db0cbc18b4b6e848e62d0f7996023) is 6.0M, max 48.4M, 42.3M free.
Jul 11 00:16:40.031787 systemd-modules-load[195]: Inserted module 'overlay'
Jul 11 00:16:40.083485 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:16:40.083560 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:16:40.083581 kernel: Bridge firewalling registered
Jul 11 00:16:40.083596 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:16:40.064580 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jul 11 00:16:40.083873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:16:40.095607 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:16:40.099346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:16:40.103190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:16:40.106715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:40.109581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:16:40.114129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:16:40.119499 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:16:40.125048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:16:40.136349 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:16:40.137851 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:16:40.141675 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:16:40.162384 dracut-cmdline[231]: dracut-dracut-053
Jul 11 00:16:40.182237 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:16:40.181035 systemd-resolved[227]: Positive Trust Anchors:
Jul 11 00:16:40.181046 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:16:40.181086 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:16:40.184975 systemd-resolved[227]: Defaulting to hostname 'linux'.
Jul 11 00:16:40.186537 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:16:40.189233 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:16:40.260186 kernel: SCSI subsystem initialized
Jul 11 00:16:40.271152 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:16:40.283187 kernel: iscsi: registered transport (tcp)
Jul 11 00:16:40.309185 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:16:40.309269 kernel: QLogic iSCSI HBA Driver
Jul 11 00:16:40.381961 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:16:40.403309 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:16:40.438185 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:16:40.438277 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:16:40.439480 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:16:40.493173 kernel: raid6: avx2x4 gen() 24158 MB/s
Jul 11 00:16:40.510180 kernel: raid6: avx2x2 gen() 27758 MB/s
Jul 11 00:16:40.527330 kernel: raid6: avx2x1 gen() 22656 MB/s
Jul 11 00:16:40.527425 kernel: raid6: using algorithm avx2x2 gen() 27758 MB/s
Jul 11 00:16:40.545466 kernel: raid6: .... xor() 16237 MB/s, rmw enabled
Jul 11 00:16:40.545565 kernel: raid6: using avx2x2 recovery algorithm
Jul 11 00:16:40.570204 kernel: xor: automatically using best checksumming function avx
Jul 11 00:16:40.793151 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:16:40.812193 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:16:40.826505 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:16:40.842764 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jul 11 00:16:40.847933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:16:40.883428 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:16:40.901646 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Jul 11 00:16:40.945023 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:16:40.955273 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:16:41.039901 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:16:41.050334 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:16:41.067092 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:16:41.071395 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:16:41.075479 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:16:41.078264 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:16:41.090444 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:16:41.097158 kernel: cryptd: max_cpu_qlen set to 1000
Jul 11 00:16:41.105202 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 11 00:16:41.109491 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:16:41.116172 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:16:41.116204 kernel: GPT:9289727 != 19775487
Jul 11 00:16:41.116215 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:16:41.116225 kernel: GPT:9289727 != 19775487
Jul 11 00:16:41.116236 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:16:41.116246 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:41.117603 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:16:41.117740 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:16:41.122652 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:16:41.123974 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:16:41.124098 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:41.125684 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:41.140303 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 11 00:16:41.140382 kernel: libata version 3.00 loaded.
Jul 11 00:16:41.140412 kernel: AES CTR mode by8 optimization enabled
Jul 11 00:16:41.190546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:41.196639 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:16:41.204144 kernel: ahci 0000:00:1f.2: version 3.0
Jul 11 00:16:41.205139 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 11 00:16:41.208530 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 11 00:16:41.208734 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 11 00:16:41.215643 kernel: scsi host0: ahci
Jul 11 00:16:41.215997 kernel: scsi host1: ahci
Jul 11 00:16:41.222137 kernel: BTRFS: device fsid 54fb9359-b495-4b0c-b313-b0e2955e4a38 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (462)
Jul 11 00:16:41.223549 kernel: scsi host2: ahci
Jul 11 00:16:41.225404 kernel: scsi host3: ahci
Jul 11 00:16:41.226415 kernel: scsi host4: ahci
Jul 11 00:16:41.228209 kernel: scsi host5: ahci
Jul 11 00:16:41.228399 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jul 11 00:16:41.228412 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jul 11 00:16:41.228423 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jul 11 00:16:41.228437 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jul 11 00:16:41.228451 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jul 11 00:16:41.228464 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jul 11 00:16:41.238604 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Jul 11 00:16:41.242786 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:16:41.275207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:41.287472 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:16:41.293837 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:16:41.295192 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:16:41.302601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:16:41.314462 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:16:41.317019 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:16:41.328751 disk-uuid[554]: Primary Header is updated.
Jul 11 00:16:41.328751 disk-uuid[554]: Secondary Entries is updated.
Jul 11 00:16:41.328751 disk-uuid[554]: Secondary Header is updated.
Jul 11 00:16:41.333148 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:41.338160 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:41.375657 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:16:41.536190 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 11 00:16:41.536278 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 11 00:16:41.545160 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 11 00:16:41.545258 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 11 00:16:41.545272 kernel: ata3.00: applying bridge limits
Jul 11 00:16:41.545284 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 11 00:16:41.546143 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 11 00:16:41.547163 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 11 00:16:41.548152 kernel: ata3.00: configured for UDMA/100
Jul 11 00:16:41.550144 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 11 00:16:41.604175 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 11 00:16:41.604620 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 11 00:16:41.618140 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 11 00:16:42.360939 disk-uuid[555]: The operation has completed successfully.
Jul 11 00:16:42.362282 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:42.391972 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:16:42.392099 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:16:42.422283 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:16:42.443346 sh[592]: Success
Jul 11 00:16:42.457152 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 11 00:16:42.497807 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:16:42.573564 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:16:42.578049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:16:42.621555 kernel: BTRFS info (device dm-0): first mount of filesystem 54fb9359-b495-4b0c-b313-b0e2955e4a38
Jul 11 00:16:42.621589 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:16:42.621601 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:16:42.622643 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:16:42.626629 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:16:42.632156 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:16:42.633153 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:16:42.634017 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:16:42.637787 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:16:42.653326 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:16:42.653384 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:16:42.653396 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:16:42.656140 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:16:42.667035 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:16:42.669125 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:16:42.762807 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:16:42.779354 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:16:42.780206 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:16:42.785810 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:16:42.821914 systemd-networkd[773]: lo: Link UP
Jul 11 00:16:42.821928 systemd-networkd[773]: lo: Gained carrier
Jul 11 00:16:42.824026 systemd-networkd[773]: Enumeration completed
Jul 11 00:16:42.824558 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:42.824563 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:16:42.824726 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:16:42.830776 systemd-networkd[773]: eth0: Link UP
Jul 11 00:16:42.830781 systemd-networkd[773]: eth0: Gained carrier
Jul 11 00:16:42.830795 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:42.831542 systemd[1]: Reached target network.target - Network.
Jul 11 00:16:42.848493 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:16:42.907074 ignition[770]: Ignition 2.19.0
Jul 11 00:16:42.907091 ignition[770]: Stage: fetch-offline
Jul 11 00:16:42.907162 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:42.907177 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:42.907318 ignition[770]: parsed url from cmdline: ""
Jul 11 00:16:42.907324 ignition[770]: no config URL provided
Jul 11 00:16:42.907331 ignition[770]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:16:42.907346 ignition[770]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:16:42.907390 ignition[770]: op(1): [started] loading QEMU firmware config module
Jul 11 00:16:42.907402 ignition[770]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:16:42.924931 ignition[770]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:16:42.924982 ignition[770]: QEMU firmware config was not found. Ignoring...
Jul 11 00:16:42.954095 systemd-resolved[227]: Detected conflict on linux IN A 10.0.0.79
Jul 11 00:16:42.954135 systemd-resolved[227]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Jul 11 00:16:42.973614 ignition[770]: parsing config with SHA512: 202a18826937941ce96a2cdc6004be0bfb015037a094963b9235a2fff83c0904c427241300e6494a08aef47c968fcb4c644e83beff7e2706577498519b12fa00
Jul 11 00:16:42.977788 unknown[770]: fetched base config from "system"
Jul 11 00:16:42.977798 unknown[770]: fetched user config from "qemu"
Jul 11 00:16:42.978220 ignition[770]: fetch-offline: fetch-offline passed
Jul 11 00:16:42.978296 ignition[770]: Ignition finished successfully
Jul 11 00:16:42.980742 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:16:42.982575 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:16:42.990430 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:16:43.013142 ignition[786]: Ignition 2.19.0
Jul 11 00:16:43.013157 ignition[786]: Stage: kargs
Jul 11 00:16:43.013344 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:43.013360 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:43.014194 ignition[786]: kargs: kargs passed
Jul 11 00:16:43.014241 ignition[786]: Ignition finished successfully
Jul 11 00:16:43.019089 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:16:43.029511 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:16:43.048932 ignition[794]: Ignition 2.19.0
Jul 11 00:16:43.048947 ignition[794]: Stage: disks
Jul 11 00:16:43.049176 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:43.049192 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:43.050423 ignition[794]: disks: disks passed
Jul 11 00:16:43.053061 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:16:43.050482 ignition[794]: Ignition finished successfully
Jul 11 00:16:43.054784 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:16:43.056600 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:16:43.058689 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:16:43.060737 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:16:43.062926 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:16:43.073343 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:16:43.086305 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:16:43.095466 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:16:43.112266 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:16:43.208125 kernel: EXT4-fs (vda9): mounted filesystem 66ba5133-8c5a-461b-b2c1-a823c72af79b r/w with ordered data mode. Quota mode: none.
Jul 11 00:16:43.208584 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:16:43.210243 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:16:43.221222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:16:43.223579 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:16:43.224869 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:16:43.224921 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:16:43.232999 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Jul 11 00:16:43.224949 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:16:43.238282 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:16:43.238315 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:16:43.238326 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:16:43.233810 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:16:43.240039 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:16:43.242553 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:16:43.244940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:16:43.295897 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:16:43.301980 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:16:43.306698 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:16:43.312291 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:16:43.421552 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:16:43.436331 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:16:43.437630 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:16:43.475159 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:16:43.489846 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:16:43.616363 ignition[930]: INFO : Ignition 2.19.0
Jul 11 00:16:43.616363 ignition[930]: INFO : Stage: mount
Jul 11 00:16:43.620289 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:43.620289 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:43.620289 ignition[930]: INFO : mount: mount passed
Jul 11 00:16:43.620289 ignition[930]: INFO : Ignition finished successfully
Jul 11 00:16:43.620692 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:16:43.621215 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:16:43.629390 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:16:43.638212 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:16:43.668732 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Jul 11 00:16:43.668764 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:16:43.668782 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:16:43.669562 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:16:43.673140 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:16:43.675355 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:16:43.715003 ignition[956]: INFO : Ignition 2.19.0
Jul 11 00:16:43.715003 ignition[956]: INFO : Stage: files
Jul 11 00:16:43.716995 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:43.716995 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:43.716995 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:16:43.720542 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:16:43.720542 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:16:43.720542 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:16:43.724970 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:16:43.724970 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:16:43.724970 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 00:16:43.724970 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 11 00:16:43.721370 unknown[956]: wrote ssh authorized keys file for user: core
Jul 11 00:16:43.768388 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 00:16:43.934445 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 00:16:43.934445 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 00:16:43.938637 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 11 00:16:44.129372 systemd-networkd[773]: eth0: Gained IPv6LL
Jul 11 00:16:44.285944 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 11 00:16:44.440997 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 00:16:44.440997 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:16:44.445255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 11 00:16:45.033652 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 11 00:16:45.633076 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:16:45.633076 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 11 00:16:45.637191 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:16:45.639240 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:16:45.639240 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 11 00:16:45.639240 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 11 00:16:45.639240 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:16:45.639240 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:16:45.639240 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 11 00:16:45.639240 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:16:45.665180 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:16:45.725330 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:16:45.727263 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:16:45.727263 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 00:16:45.730052 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:16:45.731875 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:16:45.734049 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:16:45.767909 ignition[956]: INFO : files: files passed
Jul 11 00:16:45.768854 ignition[956]: INFO : Ignition finished successfully
Jul 11 00:16:45.771719 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 00:16:45.777304 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:16:45.779210 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:16:45.783882 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:16:45.784008 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:16:45.790200 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:16:45.793029 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:16:45.793029 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:16:45.796703 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:16:45.799136 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:16:45.801310 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:16:45.859306 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:16:45.884963 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:16:45.885143 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:16:45.925162 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:16:45.927430 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:16:45.927859 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:16:45.938266 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:16:45.953940 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:16:45.978415 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:16:45.990860 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:16:45.993618 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:16:45.996356 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:16:45.998468 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:16:45.999681 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:16:46.002569 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:16:46.004823 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:16:46.006970 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:16:46.009049 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:16:46.012154 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:16:46.014654 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:16:46.016850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:16:46.019682 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:16:46.038207 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:16:46.040412 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:16:46.042161 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:16:46.043327 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:16:46.045813 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:16:46.048201 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:16:46.050831 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 00:16:46.051905 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:16:46.054923 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 00:16:46.056019 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:16:46.059068 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 00:16:46.060370 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:16:46.063227 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 00:16:46.065312 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 00:16:46.070196 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:16:46.073168 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 00:16:46.075155 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 00:16:46.077315 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 00:16:46.078358 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:16:46.080518 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 00:16:46.081568 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:16:46.083862 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 00:16:46.085134 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:16:46.087672 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 00:16:46.088700 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 00:16:46.108424 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 00:16:46.110515 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 00:16:46.110719 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:16:46.115260 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 00:16:46.117193 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 00:16:46.117399 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:16:46.120892 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 00:16:46.121972 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:16:46.127423 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 00:16:46.127602 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 00:16:46.132245 ignition[1010]: INFO : Ignition 2.19.0
Jul 11 00:16:46.132245 ignition[1010]: INFO : Stage: umount
Jul 11 00:16:46.132245 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:46.132245 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:46.136910 ignition[1010]: INFO : umount: umount passed
Jul 11 00:16:46.136910 ignition[1010]: INFO : Ignition finished successfully
Jul 11 00:16:46.137152 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 00:16:46.137335 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 00:16:46.140094 systemd[1]: Stopped target network.target - Network.
Jul 11 00:16:46.141019 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 00:16:46.141169 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 00:16:46.143946 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 00:16:46.144022 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 00:16:46.144534 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 00:16:46.144619 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 00:16:46.144918 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 00:16:46.144980 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 00:16:46.147471 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 00:16:46.152757 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 00:16:46.154889 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 00:16:46.160676 systemd-networkd[773]: eth0: DHCPv6 lease lost
Jul 11 00:16:46.166243 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 00:16:46.167542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 00:16:46.171948 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 00:16:46.172240 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 00:16:46.177859 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 00:16:46.177961 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:16:46.191463 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 00:16:46.192027 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 00:16:46.192155 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:16:46.192451 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 00:16:46.192510 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:16:46.192808 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 00:16:46.192862 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:16:46.193345 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 00:16:46.193400 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:16:46.193833 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:16:46.208517 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 00:16:46.208708 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 00:16:46.211745 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 00:16:46.212051 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:16:46.214754 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 00:16:46.214852 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:16:46.216329 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 00:16:46.216380 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:16:46.218718 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 00:16:46.218803 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:16:46.221090 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 00:16:46.221180 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:16:46.223005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:16:46.223072 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:16:46.233421 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 00:16:46.235104 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 00:16:46.235218 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:16:46.237428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:16:46.237501 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:46.243832 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 00:16:46.244014 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 00:16:46.488916 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 00:16:46.489108 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 00:16:46.490610 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 00:16:46.492536 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 00:16:46.492630 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 00:16:46.506616 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 00:16:46.517285 systemd[1]: Switching root.
Jul 11 00:16:46.541790 systemd-journald[194]: Journal stopped
Jul 11 00:16:48.338094 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jul 11 00:16:48.338249 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 00:16:48.338272 kernel: SELinux: policy capability open_perms=1
Jul 11 00:16:48.338287 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 00:16:48.338305 kernel: SELinux: policy capability always_check_network=0
Jul 11 00:16:48.338318 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 00:16:48.338332 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 00:16:48.338345 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 00:16:48.338359 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 00:16:48.338373 kernel: audit: type=1403 audit(1752193007.105:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 00:16:48.338389 systemd[1]: Successfully loaded SELinux policy in 56.609ms.
Jul 11 00:16:48.338415 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.100ms.
Jul 11 00:16:48.338440 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:16:48.338456 systemd[1]: Detected virtualization kvm.
Jul 11 00:16:48.338471 systemd[1]: Detected architecture x86-64.
Jul 11 00:16:48.338486 systemd[1]: Detected first boot.
Jul 11 00:16:48.338500 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:16:48.338515 zram_generator::config[1055]: No configuration found.
Jul 11 00:16:48.338544 systemd[1]: Populated /etc with preset unit settings.
Jul 11 00:16:48.338559 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 00:16:48.338574 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 00:16:48.338593 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 00:16:48.338610 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 00:16:48.338628 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 00:16:48.338645 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 00:16:48.338662 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 00:16:48.338679 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 00:16:48.338696 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 00:16:48.338714 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 00:16:48.338736 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 00:16:48.338754 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:16:48.338771 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:16:48.338788 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 00:16:48.338813 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 00:16:48.338831 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 00:16:48.338848 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:16:48.338865 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 11 00:16:48.338881 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:16:48.338902 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 00:16:48.338919 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 00:16:48.338937 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:16:48.338954 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 00:16:48.338970 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:16:48.338987 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:16:48.339004 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:16:48.339021 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:16:48.339077 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 00:16:48.339094 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 00:16:48.339135 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:16:48.339155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:16:48.339172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:16:48.339188 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 00:16:48.339204 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 00:16:48.339221 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 00:16:48.339238 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 00:16:48.339263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:48.339280 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 00:16:48.339297 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 00:16:48.339313 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 00:16:48.339330 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 00:16:48.339347 systemd[1]: Reached target machines.target - Containers.
Jul 11 00:16:48.339364 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 00:16:48.339381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:16:48.339402 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:16:48.339419 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 00:16:48.339435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:16:48.339452 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:16:48.339481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:16:48.339498 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 00:16:48.339515 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:16:48.339544 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 00:16:48.339562 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 00:16:48.339583 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 00:16:48.339600 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 00:16:48.339616 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 00:16:48.339629 kernel: fuse: init (API version 7.39)
Jul 11 00:16:48.339640 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:16:48.339652 kernel: loop: module loaded
Jul 11 00:16:48.339664 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:16:48.339676 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 00:16:48.339697 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 00:16:48.339713 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:16:48.339725 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 00:16:48.339738 systemd[1]: Stopped verity-setup.service.
Jul 11 00:16:48.339750 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:48.339762 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 00:16:48.339774 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 00:16:48.339786 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 00:16:48.339809 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 00:16:48.339825 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 00:16:48.339837 kernel: ACPI: bus type drm_connector registered
Jul 11 00:16:48.339850 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 00:16:48.339872 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:16:48.339887 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 00:16:48.339906 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 00:16:48.339948 systemd-journald[1118]: Collecting audit messages is disabled.
Jul 11 00:16:48.339973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:16:48.339985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:16:48.339998 systemd-journald[1118]: Journal started
Jul 11 00:16:48.340020 systemd-journald[1118]: Runtime Journal (/run/log/journal/985db0cbc18b4b6e848e62d0f7996023) is 6.0M, max 48.4M, 42.3M free.
Jul 11 00:16:47.894674 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 00:16:47.922456 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 00:16:47.923372 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 00:16:48.342288 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:16:48.344188 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:16:48.344462 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:16:48.346001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:16:48.346264 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:16:48.347930 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 00:16:48.348144 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 00:16:48.349742 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:16:48.349974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:16:48.351802 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:16:48.354122 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:16:48.355999 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 00:16:48.370724 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 00:16:48.382247 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 00:16:48.385225 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 00:16:48.386596 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 00:16:48.386628 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:16:48.416601 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 11 00:16:48.419784 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 00:16:48.422194 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 00:16:48.480639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:16:48.502550 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 00:16:48.519559 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 00:16:48.520835 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:16:48.523940 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 00:16:48.524618 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:16:48.535129 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:16:48.570236 systemd-journald[1118]: Time spent on flushing to /var/log/journal/985db0cbc18b4b6e848e62d0f7996023 is 14.539ms for 954 entries.
Jul 11 00:16:48.570236 systemd-journald[1118]: System Journal (/var/log/journal/985db0cbc18b4b6e848e62d0f7996023) is 8.0M, max 195.6M, 187.6M free.
Jul 11 00:16:48.858813 systemd-journald[1118]: Received client request to flush runtime journal.
Jul 11 00:16:48.858878 kernel: loop0: detected capacity change from 0 to 140768
Jul 11 00:16:48.858912 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 00:16:48.858935 kernel: loop1: detected capacity change from 0 to 221472
Jul 11 00:16:48.573420 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 00:16:48.599333 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:16:48.601019 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 00:16:48.627550 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 00:16:48.629244 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 00:16:48.647535 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 11 00:16:48.659899 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 11 00:16:48.701375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:16:48.712676 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 00:16:48.735872 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 00:16:48.739989 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 00:16:48.751179 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 11 00:16:48.755271 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 00:16:48.853545 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 00:16:48.873446 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:16:48.878959 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 00:16:48.979444 kernel: loop2: detected capacity change from 0 to 142488
Jul 11 00:16:49.062524 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jul 11 00:16:49.062561 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jul 11 00:16:49.073642 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:16:49.088164 kernel: loop3: detected capacity change from 0 to 140768
Jul 11 00:16:49.143153 kernel: loop4: detected capacity change from 0 to 221472
Jul 11 00:16:49.150075 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 00:16:49.151019 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 11 00:16:49.165155 kernel: loop5: detected capacity change from 0 to 142488
Jul 11 00:16:49.173780 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 00:16:49.174456 (sd-merge)[1195]: Merged extensions into '/usr'.
Jul 11 00:16:49.184193 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 00:16:49.184210 systemd[1]: Reloading...
Jul 11 00:16:49.269034 zram_generator::config[1222]: No configuration found.
Jul 11 00:16:49.369947 ldconfig[1156]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 00:16:49.406555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:16:49.458643 systemd[1]: Reloading finished in 273 ms.
Jul 11 00:16:49.488959 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 00:16:49.490632 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 00:16:49.504309 systemd[1]: Starting ensure-sysext.service...
Jul 11 00:16:49.507424 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:16:49.525843 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Jul 11 00:16:49.525863 systemd[1]: Reloading...
Jul 11 00:16:49.538760 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 00:16:49.539158 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 00:16:49.540236 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 00:16:49.540559 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jul 11 00:16:49.540641 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jul 11 00:16:49.544851 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:16:49.544865 systemd-tmpfiles[1260]: Skipping /boot
Jul 11 00:16:49.558250 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:16:49.558265 systemd-tmpfiles[1260]: Skipping /boot
Jul 11 00:16:49.608135 zram_generator::config[1290]: No configuration found.
Jul 11 00:16:49.715580 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:16:49.767473 systemd[1]: Reloading finished in 241 ms.
Jul 11 00:16:49.800925 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:16:49.819908 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 11 00:16:49.824354 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 00:16:49.827095 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 00:16:49.832754 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:16:49.836080 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 00:16:49.840745 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:49.840964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:16:49.852473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:16:49.856950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:16:49.860867 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:16:49.862444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:16:49.862658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:49.864300 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 00:16:49.866036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:16:49.866489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:16:49.869578 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:16:49.869840 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:16:49.871894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:16:49.872606 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:16:49.877964 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 00:16:49.884280 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 00:16:49.901932 augenrules[1353]: No rules
Jul 11 00:16:49.903269 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 11 00:16:49.906506 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:49.906709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:16:49.914274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:16:49.926886 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:16:49.929506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:16:49.933105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:16:49.934562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:16:49.937046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:16:49.942330 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 00:16:49.946065 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 00:16:49.947277 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:16:49.948369 systemd[1]: Finished ensure-sysext.service.
Jul 11 00:16:49.949908 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 00:16:49.952475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:16:49.952898 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:16:49.954842 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:16:49.955409 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:16:49.957038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:16:49.964531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:16:49.966686 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:16:49.966929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:16:49.968470 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 00:16:49.977790 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:16:49.977909 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:16:49.981867 systemd-udevd[1364]: Using default interface naming scheme 'v255'.
Jul 11 00:16:49.986305 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 00:16:49.990939 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 00:16:50.006326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:16:50.007752 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 00:16:50.018668 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:16:50.191293 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 11 00:16:50.193240 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 00:16:50.195151 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 00:16:50.227342 systemd-resolved[1329]: Positive Trust Anchors:
Jul 11 00:16:50.227364 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:16:50.227396 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:16:50.231807 systemd-resolved[1329]: Defaulting to hostname 'linux'.
Jul 11 00:16:50.234102 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:16:50.238135 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1384)
Jul 11 00:16:50.235595 systemd-networkd[1386]: lo: Link UP
Jul 11 00:16:50.235611 systemd-networkd[1386]: lo: Gained carrier
Jul 11 00:16:50.235854 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:16:50.237683 systemd-networkd[1386]: Enumeration completed
Jul 11 00:16:50.238529 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:16:50.238961 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:50.239030 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:16:50.240031 systemd[1]: Reached target network.target - Network.
Jul 11 00:16:50.240275 systemd-networkd[1386]: eth0: Link UP
Jul 11 00:16:50.240281 systemd-networkd[1386]: eth0: Gained carrier
Jul 11 00:16:50.240296 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:50.250343 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 00:16:50.251214 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:16:50.255678 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Jul 11 00:16:50.257266 systemd-timesyncd[1374]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 00:16:50.257331 systemd-timesyncd[1374]: Initial clock synchronization to Fri 2025-07-11 00:16:50.040034 UTC.
Jul 11 00:16:50.270154 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 11 00:16:50.278171 kernel: ACPI: button: Power Button [PWRF]
Jul 11 00:16:50.284049 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:50.296492 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:16:50.309862 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jul 11 00:16:50.314450 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 00:16:50.327907 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 11 00:16:50.328430 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 11 00:16:50.328689 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 00:16:50.362330 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 00:16:50.383227 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 00:16:50.441490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:50.459574 kernel: kvm_amd: TSC scaling supported
Jul 11 00:16:50.459663 kernel: kvm_amd: Nested Virtualization enabled
Jul 11 00:16:50.459685 kernel: kvm_amd: Nested Paging enabled
Jul 11 00:16:50.459713 kernel: kvm_amd: LBR virtualization supported
Jul 11 00:16:50.460674 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 11 00:16:50.460709 kernel: kvm_amd: Virtual GIF supported
Jul 11 00:16:50.487138 kernel: EDAC MC: Ver: 3.0.0
Jul 11 00:16:50.532542 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 11 00:16:50.562255 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:50.579567 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 11 00:16:50.596460 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:16:50.636684 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 11 00:16:50.640039 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:16:50.641221 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:16:50.642445 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 00:16:50.643769 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 00:16:50.645347 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 00:16:50.678196 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 00:16:50.679522 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 00:16:50.680785 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 00:16:50.680820 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:16:50.681724 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:16:50.683715 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 00:16:50.686628 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 00:16:50.702890 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 00:16:50.737789 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 11 00:16:50.739426 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 00:16:50.740708 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:16:50.741836 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:16:50.742863 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:16:50.742900 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:16:50.744223 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 00:16:50.748695 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:16:50.790639 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 00:16:50.792837 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 00:16:50.797437 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 00:16:50.816008 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 00:16:50.819516 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 00:16:50.822715 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 00:16:50.825493 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 00:16:50.826142 jq[1432]: false
Jul 11 00:16:50.831150 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 00:16:50.835687 extend-filesystems[1433]: Found loop3
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found loop4
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found loop5
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found sr0
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found vda
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found vda1
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found vda2
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found vda3
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found usr
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found vda4
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found vda6
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found vda7
Jul 11 00:16:50.836689 extend-filesystems[1433]: Found vda9
Jul 11 00:16:50.836689 extend-filesystems[1433]: Checking size of /dev/vda9
Jul 11 00:16:50.841175 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 00:16:50.843408 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 00:16:50.844061 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 11 00:16:50.846995 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 00:16:50.851254 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 00:16:50.854208 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 11 00:16:50.861328 jq[1448]: true
Jul 11 00:16:50.864139 extend-filesystems[1433]: Resized partition /dev/vda9
Jul 11 00:16:50.869750 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024)
Jul 11 00:16:50.880289 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 11 00:16:50.880328 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1390)
Jul 11 00:16:50.875250 dbus-daemon[1431]: [system] SELinux support is enabled
Jul 11 00:16:50.870702 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 00:16:50.870952 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 00:16:50.871365 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 00:16:50.871595 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 00:16:50.876351 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 00:16:50.883637 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 00:16:50.884041 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 00:16:50.890406 update_engine[1447]: I20250711 00:16:50.890285 1447 main.cc:92] Flatcar Update Engine starting
Jul 11 00:16:50.895791 update_engine[1447]: I20250711 00:16:50.895614 1447 update_check_scheduler.cc:74] Next update check in 8m43s
Jul 11 00:16:50.912309 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 11 00:16:50.924430 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 11 00:16:50.946778 jq[1458]: true
Jul 11 00:16:50.949687 systemd[1]: Started update-engine.service - Update Engine.
Jul 11 00:16:50.951985 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 11 00:16:50.952040 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 11 00:16:50.954249 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 11 00:16:50.954313 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 11 00:16:50.956673 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 11 00:16:50.956673 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 11 00:16:50.956673 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 11 00:16:50.964262 extend-filesystems[1433]: Resized filesystem in /dev/vda9
Jul 11 00:16:50.967413 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 11 00:16:50.969543 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 11 00:16:50.969835 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 11 00:16:50.973880 tar[1455]: linux-amd64/helm
Jul 11 00:16:50.976074 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 11 00:16:50.976101 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 11 00:16:50.985420 systemd-logind[1445]: New seat seat0.
Jul 11 00:16:51.005864 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 11 00:16:51.020698 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 11 00:16:51.044304 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
Jul 11 00:16:51.045676 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 11 00:16:51.143187 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 11 00:16:51.162162 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 11 00:16:51.183743 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 11 00:16:51.195487 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 11 00:16:51.210950 systemd[1]: issuegen.service: Deactivated successfully.
Jul 11 00:16:51.211582 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 11 00:16:51.222670 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 11 00:16:51.263437 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 11 00:16:51.283647 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 11 00:16:51.287212 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 11 00:16:51.288616 systemd[1]: Reached target getty.target - Login Prompts.
Jul 11 00:16:51.337538 containerd[1459]: time="2025-07-11T00:16:51.337375404Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 11 00:16:51.367874 containerd[1459]: time="2025-07-11T00:16:51.367803368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:16:51.370798 containerd[1459]: time="2025-07-11T00:16:51.370769461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:16:51.370924 containerd[1459]: time="2025-07-11T00:16:51.370856581Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 11 00:16:51.370924 containerd[1459]: time="2025-07-11T00:16:51.370877332Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 11 00:16:51.371276 containerd[1459]: time="2025-07-11T00:16:51.371247867Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 11 00:16:51.371396 containerd[1459]: time="2025-07-11T00:16:51.371331644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 11 00:16:51.371509 containerd[1459]: time="2025-07-11T00:16:51.371479994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:16:51.371570 containerd[1459]: time="2025-07-11T00:16:51.371556889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:16:51.371857 containerd[1459]: time="2025-07-11T00:16:51.371835450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:16:51.371989 containerd[1459]: time="2025-07-11T00:16:51.371934471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 11 00:16:51.371989 containerd[1459]: time="2025-07-11T00:16:51.371957971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:16:51.371989 containerd[1459]: time="2025-07-11T00:16:51.371968254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 11 00:16:51.372283 containerd[1459]: time="2025-07-11T00:16:51.372223549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:16:51.373322 containerd[1459]: time="2025-07-11T00:16:51.373298136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:16:51.373643 containerd[1459]: time="2025-07-11T00:16:51.373545964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:16:51.373643 containerd[1459]: time="2025-07-11T00:16:51.373600831Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 11 00:16:51.373890 containerd[1459]: time="2025-07-11T00:16:51.373869266Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 11 00:16:51.374054 containerd[1459]: time="2025-07-11T00:16:51.374032744Z" level=info msg="metadata content store policy set" policy=shared
Jul 11 00:16:51.417631 containerd[1459]: time="2025-07-11T00:16:51.417562415Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 11 00:16:51.417631 containerd[1459]: time="2025-07-11T00:16:51.417627915Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 11 00:16:51.417631 containerd[1459]: time="2025-07-11T00:16:51.417645168Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 11 00:16:51.417792 containerd[1459]: time="2025-07-11T00:16:51.417660247Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 11 00:16:51.417792 containerd[1459]: time="2025-07-11T00:16:51.417675666Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 11 00:16:51.417932 containerd[1459]: time="2025-07-11T00:16:51.417894995Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 11 00:16:51.418491 containerd[1459]: time="2025-07-11T00:16:51.418300425Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 11 00:16:51.418727 containerd[1459]: time="2025-07-11T00:16:51.418671486Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 11 00:16:51.418727 containerd[1459]: time="2025-07-11T00:16:51.418706040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 11 00:16:51.418799 containerd[1459]: time="2025-07-11T00:16:51.418727785Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 11 00:16:51.418799 containerd[1459]: time="2025-07-11T00:16:51.418750622Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 11 00:16:51.418799 containerd[1459]: time="2025-07-11T00:16:51.418769483Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 11 00:16:51.418869 containerd[1459]: time="2025-07-11T00:16:51.418801775Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 11 00:16:51.418869 containerd[1459]: time="2025-07-11T00:16:51.418822030Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 11 00:16:51.418869 containerd[1459]: time="2025-07-11T00:16:51.418842830Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 11 00:16:51.418869 containerd[1459]: time="2025-07-11T00:16:51.418861876Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 11 00:16:51.418970 containerd[1459]: time="2025-07-11T00:16:51.418880268Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 11 00:16:51.418970 containerd[1459]: time="2025-07-11T00:16:51.418897755Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 11 00:16:51.418970 containerd[1459]: time="2025-07-11T00:16:51.418933663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.418970 containerd[1459]: time="2025-07-11T00:16:51.418953752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419080 containerd[1459]: time="2025-07-11T00:16:51.418972466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419080 containerd[1459]: time="2025-07-11T00:16:51.418991316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419080 containerd[1459]: time="2025-07-11T00:16:51.419008978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419080 containerd[1459]: time="2025-07-11T00:16:51.419029535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419080 containerd[1459]: time="2025-07-11T00:16:51.419046524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419080 containerd[1459]: time="2025-07-11T00:16:51.419065862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419395 containerd[1459]: time="2025-07-11T00:16:51.419097326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419395 containerd[1459]: time="2025-07-11T00:16:51.419147328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419395 containerd[1459]: time="2025-07-11T00:16:51.419165087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419395 containerd[1459]: time="2025-07-11T00:16:51.419182847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419395 containerd[1459]: time="2025-07-11T00:16:51.419200080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419395 containerd[1459]: time="2025-07-11T00:16:51.419220919Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 11 00:16:51.419395 containerd[1459]: time="2025-07-11T00:16:51.419249672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419395 containerd[1459]: time="2025-07-11T00:16:51.419265902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419395 containerd[1459]: time="2025-07-11T00:16:51.419283203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 11 00:16:51.419395 containerd[1459]: time="2025-07-11T00:16:51.419382662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 11 00:16:51.419675 containerd[1459]: time="2025-07-11T00:16:51.419412819Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 11 00:16:51.419675 containerd[1459]: time="2025-07-11T00:16:51.419432294Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 11 00:16:51.419675 containerd[1459]: time="2025-07-11T00:16:51.419453660Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 11 00:16:51.419675 containerd[1459]: time="2025-07-11T00:16:51.419469957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.419675 containerd[1459]: time="2025-07-11T00:16:51.419528284Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 11 00:16:51.419675 containerd[1459]: time="2025-07-11T00:16:51.419552378Z" level=info msg="NRI interface is disabled by configuration."
Jul 11 00:16:51.419675 containerd[1459]: time="2025-07-11T00:16:51.419566248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 11 00:16:51.420094 containerd[1459]: time="2025-07-11T00:16:51.419997156Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 11 00:16:51.420094 containerd[1459]: time="2025-07-11T00:16:51.420105495Z" level=info msg="Connect containerd service"
Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.420213630Z" level=info msg="using legacy CRI server"
Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.420243202Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.420401241Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.421459617Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.421631585Z" level=info msg="Start subscribing containerd event"
Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.421758814Z" level=info msg="Start recovering state"
Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.421845875Z" level=info msg="Start event monitor"
Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.421869316Z" level=info msg="Start
snapshots syncer" Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.421880262Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.421888615Z" level=info msg="Start streaming server" Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.422211292Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.422301511Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:16:51.438955 containerd[1459]: time="2025-07-11T00:16:51.422396809Z" level=info msg="containerd successfully booted in 0.086239s" Jul 11 00:16:51.422710 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:16:51.554150 tar[1455]: linux-amd64/LICENSE Jul 11 00:16:51.554150 tar[1455]: linux-amd64/README.md Jul 11 00:16:51.571571 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:16:52.257360 systemd-networkd[1386]: eth0: Gained IPv6LL Jul 11 00:16:52.261523 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:16:52.263669 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:16:52.274706 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:16:52.284691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:16:52.287820 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:16:52.310807 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:16:52.311464 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:16:52.313165 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:16:52.314536 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 11 00:16:53.425947 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 11 00:16:53.463365 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:55516.service - OpenSSH per-connection server daemon (10.0.0.1:55516).
Jul 11 00:16:53.517829 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 55516 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:16:53.520463 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:16:53.530332 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 11 00:16:53.542738 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 11 00:16:53.546532 systemd-logind[1445]: New session 1 of user core.
Jul 11 00:16:53.564555 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 11 00:16:53.575367 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 11 00:16:53.582838 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:16:53.721700 systemd[1544]: Queued start job for default target default.target.
Jul 11 00:16:53.737706 systemd[1544]: Created slice app.slice - User Application Slice.
Jul 11 00:16:53.737739 systemd[1544]: Reached target paths.target - Paths.
Jul 11 00:16:53.737753 systemd[1544]: Reached target timers.target - Timers.
Jul 11 00:16:53.739682 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 11 00:16:53.754522 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 11 00:16:53.754692 systemd[1544]: Reached target sockets.target - Sockets.
Jul 11 00:16:53.754708 systemd[1544]: Reached target basic.target - Basic System.
Jul 11 00:16:53.754760 systemd[1544]: Reached target default.target - Main User Target.
Jul 11 00:16:53.754798 systemd[1544]: Startup finished in 153ms.
Jul 11 00:16:53.754982 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 11 00:16:53.760531 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 11 00:16:53.762776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:16:53.765838 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 11 00:16:53.768158 systemd[1]: Startup finished in 1.383s (kernel) + 7.291s (initrd) + 6.718s (userspace) = 15.393s.
Jul 11 00:16:53.797689 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:16:53.829757 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:55526.service - OpenSSH per-connection server daemon (10.0.0.1:55526).
Jul 11 00:16:53.883347 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 55526 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:16:53.885058 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:16:53.890249 systemd-logind[1445]: New session 2 of user core.
Jul 11 00:16:53.904396 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 11 00:16:53.964384 sshd[1565]: pam_unix(sshd:session): session closed for user core
Jul 11 00:16:53.978158 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:55526.service: Deactivated successfully.
Jul 11 00:16:53.980051 systemd[1]: session-2.scope: Deactivated successfully.
Jul 11 00:16:53.981915 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit.
Jul 11 00:16:53.997458 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:55534.service - OpenSSH per-connection server daemon (10.0.0.1:55534).
Jul 11 00:16:53.998912 systemd-logind[1445]: Removed session 2.
Jul 11 00:16:54.030771 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 55534 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:16:54.032643 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:16:54.037319 systemd-logind[1445]: New session 3 of user core.
Jul 11 00:16:54.046263 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 11 00:16:54.098978 sshd[1572]: pam_unix(sshd:session): session closed for user core
Jul 11 00:16:54.111143 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:55534.service: Deactivated successfully.
Jul 11 00:16:54.112862 systemd[1]: session-3.scope: Deactivated successfully.
Jul 11 00:16:54.115018 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit.
Jul 11 00:16:54.116513 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:55546.service - OpenSSH per-connection server daemon (10.0.0.1:55546).
Jul 11 00:16:54.117543 systemd-logind[1445]: Removed session 3.
Jul 11 00:16:54.154377 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 55546 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:16:54.156627 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:16:54.160874 systemd-logind[1445]: New session 4 of user core.
Jul 11 00:16:54.171252 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 11 00:16:54.228528 sshd[1584]: pam_unix(sshd:session): session closed for user core
Jul 11 00:16:54.241772 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:55546.service: Deactivated successfully.
Jul 11 00:16:54.243456 systemd[1]: session-4.scope: Deactivated successfully.
Jul 11 00:16:54.244823 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit.
Jul 11 00:16:54.246067 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:55558.service - OpenSSH per-connection server daemon (10.0.0.1:55558).
Jul 11 00:16:54.246973 systemd-logind[1445]: Removed session 4.
Jul 11 00:16:54.298134 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 55558 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:16:54.299969 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:16:54.304333 systemd-logind[1445]: New session 5 of user core.
Jul 11 00:16:54.315261 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 11 00:16:54.550797 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 11 00:16:54.551312 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:16:54.568970 sudo[1594]: pam_unix(sudo:session): session closed for user root
Jul 11 00:16:54.571231 sshd[1591]: pam_unix(sshd:session): session closed for user core
Jul 11 00:16:54.583314 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:55558.service: Deactivated successfully.
Jul 11 00:16:54.585361 systemd[1]: session-5.scope: Deactivated successfully.
Jul 11 00:16:54.587012 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit.
Jul 11 00:16:54.595526 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:55568.service - OpenSSH per-connection server daemon (10.0.0.1:55568).
Jul 11 00:16:54.598529 systemd-logind[1445]: Removed session 5.
Jul 11 00:16:54.630961 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 55568 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:16:54.642738 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:16:54.647335 systemd-logind[1445]: New session 6 of user core.
Jul 11 00:16:54.663282 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 11 00:16:54.721701 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 11 00:16:54.722046 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:16:54.726241 sudo[1604]: pam_unix(sudo:session): session closed for user root
Jul 11 00:16:54.734518 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 11 00:16:54.734940 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:16:54.762718 kubelet[1556]: E0711 00:16:54.762635 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:16:54.772445 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 11 00:16:54.772865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:16:54.773211 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:16:54.773674 systemd[1]: kubelet.service: Consumed 1.988s CPU time.
Jul 11 00:16:54.775201 auditctl[1607]: No rules
Jul 11 00:16:54.777458 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 00:16:54.777860 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 11 00:16:54.781295 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 11 00:16:54.824874 augenrules[1626]: No rules
Jul 11 00:16:54.827197 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 11 00:16:54.828749 sudo[1603]: pam_unix(sudo:session): session closed for user root
Jul 11 00:16:54.831132 sshd[1600]: pam_unix(sshd:session): session closed for user core
Jul 11 00:16:54.843449 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:55568.service: Deactivated successfully.
Jul 11 00:16:54.845778 systemd[1]: session-6.scope: Deactivated successfully.
Jul 11 00:16:54.847784 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit.
Jul 11 00:16:54.863657 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:55576.service - OpenSSH per-connection server daemon (10.0.0.1:55576).
Jul 11 00:16:54.864842 systemd-logind[1445]: Removed session 6.
Jul 11 00:16:54.893121 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 55576 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:16:54.894745 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:16:54.899692 systemd-logind[1445]: New session 7 of user core.
Jul 11 00:16:54.913281 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 11 00:16:54.969487 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 11 00:16:54.969929 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 11 00:16:55.441718 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 11 00:16:55.442006 (dockerd)[1656]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 11 00:16:56.005290 dockerd[1656]: time="2025-07-11T00:16:56.005196347Z" level=info msg="Starting up"
Jul 11 00:16:57.062141 dockerd[1656]: time="2025-07-11T00:16:57.062014331Z" level=info msg="Loading containers: start."
Jul 11 00:16:57.280137 kernel: Initializing XFRM netlink socket
Jul 11 00:16:57.377932 systemd-networkd[1386]: docker0: Link UP
Jul 11 00:16:57.405216 dockerd[1656]: time="2025-07-11T00:16:57.405095384Z" level=info msg="Loading containers: done."
Jul 11 00:16:57.430154 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2275984178-merged.mount: Deactivated successfully.
Jul 11 00:16:57.433058 dockerd[1656]: time="2025-07-11T00:16:57.432984744Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 11 00:16:57.433302 dockerd[1656]: time="2025-07-11T00:16:57.433264429Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 11 00:16:57.433481 dockerd[1656]: time="2025-07-11T00:16:57.433447501Z" level=info msg="Daemon has completed initialization"
Jul 11 00:16:57.483612 dockerd[1656]: time="2025-07-11T00:16:57.483488040Z" level=info msg="API listen on /run/docker.sock"
Jul 11 00:16:57.483761 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 11 00:16:58.468049 containerd[1459]: time="2025-07-11T00:16:58.467939964Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 11 00:17:00.386941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3137257800.mount: Deactivated successfully.
Jul 11 00:17:01.845644 containerd[1459]: time="2025-07-11T00:17:01.845553184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:01.846377 containerd[1459]: time="2025-07-11T00:17:01.846304991Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jul 11 00:17:01.847690 containerd[1459]: time="2025-07-11T00:17:01.847651896Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:01.851288 containerd[1459]: time="2025-07-11T00:17:01.851248714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:01.852769 containerd[1459]: time="2025-07-11T00:17:01.852741915Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 3.384714722s"
Jul 11 00:17:01.852829 containerd[1459]: time="2025-07-11T00:17:01.852785593Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 11 00:17:01.853799 containerd[1459]: time="2025-07-11T00:17:01.853764329Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 11 00:17:03.800905 containerd[1459]: time="2025-07-11T00:17:03.800756699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:03.801711 containerd[1459]: time="2025-07-11T00:17:03.801660909Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 11 00:17:03.803230 containerd[1459]: time="2025-07-11T00:17:03.803080638Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:03.806538 containerd[1459]: time="2025-07-11T00:17:03.806491006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:03.807709 containerd[1459]: time="2025-07-11T00:17:03.807671101Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.953876627s"
Jul 11 00:17:03.807709 containerd[1459]: time="2025-07-11T00:17:03.807703135Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 11 00:17:03.808317 containerd[1459]: time="2025-07-11T00:17:03.808286112Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 11 00:17:05.023747 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 11 00:17:05.062500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:17:05.407798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:17:05.413162 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:17:05.491406 kubelet[1873]: E0711 00:17:05.490877 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:17:05.498962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:17:05.499237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:17:06.011354 containerd[1459]: time="2025-07-11T00:17:06.011258104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:06.064551 containerd[1459]: time="2025-07-11T00:17:06.064414460Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 11 00:17:06.170055 containerd[1459]: time="2025-07-11T00:17:06.169960245Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:06.197106 containerd[1459]: time="2025-07-11T00:17:06.197017226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:06.198542 containerd[1459]: time="2025-07-11T00:17:06.198479077Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.39015915s"
Jul 11 00:17:06.198542 containerd[1459]: time="2025-07-11T00:17:06.198515513Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 11 00:17:06.199168 containerd[1459]: time="2025-07-11T00:17:06.199136411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 11 00:17:09.134204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247805276.mount: Deactivated successfully.
Jul 11 00:17:10.881698 containerd[1459]: time="2025-07-11T00:17:10.881564828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:10.883448 containerd[1459]: time="2025-07-11T00:17:10.883364972Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943"
Jul 11 00:17:10.885026 containerd[1459]: time="2025-07-11T00:17:10.884992132Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:10.887889 containerd[1459]: time="2025-07-11T00:17:10.887816129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:10.888895 containerd[1459]: time="2025-07-11T00:17:10.888613473Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 4.689440006s"
Jul 11 00:17:10.888895 containerd[1459]: time="2025-07-11T00:17:10.888667119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 11 00:17:10.889585 containerd[1459]: time="2025-07-11T00:17:10.889554258Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 11 00:17:11.839347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055413735.mount: Deactivated successfully.
Jul 11 00:17:15.089887 containerd[1459]: time="2025-07-11T00:17:15.089772010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:15.191062 containerd[1459]: time="2025-07-11T00:17:15.190943298Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 11 00:17:15.218726 containerd[1459]: time="2025-07-11T00:17:15.218622365Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:15.245130 containerd[1459]: time="2025-07-11T00:17:15.244979837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:15.246952 containerd[1459]: time="2025-07-11T00:17:15.246891561Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.357297688s"
Jul 11 00:17:15.246952 containerd[1459]: time="2025-07-11T00:17:15.246952928Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 11 00:17:15.247791 containerd[1459]: time="2025-07-11T00:17:15.247588581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 11 00:17:15.636155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 11 00:17:15.645398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:17:15.843670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:17:15.848898 (kubelet)[1950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:17:15.953754 kubelet[1950]: E0711 00:17:15.953556 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:17:15.958707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:17:15.958923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:17:16.610930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2947216175.mount: Deactivated successfully.
Jul 11 00:17:16.644104 containerd[1459]: time="2025-07-11T00:17:16.643927097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:16.646515 containerd[1459]: time="2025-07-11T00:17:16.646430264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 11 00:17:16.650691 containerd[1459]: time="2025-07-11T00:17:16.650593597Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:16.657903 containerd[1459]: time="2025-07-11T00:17:16.657728935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:16.661231 containerd[1459]: time="2025-07-11T00:17:16.661104068Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.413463193s"
Jul 11 00:17:16.661231 containerd[1459]: time="2025-07-11T00:17:16.661227300Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 11 00:17:16.662131 containerd[1459]: time="2025-07-11T00:17:16.662075866Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 11 00:17:17.567731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702527374.mount: Deactivated successfully.
Jul 11 00:17:22.185630 containerd[1459]: time="2025-07-11T00:17:22.184350247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:22.186636 containerd[1459]: time="2025-07-11T00:17:22.186569782Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Jul 11 00:17:22.189065 containerd[1459]: time="2025-07-11T00:17:22.188989466Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:22.193914 containerd[1459]: time="2025-07-11T00:17:22.193785020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:22.195148 containerd[1459]: time="2025-07-11T00:17:22.195086780Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.532939642s"
Jul 11 00:17:22.195148 containerd[1459]: time="2025-07-11T00:17:22.195147788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 11 00:17:25.947799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:17:25.957628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:17:25.990022 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit session-7.scope)...
Jul 11 00:17:25.990049 systemd[1]: Reloading...
Jul 11 00:17:26.146173 zram_generator::config[2088]: No configuration found.
Jul 11 00:17:26.656340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:17:26.746010 systemd[1]: Reloading finished in 755 ms.
Jul 11 00:17:26.811546 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 11 00:17:26.811679 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 11 00:17:26.812080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:17:26.816313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:17:27.029962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:17:27.035967 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 11 00:17:27.096660 kubelet[2134]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:17:27.096660 kubelet[2134]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 11 00:17:27.096660 kubelet[2134]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:17:27.097203 kubelet[2134]: I0711 00:17:27.096746    2134 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:17:29.250262 kubelet[2134]: I0711 00:17:29.250208    2134 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 11 00:17:29.250262 kubelet[2134]: I0711 00:17:29.250245    2134 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:17:29.250745 kubelet[2134]: I0711 00:17:29.250521    2134 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 11 00:17:29.284955 kubelet[2134]: I0711 00:17:29.283911    2134 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:17:29.285214 kubelet[2134]: E0711 00:17:29.285103    2134 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:29.291374 kubelet[2134]: E0711 00:17:29.291327    2134 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 11 00:17:29.291374 kubelet[2134]: I0711 00:17:29.291366    2134 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 11 00:17:29.299205 kubelet[2134]: I0711 00:17:29.299177    2134 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:17:29.299938 kubelet[2134]: I0711 00:17:29.299911    2134 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 11 00:17:29.300167 kubelet[2134]: I0711 00:17:29.300096    2134 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 00:17:29.300374 kubelet[2134]: I0711 00:17:29.300159    2134 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 11 00:17:29.300487 kubelet[2134]: I0711 00:17:29.300387    2134 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 00:17:29.300487 kubelet[2134]: I0711 00:17:29.300396    2134 container_manager_linux.go:300] "Creating device plugin manager"
Jul 11 00:17:29.300564 kubelet[2134]: I0711 00:17:29.300547    2134 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:17:29.303176 kubelet[2134]: I0711 00:17:29.303096    2134 kubelet.go:408] "Attempting to sync node with API server"
Jul 11 00:17:29.303176 kubelet[2134]: I0711 00:17:29.303149    2134 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 00:17:29.303258 kubelet[2134]: I0711 00:17:29.303195    2134 kubelet.go:314] "Adding apiserver pod source"
Jul 11 00:17:29.303258 kubelet[2134]: I0711 00:17:29.303217    2134 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 00:17:29.309420 kubelet[2134]: W0711 00:17:29.309260    2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:29.309420 kubelet[2134]: I0711 00:17:29.309293    2134 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 11 00:17:29.309420 kubelet[2134]: E0711 00:17:29.309339    2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:29.309847 kubelet[2134]: I0711 00:17:29.309816    2134 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 11 00:17:29.309953 kubelet[2134]: W0711 00:17:29.309920    2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:29.309997 kubelet[2134]: E0711 00:17:29.309955    2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:29.310398 kubelet[2134]: W0711 00:17:29.310364    2134 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 11 00:17:29.312354 kubelet[2134]: I0711 00:17:29.312322    2134 server.go:1274] "Started kubelet"
Jul 11 00:17:29.318141 kubelet[2134]: I0711 00:17:29.315957    2134 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:17:29.318141 kubelet[2134]: I0711 00:17:29.316462    2134 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:17:29.318141 kubelet[2134]: I0711 00:17:29.317572    2134 server.go:449] "Adding debug handlers to kubelet server"
Jul 11 00:17:29.320776 kubelet[2134]: I0711 00:17:29.320312    2134 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:17:29.320776 kubelet[2134]: I0711 00:17:29.320699    2134 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 11 00:17:29.321173 kubelet[2134]: E0711 00:17:29.321143    2134 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:29.321173 kubelet[2134]: I0711 00:17:29.321096    2134 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:17:29.321488 kubelet[2134]: I0711 00:17:29.321459    2134 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:17:29.322011 kubelet[2134]: I0711 00:17:29.321983    2134 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 11 00:17:29.322134 kubelet[2134]: I0711 00:17:29.322100    2134 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:17:29.322267 kubelet[2134]: E0711 00:17:29.322224    2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms"
Jul 11 00:17:29.322572 kubelet[2134]: W0711 00:17:29.322529    2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:29.322704 kubelet[2134]: E0711 00:17:29.322680    2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:29.324955 kubelet[2134]: E0711 00:17:29.323565    2134 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a4b7a2afc29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:17:29.312295977 +0000 UTC m=+2.269163293,LastTimestamp:2025-07-11 00:17:29.312295977 +0000 UTC m=+2.269163293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:17:29.326370 kubelet[2134]: I0711 00:17:29.326346    2134 factory.go:221] Registration of the containerd container factory successfully
Jul 11 00:17:29.326370 kubelet[2134]: I0711 00:17:29.326364    2134 factory.go:221] Registration of the systemd container factory successfully
Jul 11 00:17:29.326498 kubelet[2134]: I0711 00:17:29.326429    2134 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:17:29.326700 kubelet[2134]: E0711 00:17:29.326669    2134 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 00:17:29.339950 kubelet[2134]: I0711 00:17:29.339909    2134 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 11 00:17:29.339950 kubelet[2134]: I0711 00:17:29.339935    2134 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 11 00:17:29.339950 kubelet[2134]: I0711 00:17:29.339958    2134 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:17:29.422375 kubelet[2134]: E0711 00:17:29.422315    2134 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:29.523202 kubelet[2134]: E0711 00:17:29.522993    2134 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:29.523500 kubelet[2134]: E0711 00:17:29.523441    2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms"
Jul 11 00:17:29.529401 kubelet[2134]: I0711 00:17:29.529330    2134 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:17:29.531314 kubelet[2134]: I0711 00:17:29.531278    2134 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:17:29.531398 kubelet[2134]: I0711 00:17:29.531330    2134 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 11 00:17:29.531398 kubelet[2134]: I0711 00:17:29.531365    2134 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 11 00:17:29.532331 kubelet[2134]: E0711 00:17:29.531430    2134 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:17:29.532856 kubelet[2134]: W0711 00:17:29.532828    2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:29.532975 kubelet[2134]: E0711 00:17:29.532955    2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:29.623750 kubelet[2134]: E0711 00:17:29.623668    2134 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:29.624814 kubelet[2134]: I0711 00:17:29.624776    2134 policy_none.go:49] "None policy: Start"
Jul 11 00:17:29.625787 kubelet[2134]: I0711 00:17:29.625753    2134 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 11 00:17:29.625868 kubelet[2134]: I0711 00:17:29.625794    2134 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:17:29.632009 kubelet[2134]: E0711 00:17:29.631975    2134 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 11 00:17:29.720638 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
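The HardEvictionThresholds dump in the nodeConfig above mixes one quantity threshold (memory.available < 100Mi) with percentage thresholds (e.g. nodefs.available < 10% of capacity). A rough model of how such a check evaluates, using made-up signal values; this sketches the semantics of the logged thresholds, not kubelet's actual eviction code:

```python
# Rough model of the hard-eviction thresholds logged above. Percentage
# thresholds compare "available" against a fraction of capacity; the
# quantity threshold (100Mi for memory.available) compares absolute bytes.
MI = 1024 * 1024

THRESHOLDS = {
    "memory.available":  ("quantity", 100 * MI),   # 100Mi, from the log
    "imagefs.available": ("percentage", 0.15),     # 15%, from the log
    "nodefs.available":  ("percentage", 0.10),     # 10%, from the log
    "nodefs.inodesFree": ("percentage", 0.05),     # 5%, from the log
}

def under_hard_threshold(signal: str, available: float, capacity: float) -> bool:
    """True when the signal is below its hard-eviction threshold."""
    kind, limit = THRESHOLDS[signal]
    if kind == "quantity":
        return available < limit
    return available < limit * capacity

# 60Mi of free memory is below the 100Mi hard threshold -> pressure.
print(under_hard_threshold("memory.available", 60 * MI, 8192 * MI))  # True
```

With the GracePeriod fields all 0 in the dump, crossing any of these thresholds would trigger hard eviction immediately rather than after a soft-eviction grace period.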
Jul 11 00:17:29.724680 kubelet[2134]: E0711 00:17:29.724644    2134 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:29.736105 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 11 00:17:29.741835 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 11 00:17:29.753236 kubelet[2134]: I0711 00:17:29.753130    2134 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 11 00:17:29.755262 kubelet[2134]: I0711 00:17:29.753456    2134 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 00:17:29.755262 kubelet[2134]: I0711 00:17:29.753534    2134 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 00:17:29.755262 kubelet[2134]: I0711 00:17:29.753803    2134 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 00:17:29.755262 kubelet[2134]: E0711 00:17:29.755136    2134 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 11 00:17:29.845369 systemd[1]: Created slice kubepods-burstable-podea55c58640effd1c0fe810dbb3558152.slice - libcontainer container kubepods-burstable-podea55c58640effd1c0fe810dbb3558152.slice.
Jul 11 00:17:29.856502 kubelet[2134]: I0711 00:17:29.856424    2134 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:17:29.857081 kubelet[2134]: E0711 00:17:29.857006    2134 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 11 00:17:29.864435 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice.
Jul 11 00:17:29.868715 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice.
Jul 11 00:17:29.924682 kubelet[2134]: E0711 00:17:29.924619    2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms"
Jul 11 00:17:29.925743 kubelet[2134]: I0711 00:17:29.925685    2134 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:29.925800 kubelet[2134]: I0711 00:17:29.925744    2134 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 11 00:17:29.925800 kubelet[2134]: I0711 00:17:29.925778    2134 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea55c58640effd1c0fe810dbb3558152-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea55c58640effd1c0fe810dbb3558152\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:29.925862 kubelet[2134]: I0711 00:17:29.925807    2134 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea55c58640effd1c0fe810dbb3558152-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ea55c58640effd1c0fe810dbb3558152\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:29.925862 kubelet[2134]: I0711 00:17:29.925831    2134 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:29.925862 kubelet[2134]: I0711 00:17:29.925852    2134 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:29.925972 kubelet[2134]: I0711 00:17:29.925876    2134 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea55c58640effd1c0fe810dbb3558152-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea55c58640effd1c0fe810dbb3558152\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:29.925972 kubelet[2134]: I0711 00:17:29.925925    2134 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:29.925972 kubelet[2134]: I0711 00:17:29.925963    2134 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:30.060039 kubelet[2134]: I0711 00:17:30.059980    2134 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:17:30.060677 kubelet[2134]: E0711 00:17:30.060565    2134 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 11 00:17:30.162332 kubelet[2134]: E0711 00:17:30.162275    2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:30.163378 containerd[1459]: time="2025-07-11T00:17:30.163303829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ea55c58640effd1c0fe810dbb3558152,Namespace:kube-system,Attempt:0,}"
Jul 11 00:17:30.167364 kubelet[2134]: E0711 00:17:30.167302    2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:30.168056 containerd[1459]: time="2025-07-11T00:17:30.168014647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}"
Jul 11 00:17:30.172437 kubelet[2134]: E0711 00:17:30.172313    2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:30.173095 containerd[1459]: time="2025-07-11T00:17:30.173040017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}"
Jul 11 00:17:30.463309 kubelet[2134]: I0711 00:17:30.463166    2134 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:17:30.463872 kubelet[2134]: E0711 00:17:30.463826    2134 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 11 00:17:30.567561 kubelet[2134]: W0711 00:17:30.567438    2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:30.567561 kubelet[2134]: E0711 00:17:30.567559    2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:30.654250 kubelet[2134]: W0711 00:17:30.654163    2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:30.654250 kubelet[2134]: E0711 00:17:30.654250    2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:30.662687 kubelet[2134]: W0711 00:17:30.662446    2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:30.662687 kubelet[2134]: E0711 00:17:30.662685    2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:30.666791 kubelet[2134]: W0711 00:17:30.666712    2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:30.666791 kubelet[2134]: E0711 00:17:30.666783    2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:30.725708 kubelet[2134]: E0711 00:17:30.725360    2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="1.6s"
Jul 11 00:17:31.266206 kubelet[2134]: I0711 00:17:31.266159    2134 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:17:31.266754 kubelet[2134]: E0711 00:17:31.266674    2134 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 11 00:17:31.399280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3517629811.mount: Deactivated successfully.
Jul 11 00:17:31.411310 containerd[1459]: time="2025-07-11T00:17:31.411137719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:17:31.419304 containerd[1459]: time="2025-07-11T00:17:31.418063111Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jul 11 00:17:31.420418 containerd[1459]: time="2025-07-11T00:17:31.420356222Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:17:31.422196 containerd[1459]: time="2025-07-11T00:17:31.422013372Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:17:31.424781 containerd[1459]: time="2025-07-11T00:17:31.424662271Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
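The lease-retry intervals reported while the apiserver is unreachable double on each failure: 200ms, then 400ms, 800ms, and now 1.6s. A minimal sketch of that doubling backoff; the cap value is an assumption for illustration, not taken from the log:

```python
# Doubling retry backoff matching the lease-retry intervals in the log
# (200ms -> 400ms -> 800ms -> 1.6s). The 7-second cap is an assumed
# upper bound, not observed in this log excerpt.
def backoff_intervals(base: float = 0.2, factor: float = 2.0,
                      cap: float = 7.0, attempts: int = 4) -> list[float]:
    out, current = [], base
    for _ in range(attempts):
        out.append(min(current, cap))
        current *= factor
    return out

print(backoff_intervals())  # [0.2, 0.4, 0.8, 1.6]
```

Capped exponential backoff like this keeps a crash-looping control plane from being hammered with retries while still reconnecting quickly once the apiserver comes up.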
Jul 11 00:17:31.425836 containerd[1459]: time="2025-07-11T00:17:31.425787777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 11 00:17:31.427264 containerd[1459]: time="2025-07-11T00:17:31.427133623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 11 00:17:31.429273 containerd[1459]: time="2025-07-11T00:17:31.429168552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:17:31.432909 containerd[1459]: time="2025-07-11T00:17:31.432052559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.268646044s"
Jul 11 00:17:31.434630 containerd[1459]: time="2025-07-11T00:17:31.434593584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.261413761s"
Jul 11 00:17:31.436453 containerd[1459]: time="2025-07-11T00:17:31.436420857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.268272344s"
Jul 11 00:17:31.453606 kubelet[2134]: E0711 00:17:31.453528    2134 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:32.035147 containerd[1459]: time="2025-07-11T00:17:32.034788808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:17:32.035147 containerd[1459]: time="2025-07-11T00:17:32.034864502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:17:32.035147 containerd[1459]: time="2025-07-11T00:17:32.034894279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:32.035147 containerd[1459]: time="2025-07-11T00:17:32.034993127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:32.035147 containerd[1459]: time="2025-07-11T00:17:32.035010750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:17:32.035668 containerd[1459]: time="2025-07-11T00:17:32.035165546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:17:32.035668 containerd[1459]: time="2025-07-11T00:17:32.035224368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:32.035668 containerd[1459]: time="2025-07-11T00:17:32.035345638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:32.037952 containerd[1459]: time="2025-07-11T00:17:32.037428667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:17:32.037952 containerd[1459]: time="2025-07-11T00:17:32.037479884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:17:32.037952 containerd[1459]: time="2025-07-11T00:17:32.037494212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:32.037952 containerd[1459]: time="2025-07-11T00:17:32.037674324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:32.067407 systemd[1]: Started cri-containerd-28181c0d042e241ea4fe0d5fa63e0cd2060c5410f9986e05f5fccbd1976e7a73.scope - libcontainer container 28181c0d042e241ea4fe0d5fa63e0cd2060c5410f9986e05f5fccbd1976e7a73.
Jul 11 00:17:32.069641 systemd[1]: Started cri-containerd-fe317f97ed88f9cff8cb62a0ff6dbf35dc20afb2ff0821683c626cd82e9d15a9.scope - libcontainer container fe317f97ed88f9cff8cb62a0ff6dbf35dc20afb2ff0821683c626cd82e9d15a9.
Jul 11 00:17:32.076520 systemd[1]: Started cri-containerd-9a793f458c274c8126438979cfee0b0a1d5c5ca9afa9d84f9a68e8d559fa2b80.scope - libcontainer container 9a793f458c274c8126438979cfee0b0a1d5c5ca9afa9d84f9a68e8d559fa2b80.
Jul 11 00:17:32.123908 containerd[1459]: time="2025-07-11T00:17:32.123851960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe317f97ed88f9cff8cb62a0ff6dbf35dc20afb2ff0821683c626cd82e9d15a9\""
Jul 11 00:17:32.126773 kubelet[2134]: E0711 00:17:32.126744 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:32.127546 containerd[1459]: time="2025-07-11T00:17:32.127255283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"28181c0d042e241ea4fe0d5fa63e0cd2060c5410f9986e05f5fccbd1976e7a73\""
Jul 11 00:17:32.128379 kubelet[2134]: E0711 00:17:32.128358 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:32.129720 containerd[1459]: time="2025-07-11T00:17:32.129684741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ea55c58640effd1c0fe810dbb3558152,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a793f458c274c8126438979cfee0b0a1d5c5ca9afa9d84f9a68e8d559fa2b80\""
Jul 11 00:17:32.130619 kubelet[2134]: E0711 00:17:32.130600 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:32.130792 containerd[1459]: time="2025-07-11T00:17:32.130749939Z" level=info msg="CreateContainer within sandbox \"fe317f97ed88f9cff8cb62a0ff6dbf35dc20afb2ff0821683c626cd82e9d15a9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 11 00:17:32.131472 containerd[1459]: time="2025-07-11T00:17:32.131409556Z" level=info msg="CreateContainer within sandbox \"28181c0d042e241ea4fe0d5fa63e0cd2060c5410f9986e05f5fccbd1976e7a73\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 11 00:17:32.133035 containerd[1459]: time="2025-07-11T00:17:32.133004434Z" level=info msg="CreateContainer within sandbox \"9a793f458c274c8126438979cfee0b0a1d5c5ca9afa9d84f9a68e8d559fa2b80\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 11 00:17:32.277769 kubelet[2134]: W0711 00:17:32.277671 2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:32.277769 kubelet[2134]: E0711 00:17:32.277768 2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:32.292199 kubelet[2134]: W0711 00:17:32.291968 2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:32.292199 kubelet[2134]: E0711 00:17:32.292065 2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:32.326761 kubelet[2134]: E0711 00:17:32.326681 2134 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="3.2s"
Jul 11 00:17:32.414766 kubelet[2134]: E0711 00:17:32.414595 2134 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a4b7a2afc29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:17:29.312295977 +0000 UTC m=+2.269163293,LastTimestamp:2025-07-11 00:17:29.312295977 +0000 UTC m=+2.269163293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:17:32.869856 kubelet[2134]: I0711 00:17:32.869774 2134 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:17:32.870517 kubelet[2134]: E0711 00:17:32.870429 2134 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost"
Jul 11 00:17:32.875337 kubelet[2134]: W0711 00:17:32.875225 2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:32.875548 kubelet[2134]: E0711 00:17:32.875338 2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:33.074474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739160669.mount: Deactivated successfully.
Jul 11 00:17:33.099724 containerd[1459]: time="2025-07-11T00:17:33.099446283Z" level=info msg="CreateContainer within sandbox \"28181c0d042e241ea4fe0d5fa63e0cd2060c5410f9986e05f5fccbd1976e7a73\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"93291e3be6813b3bbe3af6ac9fa17bb5b2b1e4dd01b4a75700ecebabdb06bd78\""
Jul 11 00:17:33.100548 containerd[1459]: time="2025-07-11T00:17:33.100438391Z" level=info msg="StartContainer for \"93291e3be6813b3bbe3af6ac9fa17bb5b2b1e4dd01b4a75700ecebabdb06bd78\""
Jul 11 00:17:33.102710 containerd[1459]: time="2025-07-11T00:17:33.102643027Z" level=info msg="CreateContainer within sandbox \"fe317f97ed88f9cff8cb62a0ff6dbf35dc20afb2ff0821683c626cd82e9d15a9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ced72440fcd8f6f4fe344c0f731b673377265795f384682dc766590049fd824\""
Jul 11 00:17:33.103522 containerd[1459]: time="2025-07-11T00:17:33.103136106Z" level=info msg="StartContainer for \"5ced72440fcd8f6f4fe344c0f731b673377265795f384682dc766590049fd824\""
Jul 11 00:17:33.111551 containerd[1459]: time="2025-07-11T00:17:33.111454748Z" level=info msg="CreateContainer within sandbox \"9a793f458c274c8126438979cfee0b0a1d5c5ca9afa9d84f9a68e8d559fa2b80\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d48bf62df6d1c4456a28c9762322844c3ca92b1b518eada46097dea6cbe955fe\""
Jul 11 00:17:33.112489 containerd[1459]: time="2025-07-11T00:17:33.112267475Z" level=info msg="StartContainer for \"d48bf62df6d1c4456a28c9762322844c3ca92b1b518eada46097dea6cbe955fe\""
Jul 11 00:17:33.139569 systemd[1]: Started cri-containerd-5ced72440fcd8f6f4fe344c0f731b673377265795f384682dc766590049fd824.scope - libcontainer container 5ced72440fcd8f6f4fe344c0f731b673377265795f384682dc766590049fd824.
Jul 11 00:17:33.151689 systemd[1]: Started cri-containerd-93291e3be6813b3bbe3af6ac9fa17bb5b2b1e4dd01b4a75700ecebabdb06bd78.scope - libcontainer container 93291e3be6813b3bbe3af6ac9fa17bb5b2b1e4dd01b4a75700ecebabdb06bd78.
Jul 11 00:17:33.156830 systemd[1]: Started cri-containerd-d48bf62df6d1c4456a28c9762322844c3ca92b1b518eada46097dea6cbe955fe.scope - libcontainer container d48bf62df6d1c4456a28c9762322844c3ca92b1b518eada46097dea6cbe955fe.
Jul 11 00:17:33.223587 kubelet[2134]: W0711 00:17:33.223042 2134 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
Jul 11 00:17:33.223587 kubelet[2134]: E0711 00:17:33.223155 2134 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:33.265478 containerd[1459]: time="2025-07-11T00:17:33.265398100Z" level=info msg="StartContainer for \"5ced72440fcd8f6f4fe344c0f731b673377265795f384682dc766590049fd824\" returns successfully"
Jul 11 00:17:33.271942 containerd[1459]: time="2025-07-11T00:17:33.271829921Z" level=info msg="StartContainer for \"d48bf62df6d1c4456a28c9762322844c3ca92b1b518eada46097dea6cbe955fe\" returns successfully"
Jul 11 00:17:33.272211 containerd[1459]: time="2025-07-11T00:17:33.271829951Z" level=info msg="StartContainer for \"93291e3be6813b3bbe3af6ac9fa17bb5b2b1e4dd01b4a75700ecebabdb06bd78\" returns successfully"
Jul 11 00:17:33.551867 kubelet[2134]: E0711 00:17:33.551179 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:33.551867 kubelet[2134]: E0711 00:17:33.551578 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:33.557486 kubelet[2134]: E0711 00:17:33.557252 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:34.645257 kubelet[2134]: E0711 00:17:34.645214 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:34.645788 kubelet[2134]: E0711 00:17:34.645349 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:34.645788 kubelet[2134]: E0711 00:17:34.645456 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:35.647341 kubelet[2134]: E0711 00:17:35.647288 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:35.686720 kubelet[2134]: E0711 00:17:35.686676 2134 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 11 00:17:35.687659 kubelet[2134]: E0711 00:17:35.687623 2134 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jul 11 00:17:36.074384 kubelet[2134]: I0711 00:17:36.073499 2134 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:17:36.111768 kubelet[2134]: I0711 00:17:36.111709 2134 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 11 00:17:36.111768 kubelet[2134]: E0711 00:17:36.111768 2134 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 11 00:17:36.308133 kubelet[2134]: I0711 00:17:36.308062 2134 apiserver.go:52] "Watching apiserver"
Jul 11 00:17:36.322372 kubelet[2134]: I0711 00:17:36.322277 2134 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 11 00:17:36.380290 update_engine[1447]: I20250711 00:17:36.378716 1447 update_attempter.cc:509] Updating boot flags...
Jul 11 00:17:36.435449 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2424)
Jul 11 00:17:36.503151 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2425)
Jul 11 00:17:37.998461 kubelet[2134]: E0711 00:17:37.998401 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:38.250070 kubelet[2134]: E0711 00:17:38.249901 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:38.575622 systemd[1]: Reloading requested from client PID 2432 ('systemctl') (unit session-7.scope)...
Jul 11 00:17:38.575640 systemd[1]: Reloading...
Jul 11 00:17:38.633258 kubelet[2134]: E0711 00:17:38.633185 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:38.652818 kubelet[2134]: E0711 00:17:38.652094 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:38.654034 kubelet[2134]: E0711 00:17:38.653521 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:38.654132 kubelet[2134]: E0711 00:17:38.653791 2134 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:38.674187 zram_generator::config[2471]: No configuration found.
Jul 11 00:17:38.803371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:17:38.917447 systemd[1]: Reloading finished in 341 ms.
Jul 11 00:17:38.977382 kubelet[2134]: I0711 00:17:38.977074 2134 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:17:38.977200 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:17:38.994440 systemd[1]: kubelet.service: Deactivated successfully.
Jul 11 00:17:38.994901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:17:39.012881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:17:39.220136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:17:39.227760 (kubelet)[2516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 11 00:17:39.278930 kubelet[2516]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:17:39.278930 kubelet[2516]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 11 00:17:39.278930 kubelet[2516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:17:39.280450 kubelet[2516]: I0711 00:17:39.279003 2516 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:17:39.306073 kubelet[2516]: I0711 00:17:39.306014 2516 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 11 00:17:39.306073 kubelet[2516]: I0711 00:17:39.306057 2516 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:17:39.306399 kubelet[2516]: I0711 00:17:39.306374 2516 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 11 00:17:39.307743 kubelet[2516]: I0711 00:17:39.307709 2516 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 11 00:17:39.310561 kubelet[2516]: I0711 00:17:39.310515 2516 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:17:39.314534 kubelet[2516]: E0711 00:17:39.314500 2516 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 11 00:17:39.314534 kubelet[2516]: I0711 00:17:39.314523 2516 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 11 00:17:39.321334 kubelet[2516]: I0711 00:17:39.321300 2516 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:17:39.321579 kubelet[2516]: I0711 00:17:39.321551 2516 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 11 00:17:39.321801 kubelet[2516]: I0711 00:17:39.321733 2516 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 00:17:39.321988 kubelet[2516]: I0711 00:17:39.321779 2516 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 11 00:17:39.322095 kubelet[2516]: I0711 00:17:39.321995 2516 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 00:17:39.322095 kubelet[2516]: I0711 00:17:39.322007 2516 container_manager_linux.go:300] "Creating device plugin manager"
Jul 11 00:17:39.322095 kubelet[2516]: I0711 00:17:39.322046 2516 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:17:39.322244 kubelet[2516]: I0711 00:17:39.322225 2516 kubelet.go:408] "Attempting to sync node with API server"
Jul 11 00:17:39.322244 kubelet[2516]: I0711 00:17:39.322242 2516 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 00:17:39.322338 kubelet[2516]: I0711 00:17:39.322322 2516 kubelet.go:314] "Adding apiserver pod source"
Jul 11 00:17:39.322338 kubelet[2516]: I0711 00:17:39.322337 2516 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 00:17:39.324838 kubelet[2516]: I0711 00:17:39.324701 2516 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 11 00:17:39.325391 kubelet[2516]: I0711 00:17:39.325349 2516 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 11 00:17:39.326200 kubelet[2516]: I0711 00:17:39.326175 2516 server.go:1274] "Started kubelet"
Jul 11 00:17:39.330559 kubelet[2516]: I0711 00:17:39.328763 2516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:17:39.330559 kubelet[2516]: I0711 00:17:39.329278 2516 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:17:39.330977 kubelet[2516]: I0711 00:17:39.330951 2516 server.go:449] "Adding debug handlers to kubelet server"
Jul 11 00:17:39.334308 kubelet[2516]: E0711 00:17:39.334215 2516 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:39.335622 kubelet[2516]: I0711 00:17:39.335577 2516 factory.go:221] Registration of the systemd container factory successfully
Jul 11 00:17:39.337562 kubelet[2516]: I0711 00:17:39.337215 2516 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:17:39.337562 kubelet[2516]: I0711 00:17:39.337485 2516 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:17:39.337752 kubelet[2516]: I0711 00:17:39.337702 2516 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:17:39.338924 kubelet[2516]: I0711 00:17:39.338891 2516 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:17:39.342160 kubelet[2516]: I0711 00:17:39.339344 2516 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 11 00:17:39.342160 kubelet[2516]: I0711 00:17:39.340629 2516 factory.go:221] Registration of the containerd container factory successfully
Jul 11 00:17:39.342426 kubelet[2516]: I0711 00:17:39.342385 2516 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 11 00:17:39.342639 kubelet[2516]: I0711 00:17:39.342601 2516 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:17:39.352397 kubelet[2516]: I0711 00:17:39.352310 2516 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:17:39.354012 kubelet[2516]: I0711 00:17:39.353973 2516 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:17:39.354012 kubelet[2516]: I0711 00:17:39.354002 2516 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 11 00:17:39.354099 kubelet[2516]: I0711 00:17:39.354032 2516 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 11 00:17:39.354166 kubelet[2516]: E0711 00:17:39.354093 2516 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:17:39.385428 kubelet[2516]: I0711 00:17:39.385392 2516 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 11 00:17:39.385428 kubelet[2516]: I0711 00:17:39.385414 2516 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 11 00:17:39.385428 kubelet[2516]: I0711 00:17:39.385443 2516 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:17:39.385642 kubelet[2516]: I0711 00:17:39.385632 2516 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 11 00:17:39.385669 kubelet[2516]: I0711 00:17:39.385644 2516 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 11 00:17:39.385694 kubelet[2516]: I0711 00:17:39.385677 2516 policy_none.go:49] "None policy: Start"
Jul 11 00:17:39.386614 kubelet[2516]: I0711 00:17:39.386580 2516 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 11 00:17:39.386614 kubelet[2516]: I0711 00:17:39.386612 2516 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:17:39.386814 kubelet[2516]: I0711 00:17:39.386773 2516 state_mem.go:75] "Updated machine memory state"
Jul 11 00:17:39.391242 kubelet[2516]: I0711 00:17:39.391098 2516 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 11 00:17:39.391334 kubelet[2516]: I0711 00:17:39.391322 2516 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 00:17:39.391496 kubelet[2516]: I0711 00:17:39.391460 2516 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 00:17:39.391708 kubelet[2516]: I0711 00:17:39.391684 2516 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 00:17:39.498954 kubelet[2516]: I0711 00:17:39.497635 2516 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:17:39.643721 kubelet[2516]: I0711 00:17:39.643629 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:39.643721 kubelet[2516]: I0711 00:17:39.643707 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 11 00:17:39.644058 kubelet[2516]: I0711 00:17:39.643741 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea55c58640effd1c0fe810dbb3558152-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea55c58640effd1c0fe810dbb3558152\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:39.644058 kubelet[2516]: I0711 00:17:39.643767 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea55c58640effd1c0fe810dbb3558152-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ea55c58640effd1c0fe810dbb3558152\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:39.644058 kubelet[2516]: I0711 00:17:39.643794 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:39.644058 kubelet[2516]: I0711 00:17:39.643815 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:39.644058 kubelet[2516]: I0711 00:17:39.643836 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:39.644195 kubelet[2516]: I0711 00:17:39.643860 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea55c58640effd1c0fe810dbb3558152-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ea55c58640effd1c0fe810dbb3558152\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:39.644195 kubelet[2516]: I0711 00:17:39.644001 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:39.719799 kubelet[2516]: E0711 00:17:39.719715 2516 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:17:39.721130 kubelet[2516]: E0711 00:17:39.721089 2516 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:39.721351 kubelet[2516]: E0711 00:17:39.721102 2516 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:39.743069 sudo[2553]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 11 00:17:39.743589 sudo[2553]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 11 00:17:39.777962 kubelet[2516]: I0711 00:17:39.777738 2516 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 11 00:17:39.777962 kubelet[2516]: I0711 00:17:39.777862 2516 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 11 00:17:40.020931 kubelet[2516]: E0711 00:17:40.020872 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:40.022191 kubelet[2516]: E0711 00:17:40.021848 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:40.022191 kubelet[2516]: E0711 00:17:40.022048 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:40.240580 sudo[2553]: pam_unix(sudo:session): session closed for user root
Jul 11 00:17:40.322843 kubelet[2516]: I0711 00:17:40.322763 2516 apiserver.go:52] "Watching apiserver"
Jul 11 00:17:40.343296 kubelet[2516]: I0711 00:17:40.343194 2516 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 11 00:17:40.368254 kubelet[2516]: E0711 00:17:40.367403 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:40.368254 kubelet[2516]: E0711 00:17:40.368132 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:40.369177 kubelet[2516]: E0711 00:17:40.368410 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:40.375818 kubelet[2516]: I0711 00:17:40.375740 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.37571799 podStartE2EDuration="2.37571799s" podCreationTimestamp="2025-07-11 00:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:40.363562815 +0000 UTC m=+1.129509690" watchObservedRunningTime="2025-07-11 00:17:40.37571799 +0000 UTC m=+1.141664855"
Jul 11 00:17:40.395506 kubelet[2516]: I0711 00:17:40.395416 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.395356436 podStartE2EDuration="2.395356436s" podCreationTimestamp="2025-07-11 00:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:40.393975088 +0000 UTC m=+1.159921973" watchObservedRunningTime="2025-07-11 00:17:40.395356436 +0000 UTC m=+1.161303301"
Jul 11 00:17:40.395848 kubelet[2516]: I0711 00:17:40.395579 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.395572926 podStartE2EDuration="3.395572926s" podCreationTimestamp="2025-07-11 00:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:40.375717479 +0000 UTC m=+1.141664354" watchObservedRunningTime="2025-07-11 00:17:40.395572926 +0000 UTC m=+1.161519801"
Jul 11 00:17:41.368338 kubelet[2516]: E0711 00:17:41.368302 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:41.860012 sudo[1637]: pam_unix(sudo:session): session closed for user root
Jul 11 00:17:41.862915 sshd[1634]: pam_unix(sshd:session): session closed for user core
Jul 11 00:17:41.868622 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:55576.service: Deactivated successfully.
Jul 11 00:17:41.872397 systemd[1]: session-7.scope: Deactivated successfully.
Jul 11 00:17:41.872661 systemd[1]: session-7.scope: Consumed 6.419s CPU time, 160.0M memory peak, 0B memory swap peak.
Jul 11 00:17:41.874310 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit.
Jul 11 00:17:41.875763 systemd-logind[1445]: Removed session 7.
Jul 11 00:17:41.944159 kubelet[2516]: E0711 00:17:41.944092 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:42.370270 kubelet[2516]: E0711 00:17:42.370205 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:43.088730 kubelet[2516]: I0711 00:17:43.088678 2516 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:17:43.089214 containerd[1459]: time="2025-07-11T00:17:43.089153081Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:17:43.089735 kubelet[2516]: I0711 00:17:43.089436 2516 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:17:44.354970 systemd[1]: Created slice kubepods-besteffort-pod9cd1d012_b361_48bc_a30d_615a6dde4907.slice - libcontainer container kubepods-besteffort-pod9cd1d012_b361_48bc_a30d_615a6dde4907.slice. 
Jul 11 00:17:44.375015 kubelet[2516]: I0711 00:17:44.374963 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cd1d012-b361-48bc-a30d-615a6dde4907-xtables-lock\") pod \"kube-proxy-xd6j8\" (UID: \"9cd1d012-b361-48bc-a30d-615a6dde4907\") " pod="kube-system/kube-proxy-xd6j8" Jul 11 00:17:44.375015 kubelet[2516]: I0711 00:17:44.375015 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-host-proc-sys-kernel\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.375015 kubelet[2516]: I0711 00:17:44.375047 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9cd1d012-b361-48bc-a30d-615a6dde4907-kube-proxy\") pod \"kube-proxy-xd6j8\" (UID: \"9cd1d012-b361-48bc-a30d-615a6dde4907\") " pod="kube-system/kube-proxy-xd6j8" Jul 11 00:17:44.375015 kubelet[2516]: I0711 00:17:44.375070 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-host-proc-sys-net\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.375015 kubelet[2516]: I0711 00:17:44.375091 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-xtables-lock\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.375779 kubelet[2516]: I0711 00:17:44.375148 2516 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-run\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.375779 kubelet[2516]: I0711 00:17:44.375207 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-bpf-maps\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.375779 kubelet[2516]: I0711 00:17:44.375231 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-etc-cni-netd\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.375779 kubelet[2516]: I0711 00:17:44.375256 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cd1d012-b361-48bc-a30d-615a6dde4907-lib-modules\") pod \"kube-proxy-xd6j8\" (UID: \"9cd1d012-b361-48bc-a30d-615a6dde4907\") " pod="kube-system/kube-proxy-xd6j8" Jul 11 00:17:44.375779 kubelet[2516]: I0711 00:17:44.375294 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-cgroup\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.376261 kubelet[2516]: I0711 00:17:44.375328 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cni-path\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.376349 kubelet[2516]: I0711 00:17:44.376317 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-lib-modules\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.376846 kubelet[2516]: I0711 00:17:44.376363 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgv2\" (UniqueName: \"kubernetes.io/projected/0d33e498-1869-4418-a4a0-051fdb0311eb-kube-api-access-phgv2\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.376846 kubelet[2516]: I0711 00:17:44.376392 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klrsx\" (UniqueName: \"kubernetes.io/projected/9cd1d012-b361-48bc-a30d-615a6dde4907-kube-api-access-klrsx\") pod \"kube-proxy-xd6j8\" (UID: \"9cd1d012-b361-48bc-a30d-615a6dde4907\") " pod="kube-system/kube-proxy-xd6j8" Jul 11 00:17:44.376846 kubelet[2516]: I0711 00:17:44.376414 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-hostproc\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.376846 kubelet[2516]: I0711 00:17:44.376437 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-config-path\") pod \"cilium-mx5ts\" 
(UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.376846 kubelet[2516]: I0711 00:17:44.376462 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d33e498-1869-4418-a4a0-051fdb0311eb-hubble-tls\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.377060 kubelet[2516]: I0711 00:17:44.376488 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d33e498-1869-4418-a4a0-051fdb0311eb-clustermesh-secrets\") pod \"cilium-mx5ts\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") " pod="kube-system/cilium-mx5ts" Jul 11 00:17:44.382888 systemd[1]: Created slice kubepods-burstable-pod0d33e498_1869_4418_a4a0_051fdb0311eb.slice - libcontainer container kubepods-burstable-pod0d33e498_1869_4418_a4a0_051fdb0311eb.slice. 
Jul 11 00:17:44.676023 kubelet[2516]: E0711 00:17:44.675944 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:44.677006 containerd[1459]: time="2025-07-11T00:17:44.676928515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xd6j8,Uid:9cd1d012-b361-48bc-a30d-615a6dde4907,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:44.691526 kubelet[2516]: E0711 00:17:44.691444 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:44.692246 containerd[1459]: time="2025-07-11T00:17:44.692186521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mx5ts,Uid:0d33e498-1869-4418-a4a0-051fdb0311eb,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:44.761892 systemd[1]: Created slice kubepods-besteffort-pod7ac59773_73ec_4e6e_99e8_237bd5089b1c.slice - libcontainer container kubepods-besteffort-pod7ac59773_73ec_4e6e_99e8_237bd5089b1c.slice. 
Jul 11 00:17:44.780625 kubelet[2516]: I0711 00:17:44.780536 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ac59773-73ec-4e6e-99e8-237bd5089b1c-cilium-config-path\") pod \"cilium-operator-5d85765b45-ljps5\" (UID: \"7ac59773-73ec-4e6e-99e8-237bd5089b1c\") " pod="kube-system/cilium-operator-5d85765b45-ljps5" Jul 11 00:17:44.780625 kubelet[2516]: I0711 00:17:44.780604 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4jt\" (UniqueName: \"kubernetes.io/projected/7ac59773-73ec-4e6e-99e8-237bd5089b1c-kube-api-access-qp4jt\") pod \"cilium-operator-5d85765b45-ljps5\" (UID: \"7ac59773-73ec-4e6e-99e8-237bd5089b1c\") " pod="kube-system/cilium-operator-5d85765b45-ljps5" Jul 11 00:17:45.065952 kubelet[2516]: E0711 00:17:45.065758 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:45.066601 containerd[1459]: time="2025-07-11T00:17:45.066531119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ljps5,Uid:7ac59773-73ec-4e6e-99e8-237bd5089b1c,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:45.206926 containerd[1459]: time="2025-07-11T00:17:45.206721073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:17:45.206926 containerd[1459]: time="2025-07-11T00:17:45.206832714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:17:45.206926 containerd[1459]: time="2025-07-11T00:17:45.206850739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:45.207241 containerd[1459]: time="2025-07-11T00:17:45.206981205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:45.230371 systemd[1]: Started cri-containerd-35cee65c6a326b408c8733336b2f8079591e7a98be319cb9e67d595260418052.scope - libcontainer container 35cee65c6a326b408c8733336b2f8079591e7a98be319cb9e67d595260418052. Jul 11 00:17:45.261871 containerd[1459]: time="2025-07-11T00:17:45.261824650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xd6j8,Uid:9cd1d012-b361-48bc-a30d-615a6dde4907,Namespace:kube-system,Attempt:0,} returns sandbox id \"35cee65c6a326b408c8733336b2f8079591e7a98be319cb9e67d595260418052\"" Jul 11 00:17:45.262810 kubelet[2516]: E0711 00:17:45.262748 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:45.266752 containerd[1459]: time="2025-07-11T00:17:45.266703085Z" level=info msg="CreateContainer within sandbox \"35cee65c6a326b408c8733336b2f8079591e7a98be319cb9e67d595260418052\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:17:45.280298 containerd[1459]: time="2025-07-11T00:17:45.280156888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:17:45.280298 containerd[1459]: time="2025-07-11T00:17:45.280225688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:17:45.280298 containerd[1459]: time="2025-07-11T00:17:45.280236679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:45.280564 containerd[1459]: time="2025-07-11T00:17:45.280373818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:45.307523 systemd[1]: Started cri-containerd-1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425.scope - libcontainer container 1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425. Jul 11 00:17:45.339198 containerd[1459]: time="2025-07-11T00:17:45.338805258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mx5ts,Uid:0d33e498-1869-4418-a4a0-051fdb0311eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\"" Jul 11 00:17:45.339822 kubelet[2516]: E0711 00:17:45.339745 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:45.341210 containerd[1459]: time="2025-07-11T00:17:45.341102933Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 00:17:45.724637 kubelet[2516]: E0711 00:17:45.724587 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:45.862295 containerd[1459]: time="2025-07-11T00:17:45.861520097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:17:45.862835 containerd[1459]: time="2025-07-11T00:17:45.862281086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:17:45.862835 containerd[1459]: time="2025-07-11T00:17:45.862321913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:45.862835 containerd[1459]: time="2025-07-11T00:17:45.862576295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:45.881571 systemd[1]: Started cri-containerd-d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674.scope - libcontainer container d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674. Jul 11 00:17:45.926614 containerd[1459]: time="2025-07-11T00:17:45.926552250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ljps5,Uid:7ac59773-73ec-4e6e-99e8-237bd5089b1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674\"" Jul 11 00:17:45.927584 kubelet[2516]: E0711 00:17:45.927527 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:46.195227 containerd[1459]: time="2025-07-11T00:17:46.195142126Z" level=info msg="CreateContainer within sandbox \"35cee65c6a326b408c8733336b2f8079591e7a98be319cb9e67d595260418052\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3185f5f3acf8625b55c9e0a860f3166ea74b12c722bc44d7d9cc65407ec55362\"" Jul 11 00:17:46.196360 containerd[1459]: time="2025-07-11T00:17:46.196314262Z" level=info msg="StartContainer for \"3185f5f3acf8625b55c9e0a860f3166ea74b12c722bc44d7d9cc65407ec55362\"" Jul 11 00:17:46.241415 systemd[1]: Started cri-containerd-3185f5f3acf8625b55c9e0a860f3166ea74b12c722bc44d7d9cc65407ec55362.scope - libcontainer container 
3185f5f3acf8625b55c9e0a860f3166ea74b12c722bc44d7d9cc65407ec55362. Jul 11 00:17:46.295821 containerd[1459]: time="2025-07-11T00:17:46.295733940Z" level=info msg="StartContainer for \"3185f5f3acf8625b55c9e0a860f3166ea74b12c722bc44d7d9cc65407ec55362\" returns successfully" Jul 11 00:17:46.385490 kubelet[2516]: E0711 00:17:46.385427 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:46.387231 kubelet[2516]: E0711 00:17:46.387178 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:46.401083 kubelet[2516]: I0711 00:17:46.400787 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xd6j8" podStartSLOduration=2.400763293 podStartE2EDuration="2.400763293s" podCreationTimestamp="2025-07-11 00:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:46.400209837 +0000 UTC m=+7.166156722" watchObservedRunningTime="2025-07-11 00:17:46.400763293 +0000 UTC m=+7.166710168" Jul 11 00:17:47.388858 kubelet[2516]: E0711 00:17:47.388729 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:50.398616 kubelet[2516]: E0711 00:17:50.398468 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:51.396626 kubelet[2516]: E0711 00:17:51.396557 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 11 00:17:51.947987 kubelet[2516]: E0711 00:17:51.947937 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:55.323951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593376694.mount: Deactivated successfully. Jul 11 00:18:02.298497 containerd[1459]: time="2025-07-11T00:18:02.298398474Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:02.317907 containerd[1459]: time="2025-07-11T00:18:02.317797475Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 11 00:18:02.342994 containerd[1459]: time="2025-07-11T00:18:02.342434967Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:02.346299 containerd[1459]: time="2025-07-11T00:18:02.345233069Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.00401035s" Jul 11 00:18:02.346299 containerd[1459]: time="2025-07-11T00:18:02.345317218Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 11 00:18:02.349202 containerd[1459]: 
time="2025-07-11T00:18:02.349081721Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 00:18:02.350234 containerd[1459]: time="2025-07-11T00:18:02.350187344Z" level=info msg="CreateContainer within sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:18:02.613427 containerd[1459]: time="2025-07-11T00:18:02.613222643Z" level=info msg="CreateContainer within sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\"" Jul 11 00:18:02.614374 containerd[1459]: time="2025-07-11T00:18:02.614064218Z" level=info msg="StartContainer for \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\"" Jul 11 00:18:02.658672 systemd[1]: Started cri-containerd-0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc.scope - libcontainer container 0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc. Jul 11 00:18:02.697805 containerd[1459]: time="2025-07-11T00:18:02.697733858Z" level=info msg="StartContainer for \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\" returns successfully" Jul 11 00:18:02.715023 systemd[1]: cri-containerd-0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc.scope: Deactivated successfully. Jul 11 00:18:03.424139 kubelet[2516]: E0711 00:18:03.424068 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:03.554236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc-rootfs.mount: Deactivated successfully. 
Jul 11 00:18:03.686669 containerd[1459]: time="2025-07-11T00:18:03.683518164Z" level=info msg="shim disconnected" id=0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc namespace=k8s.io Jul 11 00:18:03.686669 containerd[1459]: time="2025-07-11T00:18:03.686546140Z" level=warning msg="cleaning up after shim disconnected" id=0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc namespace=k8s.io Jul 11 00:18:03.686669 containerd[1459]: time="2025-07-11T00:18:03.686578591Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:18:04.428573 kubelet[2516]: E0711 00:18:04.428532 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:04.430876 containerd[1459]: time="2025-07-11T00:18:04.430823871Z" level=info msg="CreateContainer within sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:18:04.948969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121154151.mount: Deactivated successfully. Jul 11 00:18:05.148761 containerd[1459]: time="2025-07-11T00:18:05.148681222Z" level=info msg="CreateContainer within sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\"" Jul 11 00:18:05.150220 containerd[1459]: time="2025-07-11T00:18:05.150131744Z" level=info msg="StartContainer for \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\"" Jul 11 00:18:05.189538 systemd[1]: Started cri-containerd-4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42.scope - libcontainer container 4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42. 
Jul 11 00:18:05.230962 containerd[1459]: time="2025-07-11T00:18:05.230797664Z" level=info msg="StartContainer for \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\" returns successfully" Jul 11 00:18:05.251159 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:18:05.251608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:18:05.251735 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:18:05.260594 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:18:05.260898 systemd[1]: cri-containerd-4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42.scope: Deactivated successfully. Jul 11 00:18:05.286936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42-rootfs.mount: Deactivated successfully. Jul 11 00:18:05.303536 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 11 00:18:05.308992 containerd[1459]: time="2025-07-11T00:18:05.308915796Z" level=info msg="shim disconnected" id=4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42 namespace=k8s.io Jul 11 00:18:05.308992 containerd[1459]: time="2025-07-11T00:18:05.308990187Z" level=warning msg="cleaning up after shim disconnected" id=4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42 namespace=k8s.io Jul 11 00:18:05.309200 containerd[1459]: time="2025-07-11T00:18:05.309004965Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:18:05.434136 kubelet[2516]: E0711 00:18:05.433432 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:05.437677 containerd[1459]: time="2025-07-11T00:18:05.437630931Z" level=info msg="CreateContainer within sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:18:07.169072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197702960.mount: Deactivated successfully. Jul 11 00:18:08.912534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673482049.mount: Deactivated successfully. 
Jul 11 00:18:09.430555 containerd[1459]: time="2025-07-11T00:18:09.430481221Z" level=info msg="CreateContainer within sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\"" Jul 11 00:18:09.431740 containerd[1459]: time="2025-07-11T00:18:09.431705866Z" level=info msg="StartContainer for \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\"" Jul 11 00:18:09.481510 systemd[1]: Started cri-containerd-66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea.scope - libcontainer container 66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea. Jul 11 00:18:09.526067 systemd[1]: cri-containerd-66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea.scope: Deactivated successfully. Jul 11 00:18:09.734091 containerd[1459]: time="2025-07-11T00:18:09.733891531Z" level=info msg="StartContainer for \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\" returns successfully" Jul 11 00:18:09.909524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea-rootfs.mount: Deactivated successfully. 
Jul 11 00:18:10.194045 containerd[1459]: time="2025-07-11T00:18:10.193956514Z" level=info msg="shim disconnected" id=66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea namespace=k8s.io Jul 11 00:18:10.194045 containerd[1459]: time="2025-07-11T00:18:10.194026806Z" level=warning msg="cleaning up after shim disconnected" id=66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea namespace=k8s.io Jul 11 00:18:10.194045 containerd[1459]: time="2025-07-11T00:18:10.194042646Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:18:10.447464 kubelet[2516]: E0711 00:18:10.447302 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:10.448929 containerd[1459]: time="2025-07-11T00:18:10.448769654Z" level=info msg="CreateContainer within sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:18:11.095146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346879655.mount: Deactivated successfully. Jul 11 00:18:11.949434 containerd[1459]: time="2025-07-11T00:18:11.949352748Z" level=info msg="CreateContainer within sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\"" Jul 11 00:18:11.950169 containerd[1459]: time="2025-07-11T00:18:11.950079175Z" level=info msg="StartContainer for \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\"" Jul 11 00:18:11.982291 systemd[1]: run-containerd-runc-k8s.io-b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f-runc.KHHXBb.mount: Deactivated successfully. 
Jul 11 00:18:11.993601 systemd[1]: Started cri-containerd-b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f.scope - libcontainer container b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f.
Jul 11 00:18:12.007387 containerd[1459]: time="2025-07-11T00:18:12.007099625Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:18:12.010861 containerd[1459]: time="2025-07-11T00:18:12.010726082Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 11 00:18:12.014709 containerd[1459]: time="2025-07-11T00:18:12.014636774Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:18:12.016225 containerd[1459]: time="2025-07-11T00:18:12.016174749Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 9.667016333s"
Jul 11 00:18:12.016517 containerd[1459]: time="2025-07-11T00:18:12.016398210Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 11 00:18:12.023393 containerd[1459]: time="2025-07-11T00:18:12.023301927Z" level=info msg="CreateContainer within sandbox \"d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 11 00:18:12.031862 systemd[1]: cri-containerd-b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f.scope: Deactivated successfully.
Jul 11 00:18:12.071357 containerd[1459]: time="2025-07-11T00:18:12.070916553Z" level=info msg="StartContainer for \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\" returns successfully"
Jul 11 00:18:12.097609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f-rootfs.mount: Deactivated successfully.
Jul 11 00:18:12.453579 kubelet[2516]: E0711 00:18:12.453525 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:12.491473 containerd[1459]: time="2025-07-11T00:18:12.491363402Z" level=info msg="shim disconnected" id=b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f namespace=k8s.io
Jul 11 00:18:12.491473 containerd[1459]: time="2025-07-11T00:18:12.491438473Z" level=warning msg="cleaning up after shim disconnected" id=b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f namespace=k8s.io
Jul 11 00:18:12.491473 containerd[1459]: time="2025-07-11T00:18:12.491450837Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:18:12.770624 containerd[1459]: time="2025-07-11T00:18:12.770289681Z" level=info msg="CreateContainer within sandbox \"d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\""
Jul 11 00:18:12.771261 containerd[1459]: time="2025-07-11T00:18:12.771217619Z" level=info msg="StartContainer for \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\""
Jul 11 00:18:12.806271 systemd[1]: Started cri-containerd-d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8.scope - libcontainer container d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8.
Jul 11 00:18:13.010222 containerd[1459]: time="2025-07-11T00:18:13.010157242Z" level=info msg="StartContainer for \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\" returns successfully"
Jul 11 00:18:13.457805 kubelet[2516]: E0711 00:18:13.457762 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:13.460220 kubelet[2516]: E0711 00:18:13.460170 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:13.460385 containerd[1459]: time="2025-07-11T00:18:13.460184717Z" level=info msg="CreateContainer within sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 00:18:14.107666 containerd[1459]: time="2025-07-11T00:18:14.107560203Z" level=info msg="CreateContainer within sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\""
Jul 11 00:18:14.108278 containerd[1459]: time="2025-07-11T00:18:14.108235734Z" level=info msg="StartContainer for \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\""
Jul 11 00:18:14.154300 systemd[1]: Started cri-containerd-305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f.scope - libcontainer container 305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f.
Jul 11 00:18:14.298352 containerd[1459]: time="2025-07-11T00:18:14.298253148Z" level=info msg="StartContainer for \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\" returns successfully"
Jul 11 00:18:14.446371 kubelet[2516]: I0711 00:18:14.446299 2516 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 11 00:18:14.472492 kubelet[2516]: E0711 00:18:14.472441 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:14.473171 kubelet[2516]: E0711 00:18:14.472764 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:14.590771 kubelet[2516]: I0711 00:18:14.590641 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ljps5" podStartSLOduration=4.499936202 podStartE2EDuration="30.590609346s" podCreationTimestamp="2025-07-11 00:17:44 +0000 UTC" firstStartedPulling="2025-07-11 00:17:45.928131867 +0000 UTC m=+6.694078732" lastFinishedPulling="2025-07-11 00:18:12.018805011 +0000 UTC m=+32.784751876" observedRunningTime="2025-07-11 00:18:14.464142125 +0000 UTC m=+35.230088990" watchObservedRunningTime="2025-07-11 00:18:14.590609346 +0000 UTC m=+35.356556231"
Jul 11 00:18:14.607226 systemd[1]: Created slice kubepods-burstable-pode1aeb14e_2a5a_40cd_bc25_ac97fed634f0.slice - libcontainer container kubepods-burstable-pode1aeb14e_2a5a_40cd_bc25_ac97fed634f0.slice.
Jul 11 00:18:14.625918 systemd[1]: Created slice kubepods-burstable-podb4727daa_86d0_433b_b1bf_f76f10310acd.slice - libcontainer container kubepods-burstable-podb4727daa_86d0_433b_b1bf_f76f10310acd.slice.
Jul 11 00:18:14.648494 kubelet[2516]: I0711 00:18:14.648417 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mx5ts" podStartSLOduration=13.640752278 podStartE2EDuration="30.648390729s" podCreationTimestamp="2025-07-11 00:17:44 +0000 UTC" firstStartedPulling="2025-07-11 00:17:45.340499221 +0000 UTC m=+6.106446086" lastFinishedPulling="2025-07-11 00:18:02.348137672 +0000 UTC m=+23.114084537" observedRunningTime="2025-07-11 00:18:14.619417823 +0000 UTC m=+35.385364708" watchObservedRunningTime="2025-07-11 00:18:14.648390729 +0000 UTC m=+35.414337594"
Jul 11 00:18:14.706808 kubelet[2516]: I0711 00:18:14.706217 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1aeb14e-2a5a-40cd-bc25-ac97fed634f0-config-volume\") pod \"coredns-7c65d6cfc9-b8kgk\" (UID: \"e1aeb14e-2a5a-40cd-bc25-ac97fed634f0\") " pod="kube-system/coredns-7c65d6cfc9-b8kgk"
Jul 11 00:18:14.706808 kubelet[2516]: I0711 00:18:14.706343 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4727daa-86d0-433b-b1bf-f76f10310acd-config-volume\") pod \"coredns-7c65d6cfc9-n659k\" (UID: \"b4727daa-86d0-433b-b1bf-f76f10310acd\") " pod="kube-system/coredns-7c65d6cfc9-n659k"
Jul 11 00:18:14.706808 kubelet[2516]: I0711 00:18:14.706364 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjr27\" (UniqueName: \"kubernetes.io/projected/e1aeb14e-2a5a-40cd-bc25-ac97fed634f0-kube-api-access-jjr27\") pod \"coredns-7c65d6cfc9-b8kgk\" (UID: \"e1aeb14e-2a5a-40cd-bc25-ac97fed634f0\") " pod="kube-system/coredns-7c65d6cfc9-b8kgk"
Jul 11 00:18:14.706808 kubelet[2516]: I0711 00:18:14.706390 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h97hq\" (UniqueName: \"kubernetes.io/projected/b4727daa-86d0-433b-b1bf-f76f10310acd-kube-api-access-h97hq\") pod \"coredns-7c65d6cfc9-n659k\" (UID: \"b4727daa-86d0-433b-b1bf-f76f10310acd\") " pod="kube-system/coredns-7c65d6cfc9-n659k"
Jul 11 00:18:15.219783 kubelet[2516]: E0711 00:18:15.219724 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:15.234418 kubelet[2516]: E0711 00:18:15.234345 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:15.240187 containerd[1459]: time="2025-07-11T00:18:15.240144147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n659k,Uid:b4727daa-86d0-433b-b1bf-f76f10310acd,Namespace:kube-system,Attempt:0,}"
Jul 11 00:18:15.240599 containerd[1459]: time="2025-07-11T00:18:15.240371804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b8kgk,Uid:e1aeb14e-2a5a-40cd-bc25-ac97fed634f0,Namespace:kube-system,Attempt:0,}"
Jul 11 00:18:15.241414 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:59266.service - OpenSSH per-connection server daemon (10.0.0.1:59266).
Jul 11 00:18:15.369532 sshd[3329]: Accepted publickey for core from 10.0.0.1 port 59266 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:15.372380 sshd[3329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:15.382890 systemd-logind[1445]: New session 8 of user core.
Jul 11 00:18:15.389573 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 11 00:18:15.527401 kubelet[2516]: E0711 00:18:15.527246 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:15.633548 sshd[3329]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:15.637749 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:59266.service: Deactivated successfully.
Jul 11 00:18:15.640514 systemd[1]: session-8.scope: Deactivated successfully.
Jul 11 00:18:15.642686 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit.
Jul 11 00:18:15.644590 systemd-logind[1445]: Removed session 8.
Jul 11 00:18:16.529240 kubelet[2516]: E0711 00:18:16.529196 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:17.637536 systemd-networkd[1386]: cilium_host: Link UP
Jul 11 00:18:17.637860 systemd-networkd[1386]: cilium_net: Link UP
Jul 11 00:18:17.638257 systemd-networkd[1386]: cilium_net: Gained carrier
Jul 11 00:18:17.638465 systemd-networkd[1386]: cilium_host: Gained carrier
Jul 11 00:18:17.798330 systemd-networkd[1386]: cilium_vxlan: Link UP
Jul 11 00:18:17.798344 systemd-networkd[1386]: cilium_vxlan: Gained carrier
Jul 11 00:18:17.996287 systemd-networkd[1386]: cilium_host: Gained IPv6LL
Jul 11 00:18:18.099173 kernel: NET: Registered PF_ALG protocol family
Jul 11 00:18:18.402405 systemd-networkd[1386]: cilium_net: Gained IPv6LL
Jul 11 00:18:18.970518 systemd-networkd[1386]: lxc_health: Link UP
Jul 11 00:18:18.985194 systemd-networkd[1386]: lxc_health: Gained carrier
Jul 11 00:18:19.550265 systemd-networkd[1386]: lxcdeba81686893: Link UP
Jul 11 00:18:19.559153 kernel: eth0: renamed from tmp59b28
Jul 11 00:18:19.568886 systemd-networkd[1386]: lxcdeba81686893: Gained carrier
Jul 11 00:18:19.581852 systemd-networkd[1386]: lxc46fdd67a5bd2: Link UP
Jul 11 00:18:19.591146 kernel: eth0: renamed from tmp26088
Jul 11 00:18:19.600399 systemd-networkd[1386]: lxc46fdd67a5bd2: Gained carrier
Jul 11 00:18:19.809341 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL
Jul 11 00:18:20.385329 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Jul 11 00:18:20.647102 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:46102.service - OpenSSH per-connection server daemon (10.0.0.1:46102).
Jul 11 00:18:20.693928 kubelet[2516]: E0711 00:18:20.693818 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:20.769381 systemd-networkd[1386]: lxc46fdd67a5bd2: Gained IPv6LL
Jul 11 00:18:20.956239 sshd[3753]: Accepted publickey for core from 10.0.0.1 port 46102 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:20.958710 sshd[3753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:20.961471 systemd-networkd[1386]: lxcdeba81686893: Gained IPv6LL
Jul 11 00:18:20.966915 systemd-logind[1445]: New session 9 of user core.
Jul 11 00:18:20.973760 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 11 00:18:21.365094 sshd[3753]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:21.369704 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:46102.service: Deactivated successfully.
Jul 11 00:18:21.372255 systemd[1]: session-9.scope: Deactivated successfully.
Jul 11 00:18:21.373008 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit.
Jul 11 00:18:21.374148 systemd-logind[1445]: Removed session 9.
Jul 11 00:18:21.538914 kubelet[2516]: E0711 00:18:21.538868 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:23.892810 containerd[1459]: time="2025-07-11T00:18:23.892598628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:18:23.892810 containerd[1459]: time="2025-07-11T00:18:23.892748811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:18:23.892810 containerd[1459]: time="2025-07-11T00:18:23.892776162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:23.893474 containerd[1459]: time="2025-07-11T00:18:23.892919582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:23.917975 containerd[1459]: time="2025-07-11T00:18:23.917430723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:18:23.917975 containerd[1459]: time="2025-07-11T00:18:23.917551220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:18:23.917975 containerd[1459]: time="2025-07-11T00:18:23.917569203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:23.917975 containerd[1459]: time="2025-07-11T00:18:23.917694278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:23.936382 systemd[1]: Started cri-containerd-59b28c721c857720e0535fe8bc8d892e10d3746d456d32a00b591e5762e6916d.scope - libcontainer container 59b28c721c857720e0535fe8bc8d892e10d3746d456d32a00b591e5762e6916d.
Jul 11 00:18:23.943254 systemd[1]: Started cri-containerd-26088e75be3b1795f7c7085b421835877fa01ec046d8bb05d429bbb61fcd11bc.scope - libcontainer container 26088e75be3b1795f7c7085b421835877fa01ec046d8bb05d429bbb61fcd11bc.
Jul 11 00:18:23.955479 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:18:23.960853 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:18:23.992448 containerd[1459]: time="2025-07-11T00:18:23.992314596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b8kgk,Uid:e1aeb14e-2a5a-40cd-bc25-ac97fed634f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"59b28c721c857720e0535fe8bc8d892e10d3746d456d32a00b591e5762e6916d\""
Jul 11 00:18:23.993161 containerd[1459]: time="2025-07-11T00:18:23.993009413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n659k,Uid:b4727daa-86d0-433b-b1bf-f76f10310acd,Namespace:kube-system,Attempt:0,} returns sandbox id \"26088e75be3b1795f7c7085b421835877fa01ec046d8bb05d429bbb61fcd11bc\""
Jul 11 00:18:23.993344 kubelet[2516]: E0711 00:18:23.993322 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:23.994319 kubelet[2516]: E0711 00:18:23.994258 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:23.997774 containerd[1459]: time="2025-07-11T00:18:23.997739002Z" level=info msg="CreateContainer within sandbox \"26088e75be3b1795f7c7085b421835877fa01ec046d8bb05d429bbb61fcd11bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 00:18:23.998622 containerd[1459]: time="2025-07-11T00:18:23.998572130Z" level=info msg="CreateContainer within sandbox \"59b28c721c857720e0535fe8bc8d892e10d3746d456d32a00b591e5762e6916d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 00:18:24.230959 containerd[1459]: time="2025-07-11T00:18:24.229632103Z" level=info msg="CreateContainer within sandbox \"59b28c721c857720e0535fe8bc8d892e10d3746d456d32a00b591e5762e6916d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b513b93c04ff2abc140e11bacb8b134e11de11ab7c119b74ac870e9cc1320673\""
Jul 11 00:18:24.230959 containerd[1459]: time="2025-07-11T00:18:24.230407472Z" level=info msg="StartContainer for \"b513b93c04ff2abc140e11bacb8b134e11de11ab7c119b74ac870e9cc1320673\""
Jul 11 00:18:24.237937 containerd[1459]: time="2025-07-11T00:18:24.237588835Z" level=info msg="CreateContainer within sandbox \"26088e75be3b1795f7c7085b421835877fa01ec046d8bb05d429bbb61fcd11bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2a41bcb09904bc245867202dcdf30264434650b777137c9769cae5ea1632500\""
Jul 11 00:18:24.238566 containerd[1459]: time="2025-07-11T00:18:24.238528403Z" level=info msg="StartContainer for \"d2a41bcb09904bc245867202dcdf30264434650b777137c9769cae5ea1632500\""
Jul 11 00:18:24.280291 systemd[1]: Started cri-containerd-b513b93c04ff2abc140e11bacb8b134e11de11ab7c119b74ac870e9cc1320673.scope - libcontainer container b513b93c04ff2abc140e11bacb8b134e11de11ab7c119b74ac870e9cc1320673.
Jul 11 00:18:24.286269 systemd[1]: Started cri-containerd-d2a41bcb09904bc245867202dcdf30264434650b777137c9769cae5ea1632500.scope - libcontainer container d2a41bcb09904bc245867202dcdf30264434650b777137c9769cae5ea1632500.
Jul 11 00:18:24.374490 containerd[1459]: time="2025-07-11T00:18:24.373087534Z" level=info msg="StartContainer for \"d2a41bcb09904bc245867202dcdf30264434650b777137c9769cae5ea1632500\" returns successfully"
Jul 11 00:18:24.374490 containerd[1459]: time="2025-07-11T00:18:24.373087544Z" level=info msg="StartContainer for \"b513b93c04ff2abc140e11bacb8b134e11de11ab7c119b74ac870e9cc1320673\" returns successfully"
Jul 11 00:18:24.551186 kubelet[2516]: E0711 00:18:24.549589 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:24.552427 kubelet[2516]: E0711 00:18:24.552391 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:24.573682 kubelet[2516]: I0711 00:18:24.572924 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-n659k" podStartSLOduration=40.5728987 podStartE2EDuration="40.5728987s" podCreationTimestamp="2025-07-11 00:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:18:24.572509218 +0000 UTC m=+45.338456083" watchObservedRunningTime="2025-07-11 00:18:24.5728987 +0000 UTC m=+45.338845576"
Jul 11 00:18:24.898443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013399021.mount: Deactivated successfully.
Jul 11 00:18:25.292368 kubelet[2516]: I0711 00:18:25.292126 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-b8kgk" podStartSLOduration=41.292082344 podStartE2EDuration="41.292082344s" podCreationTimestamp="2025-07-11 00:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:18:24.598272122 +0000 UTC m=+45.364218987" watchObservedRunningTime="2025-07-11 00:18:25.292082344 +0000 UTC m=+46.058029209"
Jul 11 00:18:25.554480 kubelet[2516]: E0711 00:18:25.554342 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:25.554682 kubelet[2516]: E0711 00:18:25.554586 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:26.391728 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:46110.service - OpenSSH per-connection server daemon (10.0.0.1:46110).
Jul 11 00:18:26.475405 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 46110 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:26.485690 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:26.490629 systemd-logind[1445]: New session 10 of user core.
Jul 11 00:18:26.497288 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 11 00:18:26.557678 kubelet[2516]: E0711 00:18:26.557371 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:26.557678 kubelet[2516]: E0711 00:18:26.557569 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:26.984736 sshd[3947]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:26.990361 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:46110.service: Deactivated successfully.
Jul 11 00:18:26.992827 systemd[1]: session-10.scope: Deactivated successfully.
Jul 11 00:18:26.993579 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit.
Jul 11 00:18:26.994683 systemd-logind[1445]: Removed session 10.
Jul 11 00:18:31.999611 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:59904.service - OpenSSH per-connection server daemon (10.0.0.1:59904).
Jul 11 00:18:32.037247 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 59904 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:32.039291 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:32.044273 systemd-logind[1445]: New session 11 of user core.
Jul 11 00:18:32.056316 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 11 00:18:32.183822 sshd[3968]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:32.188922 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:59904.service: Deactivated successfully.
Jul 11 00:18:32.191611 systemd[1]: session-11.scope: Deactivated successfully.
Jul 11 00:18:32.192344 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit.
Jul 11 00:18:32.193887 systemd-logind[1445]: Removed session 11.
Jul 11 00:18:37.197725 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:59920.service - OpenSSH per-connection server daemon (10.0.0.1:59920).
Jul 11 00:18:37.236488 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 59920 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:37.239264 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:37.245741 systemd-logind[1445]: New session 12 of user core.
Jul 11 00:18:37.255511 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 11 00:18:37.393407 sshd[3984]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:37.398046 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:59920.service: Deactivated successfully.
Jul 11 00:18:37.400796 systemd[1]: session-12.scope: Deactivated successfully.
Jul 11 00:18:37.401601 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit.
Jul 11 00:18:37.402664 systemd-logind[1445]: Removed session 12.
Jul 11 00:18:42.404174 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:48712.service - OpenSSH per-connection server daemon (10.0.0.1:48712).
Jul 11 00:18:42.438652 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 48712 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:42.440489 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:42.445376 systemd-logind[1445]: New session 13 of user core.
Jul 11 00:18:42.460291 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 11 00:18:42.830085 sshd[4002]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:42.834763 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:48712.service: Deactivated successfully.
Jul 11 00:18:42.837209 systemd[1]: session-13.scope: Deactivated successfully.
Jul 11 00:18:42.838107 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit.
Jul 11 00:18:42.839572 systemd-logind[1445]: Removed session 13.
Jul 11 00:18:47.845293 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:48726.service - OpenSSH per-connection server daemon (10.0.0.1:48726).
Jul 11 00:18:47.886787 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 48726 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:47.889188 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:47.894798 systemd-logind[1445]: New session 14 of user core.
Jul 11 00:18:47.909390 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 11 00:18:48.057484 sshd[4020]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:48.065346 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:48726.service: Deactivated successfully.
Jul 11 00:18:48.067491 systemd[1]: session-14.scope: Deactivated successfully.
Jul 11 00:18:48.069193 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit.
Jul 11 00:18:48.075507 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:48736.service - OpenSSH per-connection server daemon (10.0.0.1:48736).
Jul 11 00:18:48.076900 systemd-logind[1445]: Removed session 14.
Jul 11 00:18:48.106979 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 48736 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:48.108713 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:48.112973 systemd-logind[1445]: New session 15 of user core.
Jul 11 00:18:48.119223 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 11 00:18:48.637924 sshd[4036]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:48.651874 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:48736.service: Deactivated successfully.
Jul 11 00:18:48.654276 systemd[1]: session-15.scope: Deactivated successfully.
Jul 11 00:18:48.656934 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit.
Jul 11 00:18:48.661509 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:48744.service - OpenSSH per-connection server daemon (10.0.0.1:48744).
Jul 11 00:18:48.662794 systemd-logind[1445]: Removed session 15.
Jul 11 00:18:48.695548 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 48744 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:48.697631 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:48.703710 systemd-logind[1445]: New session 16 of user core.
Jul 11 00:18:48.714287 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 11 00:18:48.928009 sshd[4049]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:48.933740 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:48744.service: Deactivated successfully.
Jul 11 00:18:48.936456 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 00:18:48.937473 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit.
Jul 11 00:18:48.938838 systemd-logind[1445]: Removed session 16.
Jul 11 00:18:52.355070 kubelet[2516]: E0711 00:18:52.355001 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:53.948438 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:55130.service - OpenSSH per-connection server daemon (10.0.0.1:55130).
Jul 11 00:18:53.986583 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 55130 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:53.988535 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:53.993087 systemd-logind[1445]: New session 17 of user core.
Jul 11 00:18:54.006610 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 11 00:18:54.344635 sshd[4065]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:54.351174 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:55130.service: Deactivated successfully.
Jul 11 00:18:54.353936 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 00:18:54.355012 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit.
Jul 11 00:18:54.355226 kubelet[2516]: E0711 00:18:54.355193 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:54.356424 systemd-logind[1445]: Removed session 17.
Jul 11 00:18:58.355415 kubelet[2516]: E0711 00:18:58.355327 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:59.149679 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:55144.service - OpenSSH per-connection server daemon (10.0.0.1:55144).
Jul 11 00:18:59.186354 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 55144 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:18:59.188209 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:59.192807 systemd-logind[1445]: New session 18 of user core.
Jul 11 00:18:59.208346 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 00:18:59.323857 sshd[4080]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:59.328695 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:55144.service: Deactivated successfully.
Jul 11 00:18:59.330950 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 00:18:59.331615 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit.
Jul 11 00:18:59.332513 systemd-logind[1445]: Removed session 18.
Jul 11 00:19:04.338198 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:46320.service - OpenSSH per-connection server daemon (10.0.0.1:46320).
Jul 11 00:19:04.376808 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 46320 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:04.379227 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:04.385534 systemd-logind[1445]: New session 19 of user core.
Jul 11 00:19:04.395401 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 00:19:04.616202 sshd[4095]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:04.621139 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:46320.service: Deactivated successfully.
Jul 11 00:19:04.623463 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 00:19:04.625615 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit.
Jul 11 00:19:04.626681 systemd-logind[1445]: Removed session 19.
Jul 11 00:19:08.354829 kubelet[2516]: E0711 00:19:08.354745 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:09.629569 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:48896.service - OpenSSH per-connection server daemon (10.0.0.1:48896).
Jul 11 00:19:09.667024 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 48896 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:09.669180 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:09.674194 systemd-logind[1445]: New session 20 of user core.
Jul 11 00:19:09.684280 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 00:19:09.807064 sshd[4109]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:09.812526 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:48896.service: Deactivated successfully.
Jul 11 00:19:09.815223 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 00:19:09.816009 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Jul 11 00:19:09.816981 systemd-logind[1445]: Removed session 20.
Jul 11 00:19:14.824490 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:48898.service - OpenSSH per-connection server daemon (10.0.0.1:48898).
Jul 11 00:19:14.864915 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 48898 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:14.867441 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:14.875261 systemd-logind[1445]: New session 21 of user core.
Jul 11 00:19:14.891642 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 11 00:19:15.031979 sshd[4123]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:15.041569 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:48898.service: Deactivated successfully.
Jul 11 00:19:15.044621 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 00:19:15.047267 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Jul 11 00:19:15.059740 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:48902.service - OpenSSH per-connection server daemon (10.0.0.1:48902).
Jul 11 00:19:15.061106 systemd-logind[1445]: Removed session 21.
Jul 11 00:19:15.097234 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 48902 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:15.100127 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:15.107383 systemd-logind[1445]: New session 22 of user core.
Jul 11 00:19:15.116460 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 00:19:15.440167 sshd[4137]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:15.451532 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:48902.service: Deactivated successfully.
Jul 11 00:19:15.454061 systemd[1]: session-22.scope: Deactivated successfully.
Jul 11 00:19:15.456282 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Jul 11 00:19:15.466824 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:48912.service - OpenSSH per-connection server daemon (10.0.0.1:48912).
Jul 11 00:19:15.468594 systemd-logind[1445]: Removed session 22.
Jul 11 00:19:15.501859 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 48912 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:15.504733 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:15.510399 systemd-logind[1445]: New session 23 of user core.
Jul 11 00:19:15.521515 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 11 00:19:17.578733 sshd[4149]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:17.589606 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:48912.service: Deactivated successfully.
Jul 11 00:19:17.592540 systemd[1]: session-23.scope: Deactivated successfully.
Jul 11 00:19:17.595770 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Jul 11 00:19:17.602479 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:48916.service - OpenSSH per-connection server daemon (10.0.0.1:48916).
Jul 11 00:19:17.605135 systemd-logind[1445]: Removed session 23.
Jul 11 00:19:17.659489 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 48916 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:17.661823 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:17.670131 systemd-logind[1445]: New session 24 of user core.
Jul 11 00:19:17.678796 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 11 00:19:18.079358 sshd[4188]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:18.090582 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:48916.service: Deactivated successfully.
Jul 11 00:19:18.095768 systemd[1]: session-24.scope: Deactivated successfully.
Jul 11 00:19:18.096836 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit.
Jul 11 00:19:18.110882 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:48924.service - OpenSSH per-connection server daemon (10.0.0.1:48924).
Jul 11 00:19:18.113877 systemd-logind[1445]: Removed session 24.
Jul 11 00:19:18.157145 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 48924 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:18.159235 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:18.164427 systemd-logind[1445]: New session 25 of user core.
Jul 11 00:19:18.171485 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 11 00:19:18.361100 sshd[4200]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:18.366530 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:48924.service: Deactivated successfully.
Jul 11 00:19:18.369297 systemd[1]: session-25.scope: Deactivated successfully.
Jul 11 00:19:18.370096 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit.
Jul 11 00:19:18.371888 systemd-logind[1445]: Removed session 25.
Jul 11 00:19:23.378741 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:47408.service - OpenSSH per-connection server daemon (10.0.0.1:47408).
Jul 11 00:19:23.415821 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 47408 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:23.418016 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:23.422901 systemd-logind[1445]: New session 26 of user core.
Jul 11 00:19:23.430271 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 11 00:19:23.635499 sshd[4215]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:23.640747 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:47408.service: Deactivated successfully.
Jul 11 00:19:23.643886 systemd[1]: session-26.scope: Deactivated successfully.
Jul 11 00:19:23.644685 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit.
Jul 11 00:19:23.645696 systemd-logind[1445]: Removed session 26.
Jul 11 00:19:28.653228 systemd[1]: Started sshd@26-10.0.0.79:22-10.0.0.1:47418.service - OpenSSH per-connection server daemon (10.0.0.1:47418).
Jul 11 00:19:28.691795 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 47418 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:28.693673 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:28.698406 systemd-logind[1445]: New session 27 of user core.
Jul 11 00:19:28.708443 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 11 00:19:28.823217 sshd[4233]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:28.828089 systemd[1]: sshd@26-10.0.0.79:22-10.0.0.1:47418.service: Deactivated successfully.
Jul 11 00:19:28.830649 systemd[1]: session-27.scope: Deactivated successfully.
Jul 11 00:19:28.831449 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit.
Jul 11 00:19:28.832737 systemd-logind[1445]: Removed session 27.
Jul 11 00:19:33.836492 systemd[1]: Started sshd@27-10.0.0.79:22-10.0.0.1:40552.service - OpenSSH per-connection server daemon (10.0.0.1:40552).
Jul 11 00:19:33.876881 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 40552 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:33.878910 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:33.883980 systemd-logind[1445]: New session 28 of user core.
Jul 11 00:19:33.894309 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 11 00:19:34.002880 sshd[4247]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:34.006619 systemd[1]: sshd@27-10.0.0.79:22-10.0.0.1:40552.service: Deactivated successfully.
Jul 11 00:19:34.008634 systemd[1]: session-28.scope: Deactivated successfully.
Jul 11 00:19:34.009254 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit.
Jul 11 00:19:34.010083 systemd-logind[1445]: Removed session 28.
Jul 11 00:19:39.017267 systemd[1]: Started sshd@28-10.0.0.79:22-10.0.0.1:40556.service - OpenSSH per-connection server daemon (10.0.0.1:40556).
Jul 11 00:19:39.053659 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 40556 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:39.055539 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:39.060250 systemd-logind[1445]: New session 29 of user core.
Jul 11 00:19:39.072290 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 11 00:19:39.237739 sshd[4261]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:39.250554 systemd[1]: sshd@28-10.0.0.79:22-10.0.0.1:40556.service: Deactivated successfully.
Jul 11 00:19:39.252688 systemd[1]: session-29.scope: Deactivated successfully.
Jul 11 00:19:39.254441 systemd-logind[1445]: Session 29 logged out. Waiting for processes to exit.
Jul 11 00:19:39.262440 systemd[1]: Started sshd@29-10.0.0.79:22-10.0.0.1:40568.service - OpenSSH per-connection server daemon (10.0.0.1:40568).
Jul 11 00:19:39.263656 systemd-logind[1445]: Removed session 29.
Jul 11 00:19:39.293820 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 40568 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:19:39.295732 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:19:39.300315 systemd-logind[1445]: New session 30 of user core.
Jul 11 00:19:39.307254 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 11 00:19:39.355559 kubelet[2516]: E0711 00:19:39.355517 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:40.355647 kubelet[2516]: E0711 00:19:40.355594 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:41.376875 containerd[1459]: time="2025-07-11T00:19:41.376398052Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 00:19:41.383065 containerd[1459]: time="2025-07-11T00:19:41.382870094Z" level=info msg="StopContainer for \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\" with timeout 30 (s)"
Jul 11 00:19:41.383648 containerd[1459]: time="2025-07-11T00:19:41.383101911Z" level=info msg="StopContainer for \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\" with timeout 2 (s)"
Jul 11 00:19:41.383648 containerd[1459]: time="2025-07-11T00:19:41.383438318Z" level=info msg="Stop container \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\" with signal terminated"
Jul 11 00:19:41.383648 containerd[1459]: time="2025-07-11T00:19:41.383438629Z" level=info msg="Stop container \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\" with signal terminated"
Jul 11 00:19:41.408094 systemd-networkd[1386]: lxc_health: Link DOWN
Jul 11 00:19:41.408133 systemd-networkd[1386]: lxc_health: Lost carrier
Jul 11 00:19:41.419418 systemd[1]: cri-containerd-d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8.scope: Deactivated successfully.
Jul 11 00:19:41.440429 systemd[1]: cri-containerd-305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f.scope: Deactivated successfully.
Jul 11 00:19:41.440855 systemd[1]: cri-containerd-305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f.scope: Consumed 8.618s CPU time.
Jul 11 00:19:41.449880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8-rootfs.mount: Deactivated successfully.
Jul 11 00:19:41.462094 containerd[1459]: time="2025-07-11T00:19:41.461990960Z" level=info msg="shim disconnected" id=d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8 namespace=k8s.io
Jul 11 00:19:41.462680 containerd[1459]: time="2025-07-11T00:19:41.462147937Z" level=warning msg="cleaning up after shim disconnected" id=d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8 namespace=k8s.io
Jul 11 00:19:41.462680 containerd[1459]: time="2025-07-11T00:19:41.462164979Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:19:41.471435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f-rootfs.mount: Deactivated successfully.
Jul 11 00:19:41.476295 containerd[1459]: time="2025-07-11T00:19:41.476196047Z" level=info msg="shim disconnected" id=305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f namespace=k8s.io
Jul 11 00:19:41.476651 containerd[1459]: time="2025-07-11T00:19:41.476524218Z" level=warning msg="cleaning up after shim disconnected" id=305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f namespace=k8s.io
Jul 11 00:19:41.476651 containerd[1459]: time="2025-07-11T00:19:41.476547162Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:19:41.498378 containerd[1459]: time="2025-07-11T00:19:41.498302349Z" level=info msg="StopContainer for \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\" returns successfully"
Jul 11 00:19:41.504026 containerd[1459]: time="2025-07-11T00:19:41.503935553Z" level=info msg="StopPodSandbox for \"d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674\""
Jul 11 00:19:41.504026 containerd[1459]: time="2025-07-11T00:19:41.504010715Z" level=info msg="Container to stop \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:19:41.504350 containerd[1459]: time="2025-07-11T00:19:41.504204312Z" level=info msg="StopContainer for \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\" returns successfully"
Jul 11 00:19:41.504852 containerd[1459]: time="2025-07-11T00:19:41.504807292Z" level=info msg="StopPodSandbox for \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\""
Jul 11 00:19:41.504916 containerd[1459]: time="2025-07-11T00:19:41.504852137Z" level=info msg="Container to stop \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:19:41.504916 containerd[1459]: time="2025-07-11T00:19:41.504869359Z" level=info msg="Container to stop \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:19:41.504916 containerd[1459]: time="2025-07-11T00:19:41.504878767Z" level=info msg="Container to stop \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:19:41.504916 containerd[1459]: time="2025-07-11T00:19:41.504888396Z" level=info msg="Container to stop \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:19:41.504916 containerd[1459]: time="2025-07-11T00:19:41.504897282Z" level=info msg="Container to stop \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:19:41.506776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674-shm.mount: Deactivated successfully.
Jul 11 00:19:41.509969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425-shm.mount: Deactivated successfully.
Jul 11 00:19:41.513497 systemd[1]: cri-containerd-d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674.scope: Deactivated successfully.
Jul 11 00:19:41.514812 systemd[1]: cri-containerd-1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425.scope: Deactivated successfully.
Jul 11 00:19:41.551647 containerd[1459]: time="2025-07-11T00:19:41.551573805Z" level=info msg="shim disconnected" id=1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425 namespace=k8s.io
Jul 11 00:19:41.551647 containerd[1459]: time="2025-07-11T00:19:41.551640000Z" level=warning msg="cleaning up after shim disconnected" id=1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425 namespace=k8s.io
Jul 11 00:19:41.551647 containerd[1459]: time="2025-07-11T00:19:41.551649829Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:19:41.551952 containerd[1459]: time="2025-07-11T00:19:41.551713690Z" level=info msg="shim disconnected" id=d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674 namespace=k8s.io
Jul 11 00:19:41.551952 containerd[1459]: time="2025-07-11T00:19:41.551838305Z" level=warning msg="cleaning up after shim disconnected" id=d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674 namespace=k8s.io
Jul 11 00:19:41.551952 containerd[1459]: time="2025-07-11T00:19:41.551850298Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:19:41.593125 containerd[1459]: time="2025-07-11T00:19:41.593013934Z" level=info msg="TearDown network for sandbox \"d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674\" successfully"
Jul 11 00:19:41.593125 containerd[1459]: time="2025-07-11T00:19:41.593099405Z" level=info msg="StopPodSandbox for \"d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674\" returns successfully"
Jul 11 00:19:41.594466 containerd[1459]: time="2025-07-11T00:19:41.594362384Z" level=info msg="TearDown network for sandbox \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" successfully"
Jul 11 00:19:41.594466 containerd[1459]: time="2025-07-11T00:19:41.594442665Z" level=info msg="StopPodSandbox for \"1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425\" returns successfully"
Jul 11 00:19:41.728903 kubelet[2516]: I0711 00:19:41.728833 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-bpf-maps\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.728903 kubelet[2516]: I0711 00:19:41.728911 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d33e498-1869-4418-a4a0-051fdb0311eb-hubble-tls\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730187 kubelet[2516]: I0711 00:19:41.728937 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp4jt\" (UniqueName: \"kubernetes.io/projected/7ac59773-73ec-4e6e-99e8-237bd5089b1c-kube-api-access-qp4jt\") pod \"7ac59773-73ec-4e6e-99e8-237bd5089b1c\" (UID: \"7ac59773-73ec-4e6e-99e8-237bd5089b1c\") "
Jul 11 00:19:41.730187 kubelet[2516]: I0711 00:19:41.728966 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-run\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730187 kubelet[2516]: I0711 00:19:41.728987 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ac59773-73ec-4e6e-99e8-237bd5089b1c-cilium-config-path\") pod \"7ac59773-73ec-4e6e-99e8-237bd5089b1c\" (UID: \"7ac59773-73ec-4e6e-99e8-237bd5089b1c\") "
Jul 11 00:19:41.730187 kubelet[2516]: I0711 00:19:41.729003 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phgv2\" (UniqueName: \"kubernetes.io/projected/0d33e498-1869-4418-a4a0-051fdb0311eb-kube-api-access-phgv2\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730187 kubelet[2516]: I0711 00:19:41.729018 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-lib-modules\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730187 kubelet[2516]: I0711 00:19:41.729000 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:19:41.730357 kubelet[2516]: I0711 00:19:41.729062 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:19:41.730357 kubelet[2516]: I0711 00:19:41.729032 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-xtables-lock\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730357 kubelet[2516]: I0711 00:19:41.729095 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:19:41.730357 kubelet[2516]: I0711 00:19:41.729137 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-host-proc-sys-kernel\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730357 kubelet[2516]: I0711 00:19:41.729157 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-cgroup\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730478 kubelet[2516]: I0711 00:19:41.729176 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-etc-cni-netd\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730478 kubelet[2516]: I0711 00:19:41.729192 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cni-path\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730478 kubelet[2516]: I0711 00:19:41.729214 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-config-path\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730478 kubelet[2516]: I0711 00:19:41.729230 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-host-proc-sys-net\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730478 kubelet[2516]: I0711 00:19:41.729245 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-hostproc\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730478 kubelet[2516]: I0711 00:19:41.729271 2516 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d33e498-1869-4418-a4a0-051fdb0311eb-clustermesh-secrets\") pod \"0d33e498-1869-4418-a4a0-051fdb0311eb\" (UID: \"0d33e498-1869-4418-a4a0-051fdb0311eb\") "
Jul 11 00:19:41.730649 kubelet[2516]: I0711 00:19:41.729308 2516 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 11 00:19:41.730649 kubelet[2516]: I0711 00:19:41.729322 2516 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 11 00:19:41.730649 kubelet[2516]: I0711 00:19:41.729331 2516 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 11 00:19:41.733237 kubelet[2516]: I0711 00:19:41.733047 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac59773-73ec-4e6e-99e8-237bd5089b1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7ac59773-73ec-4e6e-99e8-237bd5089b1c" (UID: "7ac59773-73ec-4e6e-99e8-237bd5089b1c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 11 00:19:41.733237 kubelet[2516]: I0711 00:19:41.733135 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:19:41.733237 kubelet[2516]: I0711 00:19:41.733174 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cni-path" (OuterVolumeSpecName: "cni-path") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:19:41.733237 kubelet[2516]: I0711 00:19:41.733241 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:19:41.733424 kubelet[2516]: I0711 00:19:41.733272 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:19:41.733424 kubelet[2516]: I0711 00:19:41.733310 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:19:41.733424 kubelet[2516]: I0711 00:19:41.733347 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:19:41.733424 kubelet[2516]: I0711 00:19:41.733383 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-hostproc" (OuterVolumeSpecName: "hostproc") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:19:41.734509 kubelet[2516]: I0711 00:19:41.734434 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac59773-73ec-4e6e-99e8-237bd5089b1c-kube-api-access-qp4jt" (OuterVolumeSpecName: "kube-api-access-qp4jt") pod "7ac59773-73ec-4e6e-99e8-237bd5089b1c" (UID: "7ac59773-73ec-4e6e-99e8-237bd5089b1c"). InnerVolumeSpecName "kube-api-access-qp4jt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 00:19:41.734660 kubelet[2516]: I0711 00:19:41.734512 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d33e498-1869-4418-a4a0-051fdb0311eb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 11 00:19:41.734660 kubelet[2516]: I0711 00:19:41.734531 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d33e498-1869-4418-a4a0-051fdb0311eb-kube-api-access-phgv2" (OuterVolumeSpecName: "kube-api-access-phgv2") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "kube-api-access-phgv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 00:19:41.737081 kubelet[2516]: I0711 00:19:41.737037 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d33e498-1869-4418-a4a0-051fdb0311eb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 00:19:41.737859 kubelet[2516]: I0711 00:19:41.737781 2516 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0d33e498-1869-4418-a4a0-051fdb0311eb" (UID: "0d33e498-1869-4418-a4a0-051fdb0311eb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 11 00:19:41.739735 kubelet[2516]: I0711 00:19:41.739691 2516 scope.go:117] "RemoveContainer" containerID="d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8"
Jul 11 00:19:41.741492 containerd[1459]: time="2025-07-11T00:19:41.741442397Z" level=info msg="RemoveContainer for \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\""
Jul 11 00:19:41.746966 containerd[1459]: time="2025-07-11T00:19:41.746919887Z" level=info msg="RemoveContainer for \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\" returns successfully"
Jul 11 00:19:41.747406 kubelet[2516]: I0711 00:19:41.747360 2516 scope.go:117] "RemoveContainer" containerID="d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8"
Jul 11 00:19:41.747613 systemd[1]: Removed slice kubepods-besteffort-pod7ac59773_73ec_4e6e_99e8_237bd5089b1c.slice - libcontainer container kubepods-besteffort-pod7ac59773_73ec_4e6e_99e8_237bd5089b1c.slice.
Jul 11 00:19:41.751753 containerd[1459]: time="2025-07-11T00:19:41.751655474Z" level=error msg="ContainerStatus for \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\": not found"
Jul 11 00:19:41.753026 systemd[1]: Removed slice kubepods-burstable-pod0d33e498_1869_4418_a4a0_051fdb0311eb.slice - libcontainer container kubepods-burstable-pod0d33e498_1869_4418_a4a0_051fdb0311eb.slice.
Jul 11 00:19:41.753801 systemd[1]: kubepods-burstable-pod0d33e498_1869_4418_a4a0_051fdb0311eb.slice: Consumed 8.757s CPU time.
Jul 11 00:19:41.764738 kubelet[2516]: E0711 00:19:41.764692 2516 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\": not found" containerID="d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8" Jul 11 00:19:41.764924 kubelet[2516]: I0711 00:19:41.764744 2516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8"} err="failed to get container status \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0e869ebaf3690089c11dba88c36c87419b9a6f15f0c467bd8ecb3a6fc289ed8\": not found" Jul 11 00:19:41.764924 kubelet[2516]: I0711 00:19:41.764819 2516 scope.go:117] "RemoveContainer" containerID="305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f" Jul 11 00:19:41.767574 containerd[1459]: time="2025-07-11T00:19:41.767512164Z" level=info msg="RemoveContainer for \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\"" Jul 11 00:19:41.772370 containerd[1459]: time="2025-07-11T00:19:41.772300521Z" level=info msg="RemoveContainer for \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\" returns successfully" Jul 11 00:19:41.772695 kubelet[2516]: I0711 00:19:41.772597 2516 scope.go:117] "RemoveContainer" containerID="b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f" Jul 11 00:19:41.775480 containerd[1459]: time="2025-07-11T00:19:41.775366309Z" level=info msg="RemoveContainer for \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\"" Jul 11 00:19:41.779374 containerd[1459]: time="2025-07-11T00:19:41.779332030Z" level=info msg="RemoveContainer for \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\" returns successfully" 
Jul 11 00:19:41.779659 kubelet[2516]: I0711 00:19:41.779632 2516 scope.go:117] "RemoveContainer" containerID="66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea" Jul 11 00:19:41.781332 containerd[1459]: time="2025-07-11T00:19:41.781249868Z" level=info msg="RemoveContainer for \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\"" Jul 11 00:19:41.784631 containerd[1459]: time="2025-07-11T00:19:41.784586638Z" level=info msg="RemoveContainer for \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\" returns successfully" Jul 11 00:19:41.785006 kubelet[2516]: I0711 00:19:41.784887 2516 scope.go:117] "RemoveContainer" containerID="4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42" Jul 11 00:19:41.786591 containerd[1459]: time="2025-07-11T00:19:41.786563949Z" level=info msg="RemoveContainer for \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\"" Jul 11 00:19:41.790159 containerd[1459]: time="2025-07-11T00:19:41.790092162Z" level=info msg="RemoveContainer for \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\" returns successfully" Jul 11 00:19:41.790421 kubelet[2516]: I0711 00:19:41.790318 2516 scope.go:117] "RemoveContainer" containerID="0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc" Jul 11 00:19:41.791674 containerd[1459]: time="2025-07-11T00:19:41.791642595Z" level=info msg="RemoveContainer for \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\"" Jul 11 00:19:41.794885 containerd[1459]: time="2025-07-11T00:19:41.794830995Z" level=info msg="RemoveContainer for \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\" returns successfully" Jul 11 00:19:41.795074 kubelet[2516]: I0711 00:19:41.794993 2516 scope.go:117] "RemoveContainer" containerID="305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f" Jul 11 00:19:41.795410 containerd[1459]: time="2025-07-11T00:19:41.795351168Z" level=error msg="ContainerStatus for 
\"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\": not found" Jul 11 00:19:41.795564 kubelet[2516]: E0711 00:19:41.795528 2516 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\": not found" containerID="305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f" Jul 11 00:19:41.795656 kubelet[2516]: I0711 00:19:41.795568 2516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f"} err="failed to get container status \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"305ea59bad93abc07e0a7c2268b322569f019e49e32f28b98a7549851675ca2f\": not found" Jul 11 00:19:41.795656 kubelet[2516]: I0711 00:19:41.795593 2516 scope.go:117] "RemoveContainer" containerID="b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f" Jul 11 00:19:41.795843 containerd[1459]: time="2025-07-11T00:19:41.795798184Z" level=error msg="ContainerStatus for \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\": not found" Jul 11 00:19:41.795967 kubelet[2516]: E0711 00:19:41.795942 2516 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\": not found" 
containerID="b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f" Jul 11 00:19:41.796029 kubelet[2516]: I0711 00:19:41.795972 2516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f"} err="failed to get container status \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b17f44ae74f8103894cc008f2126b144d0075d4c1cf0a951bc8bd236f957493f\": not found" Jul 11 00:19:41.796029 kubelet[2516]: I0711 00:19:41.795992 2516 scope.go:117] "RemoveContainer" containerID="66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea" Jul 11 00:19:41.796204 containerd[1459]: time="2025-07-11T00:19:41.796163004Z" level=error msg="ContainerStatus for \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\": not found" Jul 11 00:19:41.796434 kubelet[2516]: E0711 00:19:41.796406 2516 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\": not found" containerID="66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea" Jul 11 00:19:41.796482 kubelet[2516]: I0711 00:19:41.796442 2516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea"} err="failed to get container status \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\": rpc error: code = NotFound desc = an error occurred when try to find container \"66c23671e7f0afe21f91f4d7a88b5f6539a6cc3b4c0c4b6ca76a395c62266bea\": not found" Jul 11 
00:19:41.796482 kubelet[2516]: I0711 00:19:41.796458 2516 scope.go:117] "RemoveContainer" containerID="4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42" Jul 11 00:19:41.796689 containerd[1459]: time="2025-07-11T00:19:41.796621781Z" level=error msg="ContainerStatus for \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\": not found" Jul 11 00:19:41.796751 kubelet[2516]: E0711 00:19:41.796734 2516 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\": not found" containerID="4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42" Jul 11 00:19:41.796793 kubelet[2516]: I0711 00:19:41.796752 2516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42"} err="failed to get container status \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d4d97f9ac7b9af415ed735ec3d3f35646b92016d5e663bb8b3bffc4e0c32a42\": not found" Jul 11 00:19:41.796793 kubelet[2516]: I0711 00:19:41.796766 2516 scope.go:117] "RemoveContainer" containerID="0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc" Jul 11 00:19:41.796982 containerd[1459]: time="2025-07-11T00:19:41.796945955Z" level=error msg="ContainerStatus for \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\": not found" Jul 11 00:19:41.797126 kubelet[2516]: E0711 00:19:41.797083 2516 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\": not found" containerID="0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc" Jul 11 00:19:41.797179 kubelet[2516]: I0711 00:19:41.797135 2516 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc"} err="failed to get container status \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d5fd35d24e9efa6ef84bd21324f2ddb22436488458b2f46ac6a65bb5e194dfc\": not found" Jul 11 00:19:41.830277 kubelet[2516]: I0711 00:19:41.830208 2516 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830277 kubelet[2516]: I0711 00:19:41.830247 2516 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d33e498-1869-4418-a4a0-051fdb0311eb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830277 kubelet[2516]: I0711 00:19:41.830272 2516 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830277 kubelet[2516]: I0711 00:19:41.830284 2516 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d33e498-1869-4418-a4a0-051fdb0311eb-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830277 kubelet[2516]: I0711 00:19:41.830299 2516 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qp4jt\" (UniqueName: \"kubernetes.io/projected/7ac59773-73ec-4e6e-99e8-237bd5089b1c-kube-api-access-qp4jt\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830539 kubelet[2516]: I0711 00:19:41.830326 2516 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phgv2\" (UniqueName: \"kubernetes.io/projected/0d33e498-1869-4418-a4a0-051fdb0311eb-kube-api-access-phgv2\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830539 kubelet[2516]: I0711 00:19:41.830342 2516 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ac59773-73ec-4e6e-99e8-237bd5089b1c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830539 kubelet[2516]: I0711 00:19:41.830355 2516 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830539 kubelet[2516]: I0711 00:19:41.830368 2516 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830539 kubelet[2516]: I0711 00:19:41.830379 2516 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830539 kubelet[2516]: I0711 00:19:41.830390 2516 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d33e498-1869-4418-a4a0-051fdb0311eb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830539 kubelet[2516]: I0711 00:19:41.830400 2516 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:41.830539 kubelet[2516]: I0711 00:19:41.830410 2516 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d33e498-1869-4418-a4a0-051fdb0311eb-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:19:42.347742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d94d8c82e1d88ae416b3a5ec66aa34b3bc1d95363983d379c32f2209f5ae8674-rootfs.mount: Deactivated successfully. Jul 11 00:19:42.349104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e1fbc056554cd5550b54fd636c71f3b695cef6bf6375cbbc6d178e82e028425-rootfs.mount: Deactivated successfully. Jul 11 00:19:42.349236 systemd[1]: var-lib-kubelet-pods-7ac59773\x2d73ec\x2d4e6e\x2d99e8\x2d237bd5089b1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqp4jt.mount: Deactivated successfully. Jul 11 00:19:42.349338 systemd[1]: var-lib-kubelet-pods-0d33e498\x2d1869\x2d4418\x2da4a0\x2d051fdb0311eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dphgv2.mount: Deactivated successfully. Jul 11 00:19:42.349455 systemd[1]: var-lib-kubelet-pods-0d33e498\x2d1869\x2d4418\x2da4a0\x2d051fdb0311eb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 11 00:19:42.349571 systemd[1]: var-lib-kubelet-pods-0d33e498\x2d1869\x2d4418\x2da4a0\x2d051fdb0311eb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 11 00:19:43.143540 sshd[4275]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:43.153165 systemd[1]: sshd@29-10.0.0.79:22-10.0.0.1:40568.service: Deactivated successfully. Jul 11 00:19:43.155035 systemd[1]: session-30.scope: Deactivated successfully. Jul 11 00:19:43.156658 systemd-logind[1445]: Session 30 logged out. Waiting for processes to exit. 
Jul 11 00:19:43.163719 systemd[1]: Started sshd@30-10.0.0.79:22-10.0.0.1:36308.service - OpenSSH per-connection server daemon (10.0.0.1:36308). Jul 11 00:19:43.164952 systemd-logind[1445]: Removed session 30. Jul 11 00:19:43.206071 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 36308 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:19:43.207769 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:43.213001 systemd-logind[1445]: New session 31 of user core. Jul 11 00:19:43.220292 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 11 00:19:43.359975 kubelet[2516]: I0711 00:19:43.359532 2516 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d33e498-1869-4418-a4a0-051fdb0311eb" path="/var/lib/kubelet/pods/0d33e498-1869-4418-a4a0-051fdb0311eb/volumes" Jul 11 00:19:43.360534 kubelet[2516]: I0711 00:19:43.360490 2516 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac59773-73ec-4e6e-99e8-237bd5089b1c" path="/var/lib/kubelet/pods/7ac59773-73ec-4e6e-99e8-237bd5089b1c/volumes" Jul 11 00:19:43.783826 sshd[4435]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:43.797609 kubelet[2516]: E0711 00:19:43.794073 2516 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d33e498-1869-4418-a4a0-051fdb0311eb" containerName="mount-cgroup" Jul 11 00:19:43.797609 kubelet[2516]: E0711 00:19:43.794125 2516 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d33e498-1869-4418-a4a0-051fdb0311eb" containerName="mount-bpf-fs" Jul 11 00:19:43.797609 kubelet[2516]: E0711 00:19:43.794133 2516 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d33e498-1869-4418-a4a0-051fdb0311eb" containerName="cilium-agent" Jul 11 00:19:43.797609 kubelet[2516]: E0711 00:19:43.794143 2516 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d33e498-1869-4418-a4a0-051fdb0311eb" 
containerName="apply-sysctl-overwrites" Jul 11 00:19:43.797609 kubelet[2516]: E0711 00:19:43.794149 2516 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d33e498-1869-4418-a4a0-051fdb0311eb" containerName="clean-cilium-state" Jul 11 00:19:43.797609 kubelet[2516]: E0711 00:19:43.794156 2516 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ac59773-73ec-4e6e-99e8-237bd5089b1c" containerName="cilium-operator" Jul 11 00:19:43.797609 kubelet[2516]: I0711 00:19:43.794189 2516 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac59773-73ec-4e6e-99e8-237bd5089b1c" containerName="cilium-operator" Jul 11 00:19:43.797609 kubelet[2516]: I0711 00:19:43.794200 2516 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d33e498-1869-4418-a4a0-051fdb0311eb" containerName="cilium-agent" Jul 11 00:19:43.801471 systemd[1]: sshd@30-10.0.0.79:22-10.0.0.1:36308.service: Deactivated successfully. Jul 11 00:19:43.807683 systemd[1]: session-31.scope: Deactivated successfully. Jul 11 00:19:43.813465 systemd-logind[1445]: Session 31 logged out. Waiting for processes to exit. Jul 11 00:19:43.825554 systemd[1]: Started sshd@31-10.0.0.79:22-10.0.0.1:36320.service - OpenSSH per-connection server daemon (10.0.0.1:36320). Jul 11 00:19:43.835397 systemd-logind[1445]: Removed session 31. Jul 11 00:19:43.838827 systemd[1]: Created slice kubepods-burstable-poddc7a9ae2_9d9a_4f20_b1be_1d4dde759f19.slice - libcontainer container kubepods-burstable-poddc7a9ae2_9d9a_4f20_b1be_1d4dde759f19.slice. 
Jul 11 00:19:43.842292 kubelet[2516]: I0711 00:19:43.841483 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-cilium-config-path\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842292 kubelet[2516]: I0711 00:19:43.841530 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-cilium-ipsec-secrets\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842292 kubelet[2516]: I0711 00:19:43.841557 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-host-proc-sys-net\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842292 kubelet[2516]: I0711 00:19:43.841577 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-cilium-run\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842292 kubelet[2516]: I0711 00:19:43.841595 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-bpf-maps\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842292 kubelet[2516]: I0711 00:19:43.841608 2516 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-lib-modules\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842519 kubelet[2516]: I0711 00:19:43.841621 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-cni-path\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842519 kubelet[2516]: I0711 00:19:43.841635 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-host-proc-sys-kernel\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842519 kubelet[2516]: I0711 00:19:43.841650 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-hubble-tls\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842519 kubelet[2516]: I0711 00:19:43.841663 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-cilium-cgroup\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842519 kubelet[2516]: I0711 00:19:43.841685 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-etc-cni-netd\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842519 kubelet[2516]: I0711 00:19:43.841703 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-clustermesh-secrets\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842747 kubelet[2516]: I0711 00:19:43.841717 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-hostproc\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842747 kubelet[2516]: I0711 00:19:43.841734 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-xtables-lock\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.842747 kubelet[2516]: I0711 00:19:43.841753 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8q4\" (UniqueName: \"kubernetes.io/projected/dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19-kube-api-access-hn8q4\") pod \"cilium-cbn5v\" (UID: \"dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19\") " pod="kube-system/cilium-cbn5v" Jul 11 00:19:43.864829 sshd[4448]: Accepted publickey for core from 10.0.0.1 port 36320 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:19:43.867376 sshd[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:43.871998 systemd-logind[1445]: New 
session 32 of user core. Jul 11 00:19:43.884270 systemd[1]: Started session-32.scope - Session 32 of User core. Jul 11 00:19:43.939926 sshd[4448]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:43.961003 systemd[1]: sshd@31-10.0.0.79:22-10.0.0.1:36320.service: Deactivated successfully. Jul 11 00:19:43.963484 systemd[1]: session-32.scope: Deactivated successfully. Jul 11 00:19:43.965431 systemd-logind[1445]: Session 32 logged out. Waiting for processes to exit. Jul 11 00:19:43.975425 systemd[1]: Started sshd@32-10.0.0.79:22-10.0.0.1:36324.service - OpenSSH per-connection server daemon (10.0.0.1:36324). Jul 11 00:19:43.976519 systemd-logind[1445]: Removed session 32. Jul 11 00:19:44.008832 sshd[4460]: Accepted publickey for core from 10.0.0.1 port 36324 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:19:44.010730 sshd[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:44.015577 systemd-logind[1445]: New session 33 of user core. Jul 11 00:19:44.025314 systemd[1]: Started session-33.scope - Session 33 of User core. Jul 11 00:19:44.142549 kubelet[2516]: E0711 00:19:44.142489 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:19:44.143221 containerd[1459]: time="2025-07-11T00:19:44.143166114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbn5v,Uid:dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19,Namespace:kube-system,Attempt:0,}" Jul 11 00:19:44.168141 containerd[1459]: time="2025-07-11T00:19:44.167029318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:19:44.168141 containerd[1459]: time="2025-07-11T00:19:44.168070968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:19:44.168141 containerd[1459]: time="2025-07-11T00:19:44.168091606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:19:44.168373 containerd[1459]: time="2025-07-11T00:19:44.168245778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:19:44.197405 systemd[1]: Started cri-containerd-dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172.scope - libcontainer container dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172. Jul 11 00:19:44.229162 containerd[1459]: time="2025-07-11T00:19:44.229070394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbn5v,Uid:dc7a9ae2-9d9a-4f20-b1be-1d4dde759f19,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\"" Jul 11 00:19:44.230381 kubelet[2516]: E0711 00:19:44.230108 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:19:44.233630 containerd[1459]: time="2025-07-11T00:19:44.233433004Z" level=info msg="CreateContainer within sandbox \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:19:44.251357 containerd[1459]: time="2025-07-11T00:19:44.251279461Z" level=info msg="CreateContainer within sandbox \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"92279d1bae6877ca268c3a79fa172c7d2680ff8be3269a75818a7664e9257d22\"" Jul 11 00:19:44.251939 containerd[1459]: time="2025-07-11T00:19:44.251915614Z" level=info msg="StartContainer for 
\"92279d1bae6877ca268c3a79fa172c7d2680ff8be3269a75818a7664e9257d22\"" Jul 11 00:19:44.283297 systemd[1]: Started cri-containerd-92279d1bae6877ca268c3a79fa172c7d2680ff8be3269a75818a7664e9257d22.scope - libcontainer container 92279d1bae6877ca268c3a79fa172c7d2680ff8be3269a75818a7664e9257d22. Jul 11 00:19:44.317487 containerd[1459]: time="2025-07-11T00:19:44.317416163Z" level=info msg="StartContainer for \"92279d1bae6877ca268c3a79fa172c7d2680ff8be3269a75818a7664e9257d22\" returns successfully" Jul 11 00:19:44.332396 systemd[1]: cri-containerd-92279d1bae6877ca268c3a79fa172c7d2680ff8be3269a75818a7664e9257d22.scope: Deactivated successfully. Jul 11 00:19:44.373531 containerd[1459]: time="2025-07-11T00:19:44.373448477Z" level=info msg="shim disconnected" id=92279d1bae6877ca268c3a79fa172c7d2680ff8be3269a75818a7664e9257d22 namespace=k8s.io Jul 11 00:19:44.373531 containerd[1459]: time="2025-07-11T00:19:44.373521965Z" level=warning msg="cleaning up after shim disconnected" id=92279d1bae6877ca268c3a79fa172c7d2680ff8be3269a75818a7664e9257d22 namespace=k8s.io Jul 11 00:19:44.373531 containerd[1459]: time="2025-07-11T00:19:44.373531433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:19:44.425849 kubelet[2516]: E0711 00:19:44.425629 2516 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:19:44.754802 kubelet[2516]: E0711 00:19:44.754644 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:19:44.756648 containerd[1459]: time="2025-07-11T00:19:44.756495530Z" level=info msg="CreateContainer within sandbox \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:19:44.771481 containerd[1459]: 
time="2025-07-11T00:19:44.771422698Z" level=info msg="CreateContainer within sandbox \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f7f100dbfd187bbaec917d63a31eda2262c918210a45456feadcfd47f03e1c28\"" Jul 11 00:19:44.772030 containerd[1459]: time="2025-07-11T00:19:44.771981564Z" level=info msg="StartContainer for \"f7f100dbfd187bbaec917d63a31eda2262c918210a45456feadcfd47f03e1c28\"" Jul 11 00:19:44.805374 systemd[1]: Started cri-containerd-f7f100dbfd187bbaec917d63a31eda2262c918210a45456feadcfd47f03e1c28.scope - libcontainer container f7f100dbfd187bbaec917d63a31eda2262c918210a45456feadcfd47f03e1c28. Jul 11 00:19:44.837242 containerd[1459]: time="2025-07-11T00:19:44.837181223Z" level=info msg="StartContainer for \"f7f100dbfd187bbaec917d63a31eda2262c918210a45456feadcfd47f03e1c28\" returns successfully" Jul 11 00:19:44.846739 systemd[1]: cri-containerd-f7f100dbfd187bbaec917d63a31eda2262c918210a45456feadcfd47f03e1c28.scope: Deactivated successfully. 
Jul 11 00:19:44.876461 containerd[1459]: time="2025-07-11T00:19:44.876340241Z" level=info msg="shim disconnected" id=f7f100dbfd187bbaec917d63a31eda2262c918210a45456feadcfd47f03e1c28 namespace=k8s.io
Jul 11 00:19:44.876461 containerd[1459]: time="2025-07-11T00:19:44.876431164Z" level=warning msg="cleaning up after shim disconnected" id=f7f100dbfd187bbaec917d63a31eda2262c918210a45456feadcfd47f03e1c28 namespace=k8s.io
Jul 11 00:19:44.876461 containerd[1459]: time="2025-07-11T00:19:44.876441323Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:19:45.758828 kubelet[2516]: E0711 00:19:45.758775 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:45.761504 containerd[1459]: time="2025-07-11T00:19:45.761100492Z" level=info msg="CreateContainer within sandbox \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 11 00:19:45.924283 containerd[1459]: time="2025-07-11T00:19:45.924015526Z" level=info msg="CreateContainer within sandbox \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5cb4a21fad7092ae22d5cd2b99121f1609dd94c7c65c29f8bc63c0e8d42791a\""
Jul 11 00:19:45.925097 containerd[1459]: time="2025-07-11T00:19:45.925029494Z" level=info msg="StartContainer for \"e5cb4a21fad7092ae22d5cd2b99121f1609dd94c7c65c29f8bc63c0e8d42791a\""
Jul 11 00:19:45.976347 systemd[1]: Started cri-containerd-e5cb4a21fad7092ae22d5cd2b99121f1609dd94c7c65c29f8bc63c0e8d42791a.scope - libcontainer container e5cb4a21fad7092ae22d5cd2b99121f1609dd94c7c65c29f8bc63c0e8d42791a.
Jul 11 00:19:46.016385 containerd[1459]: time="2025-07-11T00:19:46.016250243Z" level=info msg="StartContainer for \"e5cb4a21fad7092ae22d5cd2b99121f1609dd94c7c65c29f8bc63c0e8d42791a\" returns successfully"
Jul 11 00:19:46.016519 systemd[1]: cri-containerd-e5cb4a21fad7092ae22d5cd2b99121f1609dd94c7c65c29f8bc63c0e8d42791a.scope: Deactivated successfully.
Jul 11 00:19:46.048052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5cb4a21fad7092ae22d5cd2b99121f1609dd94c7c65c29f8bc63c0e8d42791a-rootfs.mount: Deactivated successfully.
Jul 11 00:19:46.054677 containerd[1459]: time="2025-07-11T00:19:46.054588539Z" level=info msg="shim disconnected" id=e5cb4a21fad7092ae22d5cd2b99121f1609dd94c7c65c29f8bc63c0e8d42791a namespace=k8s.io
Jul 11 00:19:46.054677 containerd[1459]: time="2025-07-11T00:19:46.054669552Z" level=warning msg="cleaning up after shim disconnected" id=e5cb4a21fad7092ae22d5cd2b99121f1609dd94c7c65c29f8bc63c0e8d42791a namespace=k8s.io
Jul 11 00:19:46.054677 containerd[1459]: time="2025-07-11T00:19:46.054678850Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:19:46.355615 kubelet[2516]: E0711 00:19:46.355394 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-n659k" podUID="b4727daa-86d0-433b-b1bf-f76f10310acd"
Jul 11 00:19:46.765966 kubelet[2516]: E0711 00:19:46.765898 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:46.768880 containerd[1459]: time="2025-07-11T00:19:46.768797361Z" level=info msg="CreateContainer within sandbox \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 11 00:19:46.786525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033652705.mount: Deactivated successfully.
Jul 11 00:19:46.789495 containerd[1459]: time="2025-07-11T00:19:46.789411607Z" level=info msg="CreateContainer within sandbox \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"269c057e0ad7b315d006b2d7e040b55c47933e7fce84f3db7b28d08f8c4ecc8d\""
Jul 11 00:19:46.791154 containerd[1459]: time="2025-07-11T00:19:46.791088277Z" level=info msg="StartContainer for \"269c057e0ad7b315d006b2d7e040b55c47933e7fce84f3db7b28d08f8c4ecc8d\""
Jul 11 00:19:46.834573 systemd[1]: Started cri-containerd-269c057e0ad7b315d006b2d7e040b55c47933e7fce84f3db7b28d08f8c4ecc8d.scope - libcontainer container 269c057e0ad7b315d006b2d7e040b55c47933e7fce84f3db7b28d08f8c4ecc8d.
Jul 11 00:19:46.871616 systemd[1]: cri-containerd-269c057e0ad7b315d006b2d7e040b55c47933e7fce84f3db7b28d08f8c4ecc8d.scope: Deactivated successfully.
Jul 11 00:19:46.885203 containerd[1459]: time="2025-07-11T00:19:46.883989324Z" level=info msg="StartContainer for \"269c057e0ad7b315d006b2d7e040b55c47933e7fce84f3db7b28d08f8c4ecc8d\" returns successfully"
Jul 11 00:19:46.889765 containerd[1459]: time="2025-07-11T00:19:46.878187354Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc7a9ae2_9d9a_4f20_b1be_1d4dde759f19.slice/cri-containerd-269c057e0ad7b315d006b2d7e040b55c47933e7fce84f3db7b28d08f8c4ecc8d.scope/memory.events\": no such file or directory"
Jul 11 00:19:46.947660 containerd[1459]: time="2025-07-11T00:19:46.947584654Z" level=info msg="shim disconnected" id=269c057e0ad7b315d006b2d7e040b55c47933e7fce84f3db7b28d08f8c4ecc8d namespace=k8s.io
Jul 11 00:19:46.947660 containerd[1459]: time="2025-07-11T00:19:46.947649336Z" level=warning msg="cleaning up after shim disconnected" id=269c057e0ad7b315d006b2d7e040b55c47933e7fce84f3db7b28d08f8c4ecc8d namespace=k8s.io
Jul 11 00:19:46.947660 containerd[1459]: time="2025-07-11T00:19:46.947659165Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:19:47.772183 kubelet[2516]: E0711 00:19:47.772100 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:47.774893 containerd[1459]: time="2025-07-11T00:19:47.774842521Z" level=info msg="CreateContainer within sandbox \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 00:19:47.801777 containerd[1459]: time="2025-07-11T00:19:47.801711389Z" level=info msg="CreateContainer within sandbox \"dd1a9dd295bf689c257c8b83e6f7d2007634fb443521174028f470f1e3822172\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"08423735cd453ee45fab5bb888c166c8554fb0344672da3d345300d7aaefa5cc\""
Jul 11 00:19:47.803375 containerd[1459]: time="2025-07-11T00:19:47.802328767Z" level=info msg="StartContainer for \"08423735cd453ee45fab5bb888c166c8554fb0344672da3d345300d7aaefa5cc\""
Jul 11 00:19:47.851325 systemd[1]: Started cri-containerd-08423735cd453ee45fab5bb888c166c8554fb0344672da3d345300d7aaefa5cc.scope - libcontainer container 08423735cd453ee45fab5bb888c166c8554fb0344672da3d345300d7aaefa5cc.
Jul 11 00:19:47.894224 containerd[1459]: time="2025-07-11T00:19:47.894152035Z" level=info msg="StartContainer for \"08423735cd453ee45fab5bb888c166c8554fb0344672da3d345300d7aaefa5cc\" returns successfully"
Jul 11 00:19:48.354872 kubelet[2516]: E0711 00:19:48.354580 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-n659k" podUID="b4727daa-86d0-433b-b1bf-f76f10310acd"
Jul 11 00:19:48.435098 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 11 00:19:48.781288 kubelet[2516]: E0711 00:19:48.781250 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:48.947931 kubelet[2516]: I0711 00:19:48.947836 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cbn5v" podStartSLOduration=5.947808706 podStartE2EDuration="5.947808706s" podCreationTimestamp="2025-07-11 00:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:19:48.947735758 +0000 UTC m=+129.713682643" watchObservedRunningTime="2025-07-11 00:19:48.947808706 +0000 UTC m=+129.713755591"
Jul 11 00:19:50.145166 kubelet[2516]: E0711 00:19:50.143787 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:50.355104 kubelet[2516]: E0711 00:19:50.355039 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:51.861753 systemd-networkd[1386]: lxc_health: Link UP
Jul 11 00:19:51.876576 systemd-networkd[1386]: lxc_health: Gained carrier
Jul 11 00:19:52.146374 kubelet[2516]: E0711 00:19:52.145147 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:52.794707 kubelet[2516]: E0711 00:19:52.794656 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:53.569615 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Jul 11 00:19:53.797136 kubelet[2516]: E0711 00:19:53.797077 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:59.356106 kubelet[2516]: E0711 00:19:59.356006 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:19:59.536011 sshd[4460]: pam_unix(sshd:session): session closed for user core
Jul 11 00:19:59.541221 systemd[1]: sshd@32-10.0.0.79:22-10.0.0.1:36324.service: Deactivated successfully.
Jul 11 00:19:59.544479 systemd[1]: session-33.scope: Deactivated successfully.
Jul 11 00:19:59.545896 systemd-logind[1445]: Session 33 logged out. Waiting for processes to exit.
Jul 11 00:19:59.547551 systemd-logind[1445]: Removed session 33.