Jul 6 23:57:19.002001 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:57:19.002032 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:57:19.002046 kernel: BIOS-provided physical RAM map:
Jul 6 23:57:19.002053 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 6 23:57:19.002059 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 6 23:57:19.002066 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:57:19.002074 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jul 6 23:57:19.002081 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jul 6 23:57:19.002087 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:57:19.002097 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:57:19.002105 kernel: NX (Execute Disable) protection: active
Jul 6 23:57:19.002111 kernel: APIC: Static calls initialized
Jul 6 23:57:19.002123 kernel: SMBIOS 2.8 present.
Jul 6 23:57:19.002131 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jul 6 23:57:19.002139 kernel: Hypervisor detected: KVM
Jul 6 23:57:19.002151 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:57:19.002162 kernel: kvm-clock: using sched offset of 3118987368 cycles
Jul 6 23:57:19.002171 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:57:19.002179 kernel: tsc: Detected 2494.140 MHz processor
Jul 6 23:57:19.002187 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:57:19.002196 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:57:19.002203 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jul 6 23:57:19.002211 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:57:19.002219 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:57:19.002231 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:57:19.002238 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jul 6 23:57:19.002246 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:57:19.002254 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:57:19.002261 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:57:19.002269 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 6 23:57:19.002277 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:57:19.002284 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:57:19.002292 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:57:19.002303 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:57:19.002311 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jul 6 23:57:19.002318 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jul 6 23:57:19.002326 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 6 23:57:19.002333 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jul 6 23:57:19.002341 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jul 6 23:57:19.002348 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jul 6 23:57:19.002377 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jul 6 23:57:19.002389 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 6 23:57:19.002400 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 6 23:57:19.002415 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 6 23:57:19.006613 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 6 23:57:19.006661 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jul 6 23:57:19.006676 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jul 6 23:57:19.006708 kernel: Zone ranges:
Jul 6 23:57:19.006717 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:57:19.006726 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jul 6 23:57:19.006735 kernel: Normal empty
Jul 6 23:57:19.006744 kernel: Movable zone start for each node
Jul 6 23:57:19.006754 kernel: Early memory node ranges
Jul 6 23:57:19.006769 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:57:19.006782 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jul 6 23:57:19.006797 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jul 6 23:57:19.006811 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:57:19.006820 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:57:19.006833 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jul 6 23:57:19.006842 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:57:19.006851 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:57:19.006859 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:57:19.006869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:57:19.006877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:57:19.006886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:57:19.006898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:57:19.006907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:57:19.006915 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:57:19.006923 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:57:19.006932 kernel: TSC deadline timer available
Jul 6 23:57:19.006941 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:57:19.006949 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:57:19.006957 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 6 23:57:19.006970 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:57:19.006979 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:57:19.006991 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:57:19.007000 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:57:19.007008 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:57:19.007017 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:57:19.007025 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 6 23:57:19.007036 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:57:19.007045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:57:19.007057 kernel: random: crng init done
Jul 6 23:57:19.007065 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:57:19.007074 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:57:19.007082 kernel: Fallback order for Node 0: 0
Jul 6 23:57:19.007091 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jul 6 23:57:19.007099 kernel: Policy zone: DMA32
Jul 6 23:57:19.007107 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:57:19.007116 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 125152K reserved, 0K cma-reserved)
Jul 6 23:57:19.007125 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:57:19.007140 kernel: Kernel/User page tables isolation: enabled
Jul 6 23:57:19.007149 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:57:19.007157 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:57:19.007166 kernel: Dynamic Preempt: voluntary
Jul 6 23:57:19.007174 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:57:19.007191 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:57:19.007200 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:57:19.007208 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:57:19.007217 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:57:19.007229 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:57:19.007237 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:57:19.007246 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:57:19.007255 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 6 23:57:19.007263 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:57:19.007279 kernel: Console: colour VGA+ 80x25
Jul 6 23:57:19.007287 kernel: printk: console [tty0] enabled
Jul 6 23:57:19.007296 kernel: printk: console [ttyS0] enabled
Jul 6 23:57:19.007304 kernel: ACPI: Core revision 20230628
Jul 6 23:57:19.007313 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:57:19.007326 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:57:19.007334 kernel: x2apic enabled
Jul 6 23:57:19.007342 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:57:19.007351 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:57:19.007360 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jul 6 23:57:19.007368 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jul 6 23:57:19.007376 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 6 23:57:19.007385 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 6 23:57:19.007408 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:57:19.007417 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:57:19.007520 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:57:19.007541 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 6 23:57:19.007554 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:57:19.007567 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:57:19.007580 kernel: MDS: Mitigation: Clear CPU buffers
Jul 6 23:57:19.007594 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:57:19.007608 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:57:19.007621 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:57:19.007630 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:57:19.007649 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:57:19.007659 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:57:19.007668 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 6 23:57:19.007677 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:57:19.007686 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:57:19.007696 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:57:19.007713 kernel: landlock: Up and running.
Jul 6 23:57:19.007726 kernel: SELinux: Initializing.
Jul 6 23:57:19.007743 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:57:19.007757 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:57:19.007769 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jul 6 23:57:19.007784 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:57:19.007796 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:57:19.007805 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:57:19.007814 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jul 6 23:57:19.007827 kernel: signal: max sigframe size: 1776
Jul 6 23:57:19.007836 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:57:19.007846 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:57:19.007855 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:57:19.007864 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:57:19.007873 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:57:19.007882 kernel: .... node #0, CPUs: #1
Jul 6 23:57:19.007891 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:57:19.007904 kernel: smpboot: Max logical packages: 1
Jul 6 23:57:19.007917 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jul 6 23:57:19.007926 kernel: devtmpfs: initialized
Jul 6 23:57:19.007935 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:57:19.007944 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:57:19.007953 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:57:19.007962 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:57:19.007971 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:57:19.007979 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:57:19.007988 kernel: audit: type=2000 audit(1751846238.211:1): state=initialized audit_enabled=0 res=1
Jul 6 23:57:19.008001 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:57:19.008009 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:57:19.008018 kernel: cpuidle: using governor menu
Jul 6 23:57:19.008027 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:57:19.008036 kernel: dca service started, version 1.12.1
Jul 6 23:57:19.008045 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:57:19.008054 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:57:19.008063 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:57:19.008072 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:57:19.008084 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:57:19.008093 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:57:19.008102 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:57:19.008111 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:57:19.008120 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:57:19.008128 kernel: ACPI: Interpreter enabled
Jul 6 23:57:19.008137 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:57:19.008146 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:57:19.008155 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:57:19.008167 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:57:19.008176 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 6 23:57:19.008184 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:57:19.008507 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:57:19.008624 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 6 23:57:19.008725 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 6 23:57:19.008737 kernel: acpiphp: Slot [3] registered
Jul 6 23:57:19.008752 kernel: acpiphp: Slot [4] registered
Jul 6 23:57:19.008761 kernel: acpiphp: Slot [5] registered
Jul 6 23:57:19.008770 kernel: acpiphp: Slot [6] registered
Jul 6 23:57:19.008779 kernel: acpiphp: Slot [7] registered
Jul 6 23:57:19.008788 kernel: acpiphp: Slot [8] registered
Jul 6 23:57:19.008797 kernel: acpiphp: Slot [9] registered
Jul 6 23:57:19.008806 kernel: acpiphp: Slot [10] registered
Jul 6 23:57:19.008815 kernel: acpiphp: Slot [11] registered
Jul 6 23:57:19.008826 kernel: acpiphp: Slot [12] registered
Jul 6 23:57:19.008840 kernel: acpiphp: Slot [13] registered
Jul 6 23:57:19.008857 kernel: acpiphp: Slot [14] registered
Jul 6 23:57:19.008871 kernel: acpiphp: Slot [15] registered
Jul 6 23:57:19.008883 kernel: acpiphp: Slot [16] registered
Jul 6 23:57:19.008893 kernel: acpiphp: Slot [17] registered
Jul 6 23:57:19.008902 kernel: acpiphp: Slot [18] registered
Jul 6 23:57:19.008913 kernel: acpiphp: Slot [19] registered
Jul 6 23:57:19.008926 kernel: acpiphp: Slot [20] registered
Jul 6 23:57:19.008939 kernel: acpiphp: Slot [21] registered
Jul 6 23:57:19.008953 kernel: acpiphp: Slot [22] registered
Jul 6 23:57:19.008971 kernel: acpiphp: Slot [23] registered
Jul 6 23:57:19.008984 kernel: acpiphp: Slot [24] registered
Jul 6 23:57:19.008998 kernel: acpiphp: Slot [25] registered
Jul 6 23:57:19.009012 kernel: acpiphp: Slot [26] registered
Jul 6 23:57:19.009021 kernel: acpiphp: Slot [27] registered
Jul 6 23:57:19.009030 kernel: acpiphp: Slot [28] registered
Jul 6 23:57:19.009039 kernel: acpiphp: Slot [29] registered
Jul 6 23:57:19.009048 kernel: acpiphp: Slot [30] registered
Jul 6 23:57:19.009057 kernel: acpiphp: Slot [31] registered
Jul 6 23:57:19.009070 kernel: PCI host bridge to bus 0000:00
Jul 6 23:57:19.009206 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:57:19.009296 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:57:19.009383 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:57:19.010550 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 6 23:57:19.010697 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 6 23:57:19.010804 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:57:19.010969 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 6 23:57:19.011128 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 6 23:57:19.011308 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 6 23:57:19.012596 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jul 6 23:57:19.012769 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 6 23:57:19.012910 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 6 23:57:19.013012 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 6 23:57:19.013119 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 6 23:57:19.013239 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jul 6 23:57:19.013385 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jul 6 23:57:19.014674 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 6 23:57:19.014816 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 6 23:57:19.014922 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 6 23:57:19.015065 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 6 23:57:19.015171 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 6 23:57:19.015267 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 6 23:57:19.015363 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jul 6 23:57:19.016614 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jul 6 23:57:19.016751 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:57:19.016880 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:57:19.016991 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jul 6 23:57:19.017091 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jul 6 23:57:19.017188 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 6 23:57:19.017310 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:57:19.018485 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jul 6 23:57:19.018674 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jul 6 23:57:19.018809 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 6 23:57:19.018984 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jul 6 23:57:19.019088 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jul 6 23:57:19.019187 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jul 6 23:57:19.019285 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 6 23:57:19.019392 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:57:19.020604 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jul 6 23:57:19.020741 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jul 6 23:57:19.020845 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 6 23:57:19.021003 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:57:19.021152 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jul 6 23:57:19.021253 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jul 6 23:57:19.021389 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jul 6 23:57:19.022707 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jul 6 23:57:19.022866 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jul 6 23:57:19.022970 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jul 6 23:57:19.022983 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:57:19.022993 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:57:19.023002 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:57:19.023011 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:57:19.023019 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 6 23:57:19.023028 kernel: iommu: Default domain type: Translated
Jul 6 23:57:19.023042 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:57:19.023051 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:57:19.023060 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:57:19.023069 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 6 23:57:19.023078 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jul 6 23:57:19.023182 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 6 23:57:19.023282 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 6 23:57:19.023453 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:57:19.023475 kernel: vgaarb: loaded
Jul 6 23:57:19.023485 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:57:19.023494 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:57:19.023503 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:57:19.023512 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:57:19.023522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:57:19.023531 kernel: pnp: PnP ACPI init
Jul 6 23:57:19.023540 kernel: pnp: PnP ACPI: found 4 devices
Jul 6 23:57:19.023549 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:57:19.023561 kernel: NET: Registered PF_INET protocol family
Jul 6 23:57:19.023570 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:57:19.023579 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 6 23:57:19.023588 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:57:19.023597 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:57:19.023606 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 6 23:57:19.023615 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 6 23:57:19.023624 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:57:19.023633 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:57:19.023645 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:57:19.023654 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:57:19.023819 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:57:19.023942 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:57:19.024030 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:57:19.024124 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 6 23:57:19.024210 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 6 23:57:19.024317 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 6 23:57:19.025465 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 6 23:57:19.025491 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 6 23:57:19.025670 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 32167 usecs
Jul 6 23:57:19.025686 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:57:19.025696 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 6 23:57:19.025706 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jul 6 23:57:19.025715 kernel: Initialise system trusted keyrings
Jul 6 23:57:19.025725 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 6 23:57:19.025741 kernel: Key type asymmetric registered
Jul 6 23:57:19.025750 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:57:19.025759 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:57:19.025768 kernel: io scheduler mq-deadline registered
Jul 6 23:57:19.025776 kernel: io scheduler kyber registered
Jul 6 23:57:19.025785 kernel: io scheduler bfq registered
Jul 6 23:57:19.025794 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:57:19.025804 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 6 23:57:19.025813 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 6 23:57:19.025822 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 6 23:57:19.025835 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:57:19.025844 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:57:19.025853 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:57:19.025862 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:57:19.025870 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:57:19.025879 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:57:19.026017 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 6 23:57:19.026112 kernel: rtc_cmos 00:03: registered as rtc0
Jul 6 23:57:19.026208 kernel: rtc_cmos 00:03: setting system clock to 2025-07-06T23:57:18 UTC (1751846238)
Jul 6 23:57:19.026297 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jul 6 23:57:19.026309 kernel: intel_pstate: CPU model not supported
Jul 6 23:57:19.026318 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:57:19.026327 kernel: Segment Routing with IPv6
Jul 6 23:57:19.026336 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:57:19.026344 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:57:19.026353 kernel: Key type dns_resolver registered
Jul 6 23:57:19.026365 kernel: IPI shorthand broadcast: enabled
Jul 6 23:57:19.026378 kernel: sched_clock: Marking stable (998003456, 127120268)->(1256734868, -131611144)
Jul 6 23:57:19.026392 kernel: registered taskstats version 1
Jul 6 23:57:19.026401 kernel: Loading compiled-in X.509 certificates
Jul 6 23:57:19.026410 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:57:19.026419 kernel: Key type .fscrypt registered
Jul 6 23:57:19.028472 kernel: Key type fscrypt-provisioning registered
Jul 6 23:57:19.028500 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:57:19.028509 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:57:19.028525 kernel: ima: No architecture policies found
Jul 6 23:57:19.028534 kernel: clk: Disabling unused clocks
Jul 6 23:57:19.028543 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:57:19.028552 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:57:19.028561 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:57:19.028590 kernel: Run /init as init process
Jul 6 23:57:19.028602 kernel: with arguments:
Jul 6 23:57:19.028612 kernel: /init
Jul 6 23:57:19.028624 kernel: with environment:
Jul 6 23:57:19.028635 kernel: HOME=/
Jul 6 23:57:19.028645 kernel: TERM=linux
Jul 6 23:57:19.028654 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:57:19.028668 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:57:19.028681 systemd[1]: Detected virtualization kvm.
Jul 6 23:57:19.028691 systemd[1]: Detected architecture x86-64.
Jul 6 23:57:19.028701 systemd[1]: Running in initrd.
Jul 6 23:57:19.028711 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:57:19.028723 systemd[1]: Hostname set to <localhost>.
Jul 6 23:57:19.028733 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:57:19.028743 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:57:19.028753 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:57:19.028763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:57:19.028774 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:57:19.028784 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:57:19.028794 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:57:19.028806 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:57:19.028828 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:57:19.028843 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:57:19.028858 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:57:19.028871 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:57:19.028885 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:57:19.028905 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:57:19.028919 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:57:19.028933 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:57:19.028952 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:57:19.028966 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:57:19.028982 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:57:19.028997 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:57:19.029007 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:57:19.029017 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:57:19.029027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:57:19.029037 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:57:19.029047 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:57:19.029057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:57:19.029067 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:57:19.029080 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:57:19.029090 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:57:19.029099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:57:19.029109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:57:19.029119 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:57:19.029129 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:57:19.029139 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:57:19.029152 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:57:19.029207 systemd-journald[182]: Collecting audit messages is disabled.
Jul 6 23:57:19.029234 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:57:19.029246 systemd-journald[182]: Journal started
Jul 6 23:57:19.029268 systemd-journald[182]: Runtime Journal (/run/log/journal/33168a4352bd4ee586c7235fa99edd38) is 4.9M, max 39.3M, 34.4M free.
Jul 6 23:57:19.009999 systemd-modules-load[183]: Inserted module 'overlay'
Jul 6 23:57:19.059979 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:57:19.060034 kernel: Bridge firewalling registered
Jul 6 23:57:19.060053 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:57:19.046517 systemd-modules-load[183]: Inserted module 'br_netfilter'
Jul 6 23:57:19.060642 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:57:19.066370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:57:19.082809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:57:19.084479 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:57:19.086666 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:57:19.099766 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:57:19.121538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:57:19.128663 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:57:19.129413 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:57:19.131667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:57:19.138782 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:57:19.144760 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:57:19.158920 dracut-cmdline[216]: dracut-dracut-053
Jul 6 23:57:19.165920 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:57:19.190239 systemd-resolved[219]: Positive Trust Anchors:
Jul 6 23:57:19.190260 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:57:19.190297 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:57:19.193457 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jul 6 23:57:19.195117 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:57:19.195789 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:57:19.275504 kernel: SCSI subsystem initialized
Jul 6 23:57:19.286474 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:57:19.299495 kernel: iscsi: registered transport (tcp)
Jul 6 23:57:19.324496 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:57:19.324609 kernel: QLogic iSCSI HBA Driver
Jul 6 23:57:19.385593 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:57:19.393904 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:57:19.424615 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:57:19.424720 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:57:19.426093 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:57:19.475523 kernel: raid6: avx2x4 gen() 18241 MB/s
Jul 6 23:57:19.492519 kernel: raid6: avx2x2 gen() 18092 MB/s
Jul 6 23:57:19.509593 kernel: raid6: avx2x1 gen() 10863 MB/s
Jul 6 23:57:19.509725 kernel: raid6: using algorithm avx2x4 gen() 18241 MB/s
Jul 6 23:57:19.527661 kernel: raid6: .... xor() 6225 MB/s, rmw enabled
Jul 6 23:57:19.527767 kernel: raid6: using avx2x2 recovery algorithm
Jul 6 23:57:19.561625 kernel: xor: automatically using best checksumming function avx
Jul 6 23:57:19.759461 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:57:19.775847 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:57:19.782844 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:57:19.811817 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jul 6 23:57:19.819186 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:57:19.827642 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:57:19.862452 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Jul 6 23:57:19.907657 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:57:19.913905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:57:20.011852 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:57:20.021703 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:57:20.046750 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:57:20.049880 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:57:20.051927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:57:20.053151 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:57:20.061132 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:57:20.085588 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:57:20.143345 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jul 6 23:57:20.154100 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:57:20.158287 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:57:20.160997 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 6 23:57:20.161639 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:57:20.166567 kernel: scsi host0: Virtio SCSI HBA
Jul 6 23:57:20.164933 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:57:20.165361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:57:20.165585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:57:20.165984 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:57:20.175852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:57:20.193652 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:57:20.193733 kernel: GPT:9289727 != 125829119
Jul 6 23:57:20.193747 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:57:20.193759 kernel: GPT:9289727 != 125829119
Jul 6 23:57:20.194618 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:57:20.194668 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:57:20.199699 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:57:20.199776 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:57:20.214462 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jul 6 23:57:20.217470 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Jul 6 23:57:20.238465 kernel: libata version 3.00 loaded.
Jul 6 23:57:20.252760 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 6 23:57:20.259722 kernel: scsi host1: ata_piix
Jul 6 23:57:20.260014 kernel: scsi host2: ata_piix
Jul 6 23:57:20.260166 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jul 6 23:57:20.261462 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jul 6 23:57:20.263467 kernel: ACPI: bus type USB registered
Jul 6 23:57:20.265224 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:57:20.272734 kernel: usbcore: registered new interface driver usbfs
Jul 6 23:57:20.274478 kernel: usbcore: registered new interface driver hub
Jul 6 23:57:20.278760 kernel: usbcore: registered new device driver usb
Jul 6 23:57:20.278336 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:57:20.292507 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468)
Jul 6 23:57:20.302370 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:57:20.319457 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (459)
Jul 6 23:57:20.316224 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:57:20.321465 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:57:20.322760 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:57:20.337075 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:57:20.338548 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:57:20.345711 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:57:20.357860 disk-uuid[541]: Primary Header is updated.
Jul 6 23:57:20.357860 disk-uuid[541]: Secondary Entries is updated.
Jul 6 23:57:20.357860 disk-uuid[541]: Secondary Header is updated.
Jul 6 23:57:20.368480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:57:20.381563 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:57:20.508499 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jul 6 23:57:20.508795 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jul 6 23:57:20.510474 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jul 6 23:57:20.513542 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jul 6 23:57:20.519643 kernel: hub 1-0:1.0: USB hub found
Jul 6 23:57:20.522305 kernel: hub 1-0:1.0: 2 ports detected
Jul 6 23:57:21.389501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:57:21.390459 disk-uuid[542]: The operation has completed successfully.
Jul 6 23:57:21.439869 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:57:21.440029 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:57:21.454724 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:57:21.459038 sh[564]: Success
Jul 6 23:57:21.475466 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:57:21.536639 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:57:21.550619 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:57:21.552712 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:57:21.575734 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:57:21.575847 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:57:21.575863 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:57:21.576625 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:57:21.577844 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:57:21.586662 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:57:21.587984 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:57:21.597794 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:57:21.600659 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:57:21.615240 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:57:21.615310 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:57:21.615324 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:57:21.618464 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:57:21.632229 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:57:21.632899 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:57:21.642997 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:57:21.650187 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:57:21.799125 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:57:21.808883 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:57:21.836402 ignition[652]: Ignition 2.19.0
Jul 6 23:57:21.836420 ignition[652]: Stage: fetch-offline
Jul 6 23:57:21.838224 ignition[652]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:57:21.838262 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:57:21.838492 ignition[652]: parsed url from cmdline: ""
Jul 6 23:57:21.838499 ignition[652]: no config URL provided
Jul 6 23:57:21.838508 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:57:21.838520 ignition[652]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:57:21.838529 ignition[652]: failed to fetch config: resource requires networking
Jul 6 23:57:21.838804 ignition[652]: Ignition finished successfully
Jul 6 23:57:21.846276 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:57:21.857332 systemd-networkd[751]: lo: Link UP
Jul 6 23:57:21.857349 systemd-networkd[751]: lo: Gained carrier
Jul 6 23:57:21.861173 systemd-networkd[751]: Enumeration completed
Jul 6 23:57:21.861795 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 6 23:57:21.861802 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jul 6 23:57:21.861981 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:57:21.862964 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:57:21.862971 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:57:21.863860 systemd-networkd[751]: eth0: Link UP
Jul 6 23:57:21.863866 systemd-networkd[751]: eth0: Gained carrier
Jul 6 23:57:21.863878 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jul 6 23:57:21.864614 systemd[1]: Reached target network.target - Network.
Jul 6 23:57:21.867064 systemd-networkd[751]: eth1: Link UP
Jul 6 23:57:21.867071 systemd-networkd[751]: eth1: Gained carrier
Jul 6 23:57:21.867089 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:57:21.872791 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:57:21.883580 systemd-networkd[751]: eth0: DHCPv4 address 64.23.136.149/20, gateway 64.23.128.1 acquired from 169.254.169.253
Jul 6 23:57:21.887557 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.22/20 acquired from 169.254.169.253
Jul 6 23:57:21.903873 ignition[756]: Ignition 2.19.0
Jul 6 23:57:21.903890 ignition[756]: Stage: fetch
Jul 6 23:57:21.904137 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:57:21.904148 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:57:21.904262 ignition[756]: parsed url from cmdline: ""
Jul 6 23:57:21.904266 ignition[756]: no config URL provided
Jul 6 23:57:21.904272 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:57:21.904280 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:57:21.904311 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jul 6 23:57:21.920656 ignition[756]: GET result: OK
Jul 6 23:57:21.920836 ignition[756]: parsing config with SHA512: b9a1aabf827094166bba48e929115f0b0fc277b1fed9217efa4e7be2284032b757a7bbcafb2fa2c65afd640fabf6415fa865f3eaedfd93712ba9573373f11813
Jul 6 23:57:21.926189 unknown[756]: fetched base config from "system"
Jul 6 23:57:21.926222 unknown[756]: fetched base config from "system"
Jul 6 23:57:21.926776 ignition[756]: fetch: fetch complete
Jul 6 23:57:21.926231 unknown[756]: fetched user config from "digitalocean"
Jul 6 23:57:21.926783 ignition[756]: fetch: fetch passed
Jul 6 23:57:21.928584 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:57:21.926848 ignition[756]: Ignition finished successfully
Jul 6 23:57:21.933702 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:57:21.961503 ignition[764]: Ignition 2.19.0
Jul 6 23:57:21.961518 ignition[764]: Stage: kargs
Jul 6 23:57:21.961719 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:57:21.961730 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:57:21.964546 ignition[764]: kargs: kargs passed
Jul 6 23:57:21.964669 ignition[764]: Ignition finished successfully
Jul 6 23:57:21.966448 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:57:21.973764 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:57:21.997079 ignition[770]: Ignition 2.19.0
Jul 6 23:57:21.997091 ignition[770]: Stage: disks
Jul 6 23:57:21.997283 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:57:21.997295 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jul 6 23:57:21.999758 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:57:21.998402 ignition[770]: disks: disks passed
Jul 6 23:57:21.998476 ignition[770]: Ignition finished successfully
Jul 6 23:57:22.003954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:57:22.004820 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:57:22.005620 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:57:22.006521 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:57:22.007044 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:57:22.012744 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:57:22.040033 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:57:22.061464 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:57:22.068600 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:57:22.200464 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:57:22.201617 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:57:22.203395 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:57:22.211589 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:57:22.214561 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:57:22.217697 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jul 6 23:57:22.228478 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (786)
Jul 6 23:57:22.229666 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 6 23:57:22.237463 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:57:22.237505 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:57:22.237527 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:57:22.241508 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:57:22.238348 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:57:22.238396 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:57:22.245096 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:57:22.247201 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:57:22.257862 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:57:22.321178 coreos-metadata[789]: Jul 06 23:57:22.321 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 6 23:57:22.336670 coreos-metadata[789]: Jul 06 23:57:22.335 INFO Fetch successful
Jul 6 23:57:22.345571 coreos-metadata[788]: Jul 06 23:57:22.345 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jul 6 23:57:22.347554 coreos-metadata[789]: Jul 06 23:57:22.346 INFO wrote hostname ci-4081.3.4-b-aec8669192 to /sysroot/etc/hostname
Jul 6 23:57:22.348487 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 6 23:57:22.352037 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:57:22.359351 coreos-metadata[788]: Jul 06 23:57:22.359 INFO Fetch successful
Jul 6 23:57:22.359934 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:57:22.366952 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:57:22.369144 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jul 6 23:57:22.369322 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jul 6 23:57:22.375103 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:57:22.495730 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:57:22.501626 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:57:22.503663 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:57:22.516471 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:57:22.547438 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:57:22.552301 ignition[907]: INFO : Ignition 2.19.0 Jul 6 23:57:22.553258 ignition[907]: INFO : Stage: mount Jul 6 23:57:22.554224 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:57:22.555519 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:57:22.557001 ignition[907]: INFO : mount: mount passed Jul 6 23:57:22.557593 ignition[907]: INFO : Ignition finished successfully Jul 6 23:57:22.559465 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:57:22.563622 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:57:22.573463 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:57:22.588697 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:57:22.614495 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (918) Jul 6 23:57:22.614601 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:57:22.615575 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:57:22.616519 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:57:22.624854 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:57:22.628120 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:57:22.673589 ignition[934]: INFO : Ignition 2.19.0 Jul 6 23:57:22.674552 ignition[934]: INFO : Stage: files Jul 6 23:57:22.675045 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:57:22.675045 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:57:22.676166 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:57:22.677042 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:57:22.677042 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:57:22.680688 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:57:22.681591 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:57:22.681591 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:57:22.681321 unknown[934]: wrote ssh authorized keys file for user: core Jul 6 23:57:22.684079 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:57:22.684079 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 6 23:57:22.743594 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:57:22.940521 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:57:22.940521 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:57:22.942178 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 6 23:57:23.419489 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:57:23.490868 systemd-networkd[751]: eth1: Gained IPv6LL Jul 6 23:57:23.510694 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:57:23.511475 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:57:23.511475 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:57:23.511475 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:57:23.511475 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:57:23.511475 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:57:23.514554 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:57:23.514554 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:57:23.517501 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:57:23.517501 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:57:23.521985 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:57:23.521985 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:57:23.521985 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:57:23.521985 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:57:23.521985 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 6 23:57:23.746826 systemd-networkd[751]: eth0: Gained IPv6LL Jul 6 23:57:24.177063 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:57:24.684467 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:57:24.684467 ignition[934]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:57:24.686250 ignition[934]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:57:24.686250 ignition[934]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:57:24.686250 ignition[934]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 6 23:57:24.686250 ignition[934]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:57:24.690605 ignition[934]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:57:24.690605 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:57:24.690605 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:57:24.690605 ignition[934]: INFO : files: files passed Jul 6 23:57:24.690605 ignition[934]: INFO : Ignition finished successfully Jul 6 23:57:24.689109 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:57:24.706872 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:57:24.710679 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:57:24.714091 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:57:24.714678 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:57:24.735479 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:57:24.735479 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:57:24.738637 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:57:24.741022 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:57:24.742709 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:57:24.751753 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:57:24.789927 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:57:24.790115 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:57:24.792810 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:57:24.793511 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:57:24.794651 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:57:24.796872 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:57:24.833831 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:57:24.840824 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:57:24.862697 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:57:24.863286 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:57:24.863818 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:57:24.864934 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:57:24.865131 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:57:24.866578 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:57:24.867177 systemd[1]: Stopped target basic.target - Basic System. 
Jul 6 23:57:24.867853 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:57:24.868605 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:57:24.869242 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:57:24.870059 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:57:24.870733 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:57:24.871479 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:57:24.872252 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:57:24.873094 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:57:24.873865 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:57:24.874006 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:57:24.874975 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:57:24.875668 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:57:24.876333 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:57:24.876470 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:57:24.877048 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:57:24.877216 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:57:24.878085 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:57:24.878244 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:57:24.879019 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:57:24.879201 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:57:24.879745 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:57:24.879859 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:57:24.890382 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:57:24.890875 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:57:24.891105 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:57:24.893779 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:57:24.897149 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:57:24.897448 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:57:24.900664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:57:24.901112 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:57:24.909452 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:57:24.909590 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 6 23:57:24.923262 ignition[987]: INFO : Ignition 2.19.0 Jul 6 23:57:24.923262 ignition[987]: INFO : Stage: umount Jul 6 23:57:24.923262 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:57:24.923262 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 6 23:57:24.923262 ignition[987]: INFO : umount: umount passed Jul 6 23:57:24.923262 ignition[987]: INFO : Ignition finished successfully Jul 6 23:57:24.929971 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:57:24.930128 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:57:24.932366 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:57:24.933404 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:57:24.935254 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:57:24.948378 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:57:24.948498 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:57:24.949121 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:57:24.949199 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:57:24.949969 systemd[1]: Stopped target network.target - Network. Jul 6 23:57:24.956007 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:57:24.956133 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:57:24.957437 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:57:24.958295 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:57:24.966325 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:57:24.971154 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:57:24.971667 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:57:24.972228 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:57:24.972307 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:57:24.973596 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:57:24.973670 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:57:24.974533 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:57:24.974621 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:57:24.975388 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:57:24.975488 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:57:24.976659 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:57:24.977776 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:57:24.979113 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:57:24.979272 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:57:24.982014 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:57:24.982107 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:57:24.982192 systemd-networkd[751]: eth1: DHCPv6 lease lost Jul 6 23:57:24.984550 systemd-networkd[751]: eth0: DHCPv6 lease lost Jul 6 23:57:24.986681 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:57:24.986863 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:57:24.992062 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jul 6 23:57:24.992276 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:57:24.995000 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:57:24.995082 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:57:24.999667 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:57:25.000235 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:57:25.000340 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:57:25.000975 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:57:25.001048 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:57:25.001663 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:57:25.001738 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:57:25.002710 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:57:25.002779 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:57:25.003963 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:57:25.024069 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:57:25.024391 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:57:25.027523 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:57:25.027638 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:57:25.028512 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:57:25.028570 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:57:25.029491 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:57:25.029572 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:57:25.031015 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:57:25.031106 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:57:25.033594 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:57:25.033686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:57:25.044822 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:57:25.046029 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:57:25.046140 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:57:25.046779 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:57:25.046858 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:57:25.048074 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:57:25.048245 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:57:25.057542 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:57:25.057738 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:57:25.058699 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:57:25.063700 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:57:25.080510 systemd[1]: Switching root. 
Jul 6 23:57:25.122771 systemd-journald[182]: Journal stopped Jul 6 23:57:26.594264 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Jul 6 23:57:26.594420 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:57:26.594479 kernel: SELinux: policy capability open_perms=1 Jul 6 23:57:26.594502 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:57:26.594523 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:57:26.594545 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:57:26.594579 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:57:26.594601 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:57:26.594620 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:57:26.594638 kernel: audit: type=1403 audit(1751846245.299:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:57:26.594661 systemd[1]: Successfully loaded SELinux policy in 46.296ms. Jul 6 23:57:26.594706 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.695ms. Jul 6 23:57:26.594751 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:57:26.594775 systemd[1]: Detected virtualization kvm. Jul 6 23:57:26.594806 systemd[1]: Detected architecture x86-64. Jul 6 23:57:26.594830 systemd[1]: Detected first boot. Jul 6 23:57:26.594852 systemd[1]: Hostname set to <ci-4081.3.4-b-aec8669192>. Jul 6 23:57:26.594874 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:57:26.594905 zram_generator::config[1031]: No configuration found. Jul 6 23:57:26.594940 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:57:26.594965 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:57:26.594988 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:57:26.595020 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:57:26.595047 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:57:26.595071 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:57:26.595096 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:57:26.595120 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:57:26.595145 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:57:26.595187 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:57:26.595212 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:57:26.595244 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:57:26.595267 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:57:26.595292 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:57:26.595316 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:57:26.595338 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
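[Editor's note] "Initializing machine ID from VM UUID" above: on KVM guests, systemd can seed /etc/machine-id from the hypervisor-provided DMI product UUID. A rough approximation of that derivation; systemd's real logic lives in C, consults more sources, and handles more edge cases, so the file path and normalization here are hedged assumptions.

```python
# Approximate machine-id derivation from the SMBIOS/DMI product UUID.
def machine_id_from_vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    with open(path) as f:
        uuid = f.read().strip()
    # A machine-id is 32 lowercase hex digits; drop the UUID's dashes.
    machine_id = uuid.replace("-", "").lower()
    if len(machine_id) != 32 or any(c not in "0123456789abcdef" for c in machine_id):
        raise ValueError(f"unexpected product_uuid format: {uuid!r}")
    return machine_id
```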
Jul 6 23:57:26.595361 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:57:26.595386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:57:26.595412 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:57:26.595436 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:57:26.595512 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:57:26.595539 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:57:26.595572 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:57:26.595597 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:57:26.595630 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:57:26.595654 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:57:26.595679 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:57:26.595709 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:57:26.595731 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:57:26.595754 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:57:26.595774 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:57:26.595797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:57:26.595818 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:57:26.595839 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:57:26.595861 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:57:26.595880 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:57:26.595908 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:57:26.595929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:57:26.595952 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:57:26.595972 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:57:26.595994 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:57:26.596019 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:57:26.596042 systemd[1]: Reached target machines.target - Containers. Jul 6 23:57:26.596066 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:57:26.596098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:57:26.596124 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:57:26.596150 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:57:26.596171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:57:26.596198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:57:26.596222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 6 23:57:26.596247 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:57:26.596274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:57:26.596299 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:57:26.596327 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:57:26.596351 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:57:26.596372 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:57:26.596394 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:57:26.596416 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:57:26.596554 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:57:26.596583 kernel: loop: module loaded Jul 6 23:57:26.596608 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:57:26.596632 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:57:26.596666 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:57:26.596688 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:57:26.596709 systemd[1]: Stopped verity-setup.service. Jul 6 23:57:26.596732 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:57:26.596755 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:57:26.596778 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:57:26.596801 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:57:26.597062 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:57:26.597093 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:57:26.597117 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:57:26.597143 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:57:26.597168 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:57:26.597364 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:57:26.597405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:57:26.597445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:57:26.597470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:57:26.597495 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:57:26.597518 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:57:26.597549 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:57:26.597574 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:57:26.597599 kernel: fuse: init (API version 7.39) Jul 6 23:57:26.597794 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:57:26.597818 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:57:26.597843 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 6 23:57:26.597866 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:57:26.597890 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:57:26.598025 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:57:26.598059 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:57:26.598086 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:57:26.598110 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:57:26.598261 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:57:26.598288 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:57:26.598312 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:57:26.598331 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:57:26.598355 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:57:26.598375 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:57:26.598404 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:57:26.598438 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:57:26.598468 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:57:26.598492 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:57:26.598517 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:57:26.598615 systemd-journald[1104]: Collecting audit messages is disabled. Jul 6 23:57:26.598677 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:57:26.598703 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:57:26.598728 systemd-journald[1104]: Journal started Jul 6 23:57:26.598780 systemd-journald[1104]: Runtime Journal (/run/log/journal/33168a4352bd4ee586c7235fa99edd38) is 4.9M, max 39.3M, 34.4M free. Jul 6 23:57:26.088649 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:57:26.112277 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 6 23:57:26.112958 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:57:26.601671 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:57:26.661847 kernel: ACPI: bus type drm_connector registered Jul 6 23:57:26.662972 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:57:26.663237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:57:26.695808 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:57:26.698057 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:57:26.701983 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:57:26.712508 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jul 6 23:57:26.725709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:57:26.740229 kernel: loop0: detected capacity change from 0 to 224512 Jul 6 23:57:26.750948 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:57:26.763773 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:57:26.789618 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:57:26.789846 systemd-journald[1104]: Time spent on flushing to /var/log/journal/33168a4352bd4ee586c7235fa99edd38 is 68.544ms for 995 entries. Jul 6 23:57:26.789846 systemd-journald[1104]: System Journal (/var/log/journal/33168a4352bd4ee586c7235fa99edd38) is 8.0M, max 195.6M, 187.6M free. Jul 6 23:57:26.874762 systemd-journald[1104]: Received client request to flush runtime journal. Jul 6 23:57:26.874833 kernel: loop1: detected capacity change from 0 to 8 Jul 6 23:57:26.815808 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:57:26.819605 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:57:26.829707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:57:26.840624 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:57:26.887943 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:57:26.895508 kernel: loop2: detected capacity change from 0 to 140768 Jul 6 23:57:26.944150 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:57:26.954387 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:57:26.966220 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 6 23:57:26.993468 kernel: loop3: detected capacity change from 0 to 142488 Jul 6 23:57:27.033551 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jul 6 23:57:27.033586 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jul 6 23:57:27.039464 kernel: loop4: detected capacity change from 0 to 224512 Jul 6 23:57:27.059610 kernel: loop5: detected capacity change from 0 to 8 Jul 6 23:57:27.062535 kernel: loop6: detected capacity change from 0 to 140768 Jul 6 23:57:27.077283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:57:27.114692 kernel: loop7: detected capacity change from 0 to 142488 Jul 6 23:57:27.136220 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jul 6 23:57:27.140571 (sd-merge)[1177]: Merged extensions into '/usr'. Jul 6 23:57:27.153718 systemd[1]: Reloading requested from client PID 1127 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:57:27.153747 systemd[1]: Reloading... Jul 6 23:57:27.381523 zram_generator::config[1204]: No configuration found. Jul 6 23:57:27.450537 ldconfig[1120]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:57:27.583766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:57:27.658361 systemd[1]: Reloading finished in 503 ms. Jul 6 23:57:27.687864 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
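[Editor's note] The (sd-merge) lines above are systemd-sysext discovering the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean') and merging them over /usr. A sketch of just the discovery half, scanning the directories systemd-sysext is documented to search; the merge itself requires privileged overlay mounts and is omitted.

```python
# Sketch of sysext image discovery, per systemd-sysext's documented search dirs.
import os

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[str]:
    found = []
    for d in SEARCH_DIRS:
        if not os.path.isdir(d):
            continue
        for entry in sorted(os.listdir(d)):
            # Raw disk images or plain directory trees both count.
            if entry.endswith(".raw") or os.path.isdir(os.path.join(d, entry)):
                found.append(os.path.join(d, entry))
    return found

# e.g. the kubernetes.raw symlink written by the files stage earlier would show up here.
```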
Jul 6 23:57:27.693356 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:57:27.703994 systemd[1]: Starting ensure-sysext.service... Jul 6 23:57:27.716722 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:57:27.738735 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:57:27.738763 systemd[1]: Reloading... Jul 6 23:57:27.780173 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:57:27.780874 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:57:27.782331 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:57:27.783084 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jul 6 23:57:27.783319 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jul 6 23:57:27.789042 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:57:27.789091 systemd-tmpfiles[1248]: Skipping /boot Jul 6 23:57:27.819564 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:57:27.819581 systemd-tmpfiles[1248]: Skipping /boot Jul 6 23:57:27.919462 zram_generator::config[1271]: No configuration found. Jul 6 23:57:28.149363 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:57:28.330073 systemd[1]: Reloading finished in 590 ms. Jul 6 23:57:28.373752 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:57:28.376178 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:57:28.419234 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:57:28.439294 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:57:28.445366 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:57:28.473719 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:57:28.485785 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:57:28.500833 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:57:28.516141 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:57:28.516517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:57:28.522902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:57:28.526734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:57:28.530881 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:57:28.531449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:57:28.531578 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 6 23:57:28.536011 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:57:28.536221 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:57:28.536403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:57:28.536554 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:57:28.541246 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:57:28.542330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:57:28.546885 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:57:28.548808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:57:28.549026 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:57:28.555858 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:57:28.559327 systemd[1]: Finished ensure-sysext.service. Jul 6 23:57:28.570939 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:57:28.606473 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:57:28.632319 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:57:28.633613 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:57:28.634556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:57:28.634917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:57:28.644230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:57:28.644885 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:57:28.648459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:57:28.657576 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:57:28.658556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:57:28.660839 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:57:28.665836 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:57:28.666751 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:57:28.675140 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Jul 6 23:57:28.681453 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:57:28.688740 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:57:28.706205 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jul 6 23:57:28.722079 augenrules[1357]: No rules Jul 6 23:57:28.723547 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:57:28.726040 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:57:28.732777 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:57:28.742511 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:57:28.870173 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:57:28.873355 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:57:28.875213 systemd-networkd[1368]: lo: Link UP Jul 6 23:57:28.875222 systemd-networkd[1368]: lo: Gained carrier Jul 6 23:57:28.876569 systemd-networkd[1368]: Enumeration completed Jul 6 23:57:28.876684 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:57:28.877382 systemd-resolved[1324]: Positive Trust Anchors: Jul 6 23:57:28.877414 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:57:28.877871 systemd-timesyncd[1337]: No network connectivity, watching for changes. Jul 6 23:57:28.878747 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:57:28.882698 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:57:28.892087 systemd-resolved[1324]: Using system hostname 'ci-4081.3.4-b-aec8669192'. Jul 6 23:57:28.898752 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:57:28.901219 systemd[1]: Reached target network.target - Network. Jul 6 23:57:28.901837 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:57:28.902545 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:57:28.986470 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1373) Jul 6 23:57:28.992628 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jul 6 23:57:28.993176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:57:28.993509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:57:29.001738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:57:29.010760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:57:29.016856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:57:29.017392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 6 23:57:29.017484 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:57:29.017503 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:57:29.018010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:57:29.018745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:57:29.023486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:57:29.024842 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:57:29.026965 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:57:29.031222 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:57:29.031586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:57:29.042339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:57:29.049581 kernel: ISO 9660 Extensions: RRIP_1991A Jul 6 23:57:29.053521 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jul 6 23:57:29.100417 systemd-networkd[1368]: eth0: Configuring with /run/systemd/network/10-76:8d:dd:ee:d5:f9.network. Jul 6 23:57:29.102554 systemd-networkd[1368]: eth0: Link UP Jul 6 23:57:29.102563 systemd-networkd[1368]: eth0: Gained carrier Jul 6 23:57:29.109564 systemd-timesyncd[1337]: Network configuration changed, trying to establish connection. Jul 6 23:57:29.120496 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 6 23:57:29.122061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:57:29.947632 systemd-timesyncd[1337]: Contacted time server 24.144.88.190:123 (2.flatcar.pool.ntp.org). Jul 6 23:57:29.947838 systemd-timesyncd[1337]: Initial clock synchronization to Sun 2025-07-06 23:57:29.947482 UTC. Jul 6 23:57:29.947964 systemd-resolved[1324]: Clock change detected. Flushing caches. Jul 6 23:57:29.958934 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:57:29.967727 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:57:29.981019 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 6 23:57:29.988768 systemd-networkd[1368]: eth1: Configuring with /run/systemd/network/10-52:9d:7a:64:b3:b5.network. Jul 6 23:57:29.990740 systemd-networkd[1368]: eth1: Link UP Jul 6 23:57:29.990745 systemd-networkd[1368]: eth1: Gained carrier Jul 6 23:57:29.996002 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
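[Editor's note] eth0 and eth1 above are matched by MAC-keyed .network units under /run/systemd/network/ (e.g. 10-76:8d:dd:ee:d5:f9.network). A sketch of generating such a unit: [Match] MACAddress= and [Network] DHCP= are standard systemd.network options, and the DHCPv4 choice mirrors the leases seen earlier in the log; the generator shown is illustrative, not the actual Flatcar tooling.

```python
# Sketch: write a MAC-keyed systemd-networkd unit like the ones named above.
UNIT_TEMPLATE = """[Match]
MACAddress={mac}

[Network]
DHCP=ipv4
"""

def write_network_unit(mac: str, rundir: str = "/run/systemd/network") -> str:
    path = f"{rundir}/10-{mac}.network"
    with open(path, "w") as f:
        f.write(UNIT_TEMPLATE.format(mac=mac))
    return path

# write_network_unit("76:8d:dd:ee:d5:f9") would produce the unit named in the log.
```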
Jul 6 23:57:30.017809 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 6 23:57:30.020961 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 6 23:57:30.021110 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 6 23:57:30.037920 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:57:30.038824 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 6 23:57:30.038958 kernel: [drm] features: -context_init Jul 6 23:57:30.056934 kernel: [drm] number of scanouts: 1 Jul 6 23:57:30.057039 kernel: [drm] number of cap sets: 0 Jul 6 23:57:30.094243 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jul 6 23:57:30.147687 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jul 6 23:57:30.147830 kernel: Console: switching to colour frame buffer device 128x48 Jul 6 23:57:30.151699 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:57:30.156703 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 6 23:57:30.161567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:57:30.175960 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:57:30.176282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:57:30.183152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:57:30.228235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:57:30.228562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:57:30.295331 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:57:30.359038 kernel: EDAC MC: Ver: 3.0.0 Jul 6 23:57:30.389446 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:57:30.398077 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:57:30.420178 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:57:30.420676 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:57:30.459568 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:57:30.461135 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:57:30.461312 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:57:30.461642 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:57:30.461827 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:57:30.462225 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:57:30.462438 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:57:30.462616 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:57:30.463283 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:57:30.463430 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:57:30.463602 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:57:30.465877 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jul 6 23:57:30.469851 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:57:30.478428 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:57:30.482380 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:57:30.486707 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:57:30.489280 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:57:30.490091 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:57:30.492861 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:57:30.492911 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:57:30.496868 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:57:30.508805 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:57:30.517990 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:57:30.523947 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:57:30.533246 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:57:30.549031 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:57:30.549875 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:57:30.559897 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:57:30.561361 coreos-metadata[1434]: Jul 06 23:57:30.561 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 6 23:57:30.567872 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:57:30.574975 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:57:30.578696 coreos-metadata[1434]: Jul 06 23:57:30.577 INFO Fetch successful Jul 6 23:57:30.588483 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:57:30.603171 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:57:30.604610 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:57:30.607052 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:57:30.617997 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:57:30.619100 jq[1438]: false Jul 6 23:57:30.625915 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:57:30.629346 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:57:30.639206 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:57:30.640389 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:57:30.640997 dbus-daemon[1435]: [system] SELinux support is enabled Jul 6 23:57:30.650609 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:57:30.660816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:57:30.661756 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 6 23:57:30.685057 (ntainerd)[1451]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:57:30.694334 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:57:30.694401 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:57:30.695296 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:57:30.695442 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jul 6 23:57:30.695471 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:57:30.703633 systemd-logind[1445]: New seat seat0. Jul 6 23:57:30.706395 extend-filesystems[1439]: Found loop4 Jul 6 23:57:30.706395 extend-filesystems[1439]: Found loop5 Jul 6 23:57:30.706395 extend-filesystems[1439]: Found loop6 Jul 6 23:57:30.706395 extend-filesystems[1439]: Found loop7 Jul 6 23:57:30.706395 extend-filesystems[1439]: Found vda Jul 6 23:57:30.706395 extend-filesystems[1439]: Found vda1 Jul 6 23:57:30.706395 extend-filesystems[1439]: Found vda2 Jul 6 23:57:30.706395 extend-filesystems[1439]: Found vda3 Jul 6 23:57:30.706395 extend-filesystems[1439]: Found usr Jul 6 23:57:30.706395 extend-filesystems[1439]: Found vda4 Jul 6 23:57:30.706395 extend-filesystems[1439]: Found vda6 Jul 6 23:57:30.706395 extend-filesystems[1439]: Found vda7 Jul 6 23:57:30.706395 extend-filesystems[1439]: Found vda9 Jul 6 23:57:30.706395 extend-filesystems[1439]: Checking size of /dev/vda9 Jul 6 23:57:30.839834 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jul 6 23:57:30.748780 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:57:30.840021 update_engine[1446]: I20250706 23:57:30.795571 1446 main.cc:92] Flatcar Update Engine starting Jul 6 23:57:30.840021 update_engine[1446]: I20250706 23:57:30.814965 1446 update_check_scheduler.cc:74] Next update check in 5m19s Jul 6 23:57:30.840482 tar[1450]: linux-amd64/LICENSE Jul 6 23:57:30.840482 tar[1450]: linux-amd64/helm Jul 6 23:57:30.840840 extend-filesystems[1439]: Resized partition /dev/vda9 Jul 6 23:57:30.748804 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:57:30.875720 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:57:30.749106 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:57:30.889564 jq[1447]: true Jul 6 23:57:30.812549 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:57:30.890034 jq[1473]: true Jul 6 23:57:30.825191 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:57:30.894972 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:57:30.895235 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:57:30.913871 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:57:30.921994 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 6 23:57:30.984701 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1376) Jul 6 23:57:31.013414 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 6 23:57:31.037393 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:57:31.037393 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 6 23:57:31.037393 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 6 23:57:31.037213 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:57:31.052320 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Jul 6 23:57:31.052320 extend-filesystems[1439]: Found vdb Jul 6 23:57:31.039564 systemd-networkd[1368]: eth1: Gained IPv6LL Jul 6 23:57:31.039886 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:57:31.048338 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:57:31.075417 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:57:31.057298 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:57:31.068036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:31.073960 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:57:31.088206 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:57:31.090373 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:57:31.111031 systemd[1]: Starting sshkeys.service... Jul 6 23:57:31.156236 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:57:31.161090 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:57:31.168768 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:57:31.181160 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:57:31.201676 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:57:31.213104 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:57:31.215266 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:57:31.215454 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:57:31.230381 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:57:31.311113 coreos-metadata[1530]: Jul 06 23:57:31.310 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 6 23:57:31.314537 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:57:31.328387 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:57:31.342798 coreos-metadata[1530]: Jul 06 23:57:31.337 INFO Fetch successful Jul 6 23:57:31.338457 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:57:31.339632 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:57:31.365029 unknown[1530]: wrote ssh authorized keys file for user: core Jul 6 23:57:31.432402 update-ssh-keys[1543]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:57:31.434664 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:57:31.439851 systemd[1]: Finished sshkeys.service. 
Jul 6 23:57:31.509180 containerd[1451]: time="2025-07-06T23:57:31.506901267Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:57:31.577128 containerd[1451]: time="2025-07-06T23:57:31.577065038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:31.584720 containerd[1451]: time="2025-07-06T23:57:31.584602771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:57:31.584900 containerd[1451]: time="2025-07-06T23:57:31.584876778Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:57:31.584977 containerd[1451]: time="2025-07-06T23:57:31.584963879Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:57:31.586825 containerd[1451]: time="2025-07-06T23:57:31.586782538Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:57:31.586989 containerd[1451]: time="2025-07-06T23:57:31.586971838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:31.587209 containerd[1451]: time="2025-07-06T23:57:31.587177115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:57:31.587283 containerd[1451]: time="2025-07-06T23:57:31.587270224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:31.587628 containerd[1451]: time="2025-07-06T23:57:31.587602946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:57:31.587750 containerd[1451]: time="2025-07-06T23:57:31.587735103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:31.587828 containerd[1451]: time="2025-07-06T23:57:31.587814596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:57:31.587903 containerd[1451]: time="2025-07-06T23:57:31.587888525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:31.588127 containerd[1451]: time="2025-07-06T23:57:31.588100046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:31.588633 containerd[1451]: time="2025-07-06T23:57:31.588607638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:31.588922 containerd[1451]: time="2025-07-06T23:57:31.588902906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:57:31.588999 containerd[1451]: time="2025-07-06T23:57:31.588987115Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:57:31.589171 containerd[1451]: time="2025-07-06T23:57:31.589156946Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:57:31.589297 containerd[1451]: time="2025-07-06T23:57:31.589277751Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:57:31.594676 containerd[1451]: time="2025-07-06T23:57:31.594115542Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:57:31.594676 containerd[1451]: time="2025-07-06T23:57:31.594221136Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:57:31.594676 containerd[1451]: time="2025-07-06T23:57:31.594241383Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:57:31.594676 containerd[1451]: time="2025-07-06T23:57:31.594256074Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:57:31.594676 containerd[1451]: time="2025-07-06T23:57:31.594271442Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:57:31.594676 containerd[1451]: time="2025-07-06T23:57:31.594449127Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:57:31.595229 containerd[1451]: time="2025-07-06T23:57:31.595199281Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595406380Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595439654Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595453754Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595468895Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595498072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595512615Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595526344Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595540358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595553817Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595575787Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595587331Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595606696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595619509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.596511 containerd[1451]: time="2025-07-06T23:57:31.595643367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595672966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595685039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595698615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595710164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595722927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595747247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595761661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595803584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595896231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595910974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595929197Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595951380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595972388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 6 23:57:31.597142 containerd[1451]: time="2025-07-06T23:57:31.595984832Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:57:31.597468 containerd[1451]: time="2025-07-06T23:57:31.596069608Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:57:31.597468 containerd[1451]: time="2025-07-06T23:57:31.596093141Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:57:31.597468 containerd[1451]: time="2025-07-06T23:57:31.596105009Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:57:31.597468 containerd[1451]: time="2025-07-06T23:57:31.596188553Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:57:31.597468 containerd[1451]: time="2025-07-06T23:57:31.596200109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597468 containerd[1451]: time="2025-07-06T23:57:31.596214293Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:57:31.597468 containerd[1451]: time="2025-07-06T23:57:31.596226076Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:57:31.597468 containerd[1451]: time="2025-07-06T23:57:31.596236456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:57:31.597723 containerd[1451]: time="2025-07-06T23:57:31.596725567Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:57:31.597723 containerd[1451]: time="2025-07-06T23:57:31.596844922Z" level=info msg="Connect containerd service" Jul 6 23:57:31.597723 containerd[1451]: time="2025-07-06T23:57:31.596908181Z" level=info msg="using legacy CRI server" Jul 6 23:57:31.597723 containerd[1451]: time="2025-07-06T23:57:31.596916768Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:57:31.597723 containerd[1451]: time="2025-07-06T23:57:31.597103574Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:57:31.601685 containerd[1451]: time="2025-07-06T23:57:31.599988659Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:57:31.601823 containerd[1451]: time="2025-07-06T23:57:31.601754773Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:57:31.602426 containerd[1451]: time="2025-07-06T23:57:31.601888974Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:57:31.608129 containerd[1451]: time="2025-07-06T23:57:31.607845351Z" level=info msg="Start subscribing containerd event" Jul 6 23:57:31.608129 containerd[1451]: time="2025-07-06T23:57:31.608141802Z" level=info msg="Start recovering state" Jul 6 23:57:31.608320 containerd[1451]: time="2025-07-06T23:57:31.608261717Z" level=info msg="Start event monitor" Jul 6 23:57:31.608320 containerd[1451]: time="2025-07-06T23:57:31.608296786Z" level=info msg="Start snapshots syncer" Jul 6 23:57:31.608320 containerd[1451]: time="2025-07-06T23:57:31.608314708Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:57:31.608447 containerd[1451]: time="2025-07-06T23:57:31.608325611Z" level=info msg="Start streaming server" Jul 6 23:57:31.611349 containerd[1451]: time="2025-07-06T23:57:31.608869984Z" level=info msg="containerd successfully booted in 0.103541s" Jul 6 23:57:31.609022 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:57:31.804130 systemd-networkd[1368]: eth0: Gained IPv6LL Jul 6 23:57:31.964388 tar[1450]: linux-amd64/README.md Jul 6 23:57:31.983698 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:57:32.586940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:32.587396 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:57:32.588258 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 6 23:57:32.590291 systemd[1]: Startup finished in 1.160s (kernel) + 6.565s (initrd) + 6.510s (userspace) = 14.236s. Jul 6 23:57:33.214418 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:57:33.222998 systemd[1]: Started sshd@0-64.23.136.149:22-139.178.89.65:40526.service - OpenSSH per-connection server daemon (139.178.89.65:40526). Jul 6 23:57:33.250532 kubelet[1558]: E0706 23:57:33.250376 1558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:57:33.254166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:57:33.254645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:57:33.255187 systemd[1]: kubelet.service: Consumed 1.336s CPU time. Jul 6 23:57:33.304307 sshd[1569]: Accepted publickey for core from 139.178.89.65 port 40526 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:57:33.307469 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:33.319004 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:57:33.328212 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:57:33.331745 systemd-logind[1445]: New session 1 of user core. Jul 6 23:57:33.353440 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:57:33.360107 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:57:33.376126 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:57:33.508202 systemd[1574]: Queued start job for default target default.target. Jul 6 23:57:33.518061 systemd[1574]: Created slice app.slice - User Application Slice. Jul 6 23:57:33.518096 systemd[1574]: Reached target paths.target - Paths. Jul 6 23:57:33.518121 systemd[1574]: Reached target timers.target - Timers. Jul 6 23:57:33.519933 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:57:33.535585 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:57:33.535745 systemd[1574]: Reached target sockets.target - Sockets. Jul 6 23:57:33.535773 systemd[1574]: Reached target basic.target - Basic System. Jul 6 23:57:33.535997 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:57:33.537417 systemd[1574]: Reached target default.target - Main User Target. Jul 6 23:57:33.537505 systemd[1574]: Startup finished in 148ms. Jul 6 23:57:33.547956 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:57:33.626117 systemd[1]: Started sshd@1-64.23.136.149:22-139.178.89.65:40534.service - OpenSSH per-connection server daemon (139.178.89.65:40534). Jul 6 23:57:33.673303 sshd[1585]: Accepted publickey for core from 139.178.89.65 port 40534 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:57:33.675065 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:33.680193 systemd-logind[1445]: New session 2 of user core. Jul 6 23:57:33.691958 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 6 23:57:33.756052 sshd[1585]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:33.766053 systemd[1]: sshd@1-64.23.136.149:22-139.178.89.65:40534.service: Deactivated successfully. Jul 6 23:57:33.768546 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:57:33.770856 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:57:33.775174 systemd[1]: Started sshd@2-64.23.136.149:22-139.178.89.65:40548.service - OpenSSH per-connection server daemon (139.178.89.65:40548). Jul 6 23:57:33.777048 systemd-logind[1445]: Removed session 2. Jul 6 23:57:33.835532 sshd[1592]: Accepted publickey for core from 139.178.89.65 port 40548 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:57:33.837493 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:33.843105 systemd-logind[1445]: New session 3 of user core. Jul 6 23:57:33.851925 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:57:33.908966 sshd[1592]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:33.928449 systemd[1]: sshd@2-64.23.136.149:22-139.178.89.65:40548.service: Deactivated successfully. Jul 6 23:57:33.931957 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:57:33.934903 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:57:33.940049 systemd[1]: Started sshd@3-64.23.136.149:22-139.178.89.65:40564.service - OpenSSH per-connection server daemon (139.178.89.65:40564). Jul 6 23:57:33.941439 systemd-logind[1445]: Removed session 3. Jul 6 23:57:33.987139 sshd[1599]: Accepted publickey for core from 139.178.89.65 port 40564 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:57:33.988824 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:33.995479 systemd-logind[1445]: New session 4 of user core. Jul 6 23:57:34.001995 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:57:34.067862 sshd[1599]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:34.082702 systemd[1]: sshd@3-64.23.136.149:22-139.178.89.65:40564.service: Deactivated successfully. Jul 6 23:57:34.085048 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:57:34.087968 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:57:34.093046 systemd[1]: Started sshd@4-64.23.136.149:22-139.178.89.65:40580.service - OpenSSH per-connection server daemon (139.178.89.65:40580). Jul 6 23:57:34.094427 systemd-logind[1445]: Removed session 4. Jul 6 23:57:34.139308 sshd[1606]: Accepted publickey for core from 139.178.89.65 port 40580 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:57:34.141504 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:34.149333 systemd-logind[1445]: New session 5 of user core. Jul 6 23:57:34.154976 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:57:34.226199 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:57:34.226872 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:57:34.241564 sudo[1609]: pam_unix(sudo:session): session closed for user root Jul 6 23:57:34.246605 sshd[1606]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:34.257908 systemd[1]: sshd@4-64.23.136.149:22-139.178.89.65:40580.service: Deactivated successfully. 
Jul 6 23:57:34.260931 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:57:34.263542 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:57:34.268117 systemd[1]: Started sshd@5-64.23.136.149:22-139.178.89.65:40596.service - OpenSSH per-connection server daemon (139.178.89.65:40596). Jul 6 23:57:34.271459 systemd-logind[1445]: Removed session 5. Jul 6 23:57:34.332745 sshd[1614]: Accepted publickey for core from 139.178.89.65 port 40596 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:57:34.334756 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:34.341421 systemd-logind[1445]: New session 6 of user core. Jul 6 23:57:34.347963 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:57:34.411023 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:57:34.411456 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:57:34.416196 sudo[1618]: pam_unix(sudo:session): session closed for user root Jul 6 23:57:34.424131 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:57:34.425287 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:57:34.444203 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:57:34.458579 auditctl[1621]: No rules Jul 6 23:57:34.459130 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:57:34.459484 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:57:34.466249 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:57:34.518168 augenrules[1639]: No rules Jul 6 23:57:34.519421 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:57:34.521382 sudo[1617]: pam_unix(sudo:session): session closed for user root Jul 6 23:57:34.527015 sshd[1614]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:34.537778 systemd[1]: sshd@5-64.23.136.149:22-139.178.89.65:40596.service: Deactivated successfully. Jul 6 23:57:34.540495 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:57:34.542996 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:57:34.557535 systemd[1]: Started sshd@6-64.23.136.149:22-139.178.89.65:40600.service - OpenSSH per-connection server daemon (139.178.89.65:40600). Jul 6 23:57:34.559529 systemd-logind[1445]: Removed session 6. Jul 6 23:57:34.604792 sshd[1647]: Accepted publickey for core from 139.178.89.65 port 40600 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:57:34.607530 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:34.613363 systemd-logind[1445]: New session 7 of user core. Jul 6 23:57:34.621053 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 6 23:57:34.684296 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:57:34.684649 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:57:35.225488 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:57:35.226529 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:57:35.810709 dockerd[1667]: time="2025-07-06T23:57:35.808432341Z" level=info msg="Starting up" Jul 6 23:57:36.061370 dockerd[1667]: time="2025-07-06T23:57:36.060778689Z" level=info msg="Loading containers: start." Jul 6 23:57:36.210707 kernel: Initializing XFRM netlink socket Jul 6 23:57:36.331513 systemd-networkd[1368]: docker0: Link UP Jul 6 23:57:36.365152 dockerd[1667]: time="2025-07-06T23:57:36.365108397Z" level=info msg="Loading containers: done." Jul 6 23:57:36.392318 dockerd[1667]: time="2025-07-06T23:57:36.391534810Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:57:36.392318 dockerd[1667]: time="2025-07-06T23:57:36.391734286Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:57:36.392318 dockerd[1667]: time="2025-07-06T23:57:36.391925829Z" level=info msg="Daemon has completed initialization" Jul 6 23:57:36.392840 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1911935027-merged.mount: Deactivated successfully. Jul 6 23:57:36.436597 dockerd[1667]: time="2025-07-06T23:57:36.436499530Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:57:36.436949 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:57:37.365069 containerd[1451]: time="2025-07-06T23:57:37.364994056Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:57:37.952204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235842161.mount: Deactivated successfully. 
Jul 6 23:57:39.409925 containerd[1451]: time="2025-07-06T23:57:39.409821618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:39.413210 containerd[1451]: time="2025-07-06T23:57:39.412825371Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 6 23:57:39.417485 containerd[1451]: time="2025-07-06T23:57:39.416191909Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:39.420267 containerd[1451]: time="2025-07-06T23:57:39.420210609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:39.421784 containerd[1451]: time="2025-07-06T23:57:39.421727257Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.056679301s" Jul 6 23:57:39.422004 containerd[1451]: time="2025-07-06T23:57:39.421978489Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 6 23:57:39.423453 containerd[1451]: time="2025-07-06T23:57:39.423412950Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:57:40.866393 containerd[1451]: time="2025-07-06T23:57:40.866298606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:40.868159 containerd[1451]: time="2025-07-06T23:57:40.868039205Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:40.868159 containerd[1451]: time="2025-07-06T23:57:40.868111484Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 6 23:57:40.871779 containerd[1451]: time="2025-07-06T23:57:40.871700234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:40.873076 containerd[1451]: time="2025-07-06T23:57:40.872882139Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.449427702s" Jul 6 23:57:40.873076 containerd[1451]: time="2025-07-06T23:57:40.872932622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 6 23:57:40.874327 containerd[1451]: 
time="2025-07-06T23:57:40.874023603Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:57:42.092194 containerd[1451]: time="2025-07-06T23:57:42.092104832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:42.093795 containerd[1451]: time="2025-07-06T23:57:42.093728023Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 6 23:57:42.094527 containerd[1451]: time="2025-07-06T23:57:42.094441143Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:42.099139 containerd[1451]: time="2025-07-06T23:57:42.099047005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:42.100687 containerd[1451]: time="2025-07-06T23:57:42.100526317Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.226445881s" Jul 6 23:57:42.100687 containerd[1451]: time="2025-07-06T23:57:42.100576303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 6 23:57:42.101816 containerd[1451]: time="2025-07-06T23:57:42.101357438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:57:43.200623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230345855.mount: Deactivated successfully. Jul 6 23:57:43.505426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:57:43.513027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:43.758035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:43.770700 (kubelet)[1890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:57:43.876154 kubelet[1890]: E0706 23:57:43.876096 1890 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:57:43.884080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:57:43.884352 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 6 23:57:44.046524 containerd[1451]: time="2025-07-06T23:57:44.045419937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:44.047093 containerd[1451]: time="2025-07-06T23:57:44.047040417Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 6 23:57:44.047223 containerd[1451]: time="2025-07-06T23:57:44.047188557Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:44.049679 containerd[1451]: time="2025-07-06T23:57:44.049599949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:44.050972 containerd[1451]: time="2025-07-06T23:57:44.050918746Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.949516209s" Jul 6 23:57:44.051158 containerd[1451]: time="2025-07-06T23:57:44.051133326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 6 23:57:44.051874 containerd[1451]: time="2025-07-06T23:57:44.051846722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:57:44.053698 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jul 6 23:57:44.579973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470636056.mount: Deactivated successfully. 
Jul 6 23:57:45.430896 containerd[1451]: time="2025-07-06T23:57:45.430819219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:45.432363 containerd[1451]: time="2025-07-06T23:57:45.432287135Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 6 23:57:45.433122 containerd[1451]: time="2025-07-06T23:57:45.432514987Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:45.436136 containerd[1451]: time="2025-07-06T23:57:45.436027059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:45.437563 containerd[1451]: time="2025-07-06T23:57:45.437398307Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.38541838s" Jul 6 23:57:45.437563 containerd[1451]: time="2025-07-06T23:57:45.437444074Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:57:45.438634 containerd[1451]: time="2025-07-06T23:57:45.438425650Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:57:45.959985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546195462.mount: Deactivated successfully. 
Jul 6 23:57:45.965515 containerd[1451]: time="2025-07-06T23:57:45.964300680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:45.965515 containerd[1451]: time="2025-07-06T23:57:45.965205981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:57:45.965515 containerd[1451]: time="2025-07-06T23:57:45.965449867Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:45.968907 containerd[1451]: time="2025-07-06T23:57:45.968836001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:45.969994 containerd[1451]: time="2025-07-06T23:57:45.969956534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 531.495602ms" Jul 6 23:57:45.970183 containerd[1451]: time="2025-07-06T23:57:45.970155255Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:57:45.971077 containerd[1451]: time="2025-07-06T23:57:45.971039655Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:57:46.514243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146423867.mount: Deactivated successfully. Jul 6 23:57:47.165040 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jul 6 23:57:48.695486 containerd[1451]: time="2025-07-06T23:57:48.693442239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:48.695486 containerd[1451]: time="2025-07-06T23:57:48.694564796Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 6 23:57:48.697889 containerd[1451]: time="2025-07-06T23:57:48.697783291Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:48.702616 containerd[1451]: time="2025-07-06T23:57:48.702512770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:48.704530 containerd[1451]: time="2025-07-06T23:57:48.704265270Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.73318158s" Jul 6 23:57:48.704530 containerd[1451]: time="2025-07-06T23:57:48.704325992Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 6 23:57:51.481837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:51.493016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:51.531979 systemd[1]: Reloading requested from client PID 2036 ('systemctl') (unit session-7.scope)... Jul 6 23:57:51.532165 systemd[1]: Reloading... Jul 6 23:57:51.688702 zram_generator::config[2074]: No configuration found. Jul 6 23:57:51.832457 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:57:51.918101 systemd[1]: Reloading finished in 385 ms. Jul 6 23:57:51.988275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:51.993494 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:51.999521 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:57:51.999919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:52.007210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:52.192008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:52.201201 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:57:52.276251 kubelet[2131]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:57:52.276251 kubelet[2131]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 6 23:57:52.276251 kubelet[2131]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:57:52.277091 kubelet[2131]: I0706 23:57:52.276302 2131 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:57:52.698630 kubelet[2131]: I0706 23:57:52.698549 2131 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:57:52.698630 kubelet[2131]: I0706 23:57:52.698623 2131 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:57:52.699526 kubelet[2131]: I0706 23:57:52.699400 2131 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:57:52.729603 kubelet[2131]: I0706 23:57:52.729567 2131 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:57:52.730894 kubelet[2131]: E0706 23:57:52.730851 2131 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.136.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.136.149:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:52.740069 kubelet[2131]: E0706 23:57:52.739959 2131 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:57:52.740069 kubelet[2131]: I0706 23:57:52.740064 2131 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:57:52.744340 kubelet[2131]: I0706 23:57:52.744298 2131 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:57:52.748030 kubelet[2131]: I0706 23:57:52.747931 2131 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:57:52.748343 kubelet[2131]: I0706 23:57:52.748027 2131 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-b-aec8669192","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:57:52.750288 kubelet[2131]: I0706 23:57:52.750170 2131 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:57:52.750288 kubelet[2131]: I0706 23:57:52.750235 2131 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:57:52.751880 kubelet[2131]: I0706 23:57:52.751846 2131 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:57:52.757327 kubelet[2131]: I0706 23:57:52.756814 2131 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:57:52.757327 kubelet[2131]: I0706 23:57:52.756910 2131 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:57:52.757327 kubelet[2131]: I0706 23:57:52.756948 2131 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:57:52.757327 kubelet[2131]: I0706 23:57:52.756965 2131 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:57:52.765411 kubelet[2131]: W0706 23:57:52.765206 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.136.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-b-aec8669192&limit=500&resourceVersion=0": dial tcp 64.23.136.149:6443: connect: connection refused Jul 6 23:57:52.765411 kubelet[2131]: E0706 23:57:52.765275 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.136.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-b-aec8669192&limit=500&resourceVersion=0\": dial tcp 64.23.136.149:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:52.766484 kubelet[2131]: 
W0706 23:57:52.766154 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.136.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.136.149:6443: connect: connection refused Jul 6 23:57:52.766484 kubelet[2131]: E0706 23:57:52.766210 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.136.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.136.149:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:52.767685 kubelet[2131]: I0706 23:57:52.767593 2131 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:57:52.772854 kubelet[2131]: I0706 23:57:52.772673 2131 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:57:52.773536 kubelet[2131]: W0706 23:57:52.773498 2131 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:57:52.776711 kubelet[2131]: I0706 23:57:52.776638 2131 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:57:52.776711 kubelet[2131]: I0706 23:57:52.776716 2131 server.go:1287] "Started kubelet" Jul 6 23:57:52.780527 kubelet[2131]: I0706 23:57:52.780476 2131 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:57:52.782160 kubelet[2131]: I0706 23:57:52.782130 2131 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:57:52.787045 kubelet[2131]: I0706 23:57:52.786391 2131 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:57:52.787045 kubelet[2131]: I0706 23:57:52.786911 2131 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:57:52.789124 kubelet[2131]: I0706 23:57:52.787522 2131 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:57:52.791688 kubelet[2131]: E0706 23:57:52.788922 2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.136.149:6443/api/v1/namespaces/default/events\": dial tcp 64.23.136.149:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-b-aec8669192.184fcee745f230bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-b-aec8669192,UID:ci-4081.3.4-b-aec8669192,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-b-aec8669192,},FirstTimestamp:2025-07-06 23:57:52.776679613 +0000 UTC m=+0.570120949,LastTimestamp:2025-07-06 23:57:52.776679613 +0000 UTC m=+0.570120949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-b-aec8669192,}" Jul 6 23:57:52.791688 kubelet[2131]: I0706 23:57:52.791176 2131 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:57:52.794267 kubelet[2131]: E0706 23:57:52.794198 2131 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.4-b-aec8669192\" not found" Jul 6 
23:57:52.794452 kubelet[2131]: I0706 23:57:52.794439 2131 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:57:52.794834 kubelet[2131]: I0706 23:57:52.794812 2131 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:57:52.795037 kubelet[2131]: I0706 23:57:52.795022 2131 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:57:52.795638 kubelet[2131]: W0706 23:57:52.795587 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.136.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.136.149:6443: connect: connection refused Jul 6 23:57:52.795804 kubelet[2131]: E0706 23:57:52.795779 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.136.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.136.149:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:52.796194 kubelet[2131]: E0706 23:57:52.796162 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.136.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-b-aec8669192?timeout=10s\": dial tcp 64.23.136.149:6443: connect: connection refused" interval="200ms" Jul 6 23:57:52.800052 kubelet[2131]: I0706 23:57:52.799326 2131 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:57:52.800052 kubelet[2131]: I0706 23:57:52.799471 2131 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:57:52.805069 kubelet[2131]: I0706 23:57:52.805039 2131 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:57:52.816185 kubelet[2131]: I0706 23:57:52.816120 2131 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:57:52.820433 kubelet[2131]: I0706 23:57:52.820391 2131 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:57:52.820614 kubelet[2131]: I0706 23:57:52.820601 2131 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:57:52.820818 kubelet[2131]: I0706 23:57:52.820801 2131 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
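The repeated reflector.go:569 "connection refused" warnings above are the expected chicken-and-egg of control-plane bootstrap: the kubelet's informers list-watch the API server at 64.23.136.149:6443 before the static kube-apiserver pod it is about to start is actually serving, so every List fails and is retried with backoff. A minimal sketch of the same client-go list-watch pattern (the kubeconfig path here is hypothetical; the kubelet itself authenticates with its rotated client certificate):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The SharedInformerFactory drives the same reflector list-watch seen in
	// the log; while the apiserver is unreachable, each List fails with
	// "connection refused" and the reflector retries with backoff.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	nodeLister := factory.Core().V1().Nodes().Lister() // registers the Node informer

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop) // blocks until a List finally succeeds

	nodes, err := nodeLister.List(labels.Everything())
	if err != nil {
		panic(err)
	}
	for _, n := range nodes {
		fmt.Println("observed node:", n.Name)
	}
}
```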
Jul 6 23:57:52.820890 kubelet[2131]: I0706 23:57:52.820881 2131 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:57:52.821075 kubelet[2131]: E0706 23:57:52.821048 2131 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:57:52.831837 kubelet[2131]: W0706 23:57:52.831765 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.136.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.136.149:6443: connect: connection refused Jul 6 23:57:52.831998 kubelet[2131]: E0706 23:57:52.831850 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.136.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.136.149:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:52.834650 kubelet[2131]: I0706 23:57:52.834615 2131 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:57:52.834650 kubelet[2131]: I0706 23:57:52.834634 2131 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:57:52.834919 kubelet[2131]: I0706 23:57:52.834747 2131 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:57:52.837998 kubelet[2131]: I0706 23:57:52.837947 2131 policy_none.go:49] "None policy: Start" Jul 6 23:57:52.837998 kubelet[2131]: I0706 23:57:52.837987 2131 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:57:52.837998 kubelet[2131]: I0706 23:57:52.838002 2131 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:57:52.849628 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:57:52.862429 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:57:52.865803 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:57:52.883742 kubelet[2131]: I0706 23:57:52.882466 2131 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:57:52.883742 kubelet[2131]: I0706 23:57:52.882759 2131 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:57:52.883742 kubelet[2131]: I0706 23:57:52.882785 2131 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:57:52.883742 kubelet[2131]: I0706 23:57:52.883610 2131 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:57:52.886433 kubelet[2131]: E0706 23:57:52.886388 2131 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:57:52.886580 kubelet[2131]: E0706 23:57:52.886467 2131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-b-aec8669192\" not found" Jul 6 23:57:52.932709 systemd[1]: Created slice kubepods-burstable-pode13c385cffa16c2d4fb4a66a8950e3dd.slice - libcontainer container kubepods-burstable-pode13c385cffa16c2d4fb4a66a8950e3dd.slice. 
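The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units systemd just created are the kubelet's per-QoS cgroup hierarchy (CgroupsPerQOS true, CgroupDriver systemd in the nodeConfig logged earlier). Which slice a pod lands in follows from its QoS class. A simplified sketch of that classification rule, assuming only regular containers and only CPU/memory (the real logic lives in the kubelet's qos helpers and also covers init containers):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// qosClass is a simplified version of the kubelet's QoS classification:
// no requests/limits anywhere -> BestEffort; requests == limits for CPU and
// memory in every container -> Guaranteed; anything else -> Burstable.
func qosClass(pod *v1.Pod) v1.PodQOSClass {
	guaranteed := true
	hasAny := false
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			hasAny = true
		}
		for _, res := range []v1.ResourceName{v1.ResourceCPU, v1.ResourceMemory} {
			req, reqOK := c.Resources.Requests[res]
			lim, limOK := c.Resources.Limits[res]
			if !reqOK || !limOK || req.Cmp(lim) != 0 {
				guaranteed = false
			}
		}
	}
	switch {
	case !hasAny:
		return v1.PodQOSBestEffort // -> kubepods-besteffort.slice
	case guaranteed:
		return v1.PodQOSGuaranteed // -> directly under kubepods.slice
	default:
		return v1.PodQOSBurstable // -> kubepods-burstable.slice
	}
}

func main() {
	fmt.Println(qosClass(&v1.Pod{})) // BestEffort
}
```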
Jul 6 23:57:52.951735 kubelet[2131]: E0706 23:57:52.950489 2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.136.149:6443/api/v1/namespaces/default/events\": dial tcp 64.23.136.149:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-b-aec8669192.184fcee745f230bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-b-aec8669192,UID:ci-4081.3.4-b-aec8669192,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-b-aec8669192,},FirstTimestamp:2025-07-06 23:57:52.776679613 +0000 UTC m=+0.570120949,LastTimestamp:2025-07-06 23:57:52.776679613 +0000 UTC m=+0.570120949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-b-aec8669192,}" Jul 6 23:57:52.954972 kubelet[2131]: E0706 23:57:52.954858 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-b-aec8669192\" not found" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.959275 systemd[1]: Created slice kubepods-burstable-pod682b143e96cc2458e90dd982539cd976.slice - libcontainer container kubepods-burstable-pod682b143e96cc2458e90dd982539cd976.slice. Jul 6 23:57:52.962308 kubelet[2131]: E0706 23:57:52.962204 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-b-aec8669192\" not found" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.965732 systemd[1]: Created slice kubepods-burstable-pod3a84577dab14328e2f2fc6a900315365.slice - libcontainer container kubepods-burstable-pod3a84577dab14328e2f2fc6a900315365.slice. 
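The per-pod burstable slices being created here, together with the "No need to create a mirror pod, since failed to get node info" errors, are the static-pod path: the kubelet runs the control-plane manifests from /etc/kubernetes/manifests (the "Adding static pod path" line above) without any API server, then tries to publish read-only mirror pods once the Node object exists. A rough sketch of loading such a manifest, assuming sigs.k8s.io/yaml for decoding (the kubelet's real source uses the config codec plus file watching):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Static pod manifests live where the log's "Adding static pod path" points.
	paths, err := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
	if err != nil {
		panic(err)
	}
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			panic(err)
		}
		var pod v1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			panic(err)
		}
		// The kubelet derives a static pod's UID from a hash of its spec,
		// which is why the slice names above embed plain hex strings rather
		// than the dashed UIDs the apiserver would assign.
		fmt.Printf("static pod %s/%s from %s\n", pod.Namespace, pod.Name, p)
	}
}
```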
Jul 6 23:57:52.967931 kubelet[2131]: E0706 23:57:52.967899 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-b-aec8669192\" not found" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.984360 kubelet[2131]: I0706 23:57:52.984321 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.985172 kubelet[2131]: E0706 23:57:52.985136 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.136.149:6443/api/v1/nodes\": dial tcp 64.23.136.149:6443: connect: connection refused" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.996999 kubelet[2131]: I0706 23:57:52.996574 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e13c385cffa16c2d4fb4a66a8950e3dd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-b-aec8669192\" (UID: \"e13c385cffa16c2d4fb4a66a8950e3dd\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.996999 kubelet[2131]: I0706 23:57:52.996635 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e13c385cffa16c2d4fb4a66a8950e3dd-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-b-aec8669192\" (UID: \"e13c385cffa16c2d4fb4a66a8950e3dd\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.996999 kubelet[2131]: I0706 23:57:52.996697 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e13c385cffa16c2d4fb4a66a8950e3dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-b-aec8669192\" (UID: \"e13c385cffa16c2d4fb4a66a8950e3dd\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.996999 kubelet[2131]: I0706 23:57:52.996723 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/682b143e96cc2458e90dd982539cd976-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-b-aec8669192\" (UID: \"682b143e96cc2458e90dd982539cd976\") " pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.996999 kubelet[2131]: I0706 23:57:52.996746 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/682b143e96cc2458e90dd982539cd976-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-b-aec8669192\" (UID: \"682b143e96cc2458e90dd982539cd976\") " pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.997351 kubelet[2131]: I0706 23:57:52.996768 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e13c385cffa16c2d4fb4a66a8950e3dd-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-b-aec8669192\" (UID: \"e13c385cffa16c2d4fb4a66a8950e3dd\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.997351 kubelet[2131]: I0706 23:57:52.996812 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/682b143e96cc2458e90dd982539cd976-ca-certs\") pod 
\"kube-apiserver-ci-4081.3.4-b-aec8669192\" (UID: \"682b143e96cc2458e90dd982539cd976\") " pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.997351 kubelet[2131]: I0706 23:57:52.996840 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e13c385cffa16c2d4fb4a66a8950e3dd-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-b-aec8669192\" (UID: \"e13c385cffa16c2d4fb4a66a8950e3dd\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.997351 kubelet[2131]: I0706 23:57:52.996866 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a84577dab14328e2f2fc6a900315365-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-b-aec8669192\" (UID: \"3a84577dab14328e2f2fc6a900315365\") " pod="kube-system/kube-scheduler-ci-4081.3.4-b-aec8669192" Jul 6 23:57:52.997351 kubelet[2131]: E0706 23:57:52.996914 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.136.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-b-aec8669192?timeout=10s\": dial tcp 64.23.136.149:6443: connect: connection refused" interval="400ms" Jul 6 23:57:53.186991 kubelet[2131]: I0706 23:57:53.186946 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:53.187851 kubelet[2131]: E0706 23:57:53.187819 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.136.149:6443/api/v1/nodes\": dial tcp 64.23.136.149:6443: connect: connection refused" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:53.257636 kubelet[2131]: E0706 23:57:53.257542 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:53.258487 containerd[1451]: time="2025-07-06T23:57:53.258435035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-b-aec8669192,Uid:e13c385cffa16c2d4fb4a66a8950e3dd,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:53.260285 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Jul 6 23:57:53.264071 kubelet[2131]: E0706 23:57:53.263722 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:53.269023 kubelet[2131]: E0706 23:57:53.268963 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:53.273028 containerd[1451]: time="2025-07-06T23:57:53.272608319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-b-aec8669192,Uid:682b143e96cc2458e90dd982539cd976,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:53.273028 containerd[1451]: time="2025-07-06T23:57:53.272712961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-b-aec8669192,Uid:3a84577dab14328e2f2fc6a900315365,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:53.398183 kubelet[2131]: E0706 23:57:53.398100 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.136.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-b-aec8669192?timeout=10s\": dial tcp 64.23.136.149:6443: connect: connection refused" interval="800ms" Jul 6 23:57:53.589691 kubelet[2131]: I0706 23:57:53.589528 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:53.590341 kubelet[2131]: E0706 23:57:53.589973 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.136.149:6443/api/v1/nodes\": dial tcp 64.23.136.149:6443: connect: connection refused" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:53.667480 kubelet[2131]: W0706 23:57:53.667346 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.136.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.136.149:6443: connect: connection refused Jul 6 23:57:53.667480 kubelet[2131]: E0706 23:57:53.667427 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.136.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.136.149:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:53.797678 kubelet[2131]: W0706 23:57:53.797513 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.136.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.136.149:6443: connect: connection refused Jul 6 23:57:53.797678 kubelet[2131]: E0706 23:57:53.797580 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.136.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.136.149:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:53.825456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421010171.mount: Deactivated successfully. 
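Note how the lease controller's reported retry interval doubles across these entries: 200ms, then 400ms, then 800ms (and 1.6s further down), a capped exponential backoff while the apiserver is unreachable. A small illustrative sketch of that loop shape; tryOnce and the 7s cap are hypothetical stand-ins, not the kubelet's actual constants:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// ensureLease mimics the retry pattern visible in the log: on failure the
// reported interval doubles (200ms -> 400ms -> 800ms -> 1.6s) up to a cap.
func ensureLease(ctx context.Context, tryOnce func() error) {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // illustrative cap
	for {
		if err := tryOnce(); err == nil {
			return
		}
		fmt.Printf("failed to ensure lease exists, will retry in %v\n", interval)
		select {
		case <-ctx.Done():
			return
		case <-time.After(interval):
		}
		if interval < maxInterval {
			interval *= 2
		}
	}
}

func main() {
	attempts := 0
	ensureLease(context.Background(), func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	})
	fmt.Println("lease ensured after", attempts, "attempts")
}
```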
Jul 6 23:57:53.830561 containerd[1451]: time="2025-07-06T23:57:53.830462552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:53.832436 containerd[1451]: time="2025-07-06T23:57:53.832364060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:57:53.835701 containerd[1451]: time="2025-07-06T23:57:53.835616295Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:53.835918 kubelet[2131]: W0706 23:57:53.835862 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.136.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-b-aec8669192&limit=500&resourceVersion=0": dial tcp 64.23.136.149:6443: connect: connection refused Jul 6 23:57:53.835985 kubelet[2131]: E0706 23:57:53.835928 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.136.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-b-aec8669192&limit=500&resourceVersion=0\": dial tcp 64.23.136.149:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:53.837195 containerd[1451]: time="2025-07-06T23:57:53.836984560Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:53.837195 containerd[1451]: time="2025-07-06T23:57:53.837073220Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:57:53.838495 containerd[1451]: time="2025-07-06T23:57:53.837840127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:57:53.838495 containerd[1451]: time="2025-07-06T23:57:53.837938010Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:53.842773 containerd[1451]: time="2025-07-06T23:57:53.842598628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:53.844834 containerd[1451]: time="2025-07-06T23:57:53.844490469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 571.672657ms" Jul 6 23:57:53.845909 containerd[1451]: time="2025-07-06T23:57:53.845865543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.318209ms" Jul 6 
23:57:53.848683 containerd[1451]: time="2025-07-06T23:57:53.848630459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 575.881997ms" Jul 6 23:57:54.004774 containerd[1451]: time="2025-07-06T23:57:54.004330959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:54.004774 containerd[1451]: time="2025-07-06T23:57:54.004405414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:54.004774 containerd[1451]: time="2025-07-06T23:57:54.004422462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:54.004774 containerd[1451]: time="2025-07-06T23:57:54.004526169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:54.026281 containerd[1451]: time="2025-07-06T23:57:54.024982722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:54.026281 containerd[1451]: time="2025-07-06T23:57:54.025957928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:54.026281 containerd[1451]: time="2025-07-06T23:57:54.025976072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:54.026903 containerd[1451]: time="2025-07-06T23:57:54.026522012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:54.032271 containerd[1451]: time="2025-07-06T23:57:54.030935368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:54.032271 containerd[1451]: time="2025-07-06T23:57:54.031052857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:54.032271 containerd[1451]: time="2025-07-06T23:57:54.031074121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:54.032271 containerd[1451]: time="2025-07-06T23:57:54.031238364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:54.048965 systemd[1]: Started cri-containerd-bf3ec60707b2c305ce86dda50c89cd2b06f6c95dbe74f76ea6b67b7a217aa1d5.scope - libcontainer container bf3ec60707b2c305ce86dda50c89cd2b06f6c95dbe74f76ea6b67b7a217aa1d5. Jul 6 23:57:54.073921 systemd[1]: Started cri-containerd-6ca17377572a4da4ab91945c2d0234a5a20a82127291282e684cab51065b3ea5.scope - libcontainer container 6ca17377572a4da4ab91945c2d0234a5a20a82127291282e684cab51065b3ea5. Jul 6 23:57:54.096298 systemd[1]: Started cri-containerd-23dd95d4a350844a33fa52082635c9cfdf8be937f4113c8a118ba01aa4dc8e68.scope - libcontainer container 23dd95d4a350844a33fa52082635c9cfdf8be937f4113c8a118ba01aa4dc8e68. 
Jul 6 23:57:54.117456 kubelet[2131]: W0706 23:57:54.117370 2131 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.136.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.136.149:6443: connect: connection refused Jul 6 23:57:54.117808 kubelet[2131]: E0706 23:57:54.117478 2131 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.136.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.136.149:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:54.176598 containerd[1451]: time="2025-07-06T23:57:54.176551667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-b-aec8669192,Uid:682b143e96cc2458e90dd982539cd976,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca17377572a4da4ab91945c2d0234a5a20a82127291282e684cab51065b3ea5\"" Jul 6 23:57:54.180597 kubelet[2131]: E0706 23:57:54.180284 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:54.191597 containerd[1451]: time="2025-07-06T23:57:54.191405013Z" level=info msg="CreateContainer within sandbox \"6ca17377572a4da4ab91945c2d0234a5a20a82127291282e684cab51065b3ea5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:57:54.197156 containerd[1451]: time="2025-07-06T23:57:54.197101464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-b-aec8669192,Uid:3a84577dab14328e2f2fc6a900315365,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf3ec60707b2c305ce86dda50c89cd2b06f6c95dbe74f76ea6b67b7a217aa1d5\"" Jul 6 23:57:54.198746 kubelet[2131]: E0706 23:57:54.198684 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.136.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-b-aec8669192?timeout=10s\": dial tcp 64.23.136.149:6443: connect: connection refused" interval="1.6s" Jul 6 23:57:54.200287 kubelet[2131]: E0706 23:57:54.199975 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:54.204681 containerd[1451]: time="2025-07-06T23:57:54.204539943Z" level=info msg="CreateContainer within sandbox \"bf3ec60707b2c305ce86dda50c89cd2b06f6c95dbe74f76ea6b67b7a217aa1d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:57:54.210307 containerd[1451]: time="2025-07-06T23:57:54.210172620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-b-aec8669192,Uid:e13c385cffa16c2d4fb4a66a8950e3dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"23dd95d4a350844a33fa52082635c9cfdf8be937f4113c8a118ba01aa4dc8e68\"" Jul 6 23:57:54.213106 kubelet[2131]: E0706 23:57:54.213059 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:54.216437 containerd[1451]: time="2025-07-06T23:57:54.216383098Z" level=info msg="CreateContainer within sandbox \"23dd95d4a350844a33fa52082635c9cfdf8be937f4113c8a118ba01aa4dc8e68\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:57:54.220099 containerd[1451]: time="2025-07-06T23:57:54.220048789Z" level=info msg="CreateContainer within sandbox \"6ca17377572a4da4ab91945c2d0234a5a20a82127291282e684cab51065b3ea5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"52f389f27615c05be1c8ecc06e3637502fe6c921505a9f18a8d087e61f64b8f0\"" Jul 6 23:57:54.221584 containerd[1451]: time="2025-07-06T23:57:54.221551016Z" level=info msg="StartContainer for \"52f389f27615c05be1c8ecc06e3637502fe6c921505a9f18a8d087e61f64b8f0\"" Jul 6 23:57:54.233235 containerd[1451]: time="2025-07-06T23:57:54.233048692Z" level=info msg="CreateContainer within sandbox \"bf3ec60707b2c305ce86dda50c89cd2b06f6c95dbe74f76ea6b67b7a217aa1d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"75a02e02b379d115843e6887b173846b7a68c6e3f8baae9b261660e9d6eb4f60\"" Jul 6 23:57:54.235132 containerd[1451]: time="2025-07-06T23:57:54.233712894Z" level=info msg="StartContainer for \"75a02e02b379d115843e6887b173846b7a68c6e3f8baae9b261660e9d6eb4f60\"" Jul 6 23:57:54.238947 containerd[1451]: time="2025-07-06T23:57:54.238888372Z" level=info msg="CreateContainer within sandbox \"23dd95d4a350844a33fa52082635c9cfdf8be937f4113c8a118ba01aa4dc8e68\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"649461a65ec9cd2fc75921eba1d2abaa10116971d629d30e443fc39cf5ecbce1\"" Jul 6 23:57:54.240426 containerd[1451]: time="2025-07-06T23:57:54.240365136Z" level=info msg="StartContainer for \"649461a65ec9cd2fc75921eba1d2abaa10116971d629d30e443fc39cf5ecbce1\"" Jul 6 23:57:54.274976 systemd[1]: Started cri-containerd-52f389f27615c05be1c8ecc06e3637502fe6c921505a9f18a8d087e61f64b8f0.scope - libcontainer container 52f389f27615c05be1c8ecc06e3637502fe6c921505a9f18a8d087e61f64b8f0. Jul 6 23:57:54.304983 systemd[1]: Started cri-containerd-75a02e02b379d115843e6887b173846b7a68c6e3f8baae9b261660e9d6eb4f60.scope - libcontainer container 75a02e02b379d115843e6887b173846b7a68c6e3f8baae9b261660e9d6eb4f60. Jul 6 23:57:54.317910 systemd[1]: Started cri-containerd-649461a65ec9cd2fc75921eba1d2abaa10116971d629d30e443fc39cf5ecbce1.scope - libcontainer container 649461a65ec9cd2fc75921eba1d2abaa10116971d629d30e443fc39cf5ecbce1. 
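The RunPodSandbox, CreateContainer and StartContainer lines above are the kubelet driving containerd over the CRI gRPC API: each sandbox is an instance of the pause:3.8 image pulled earlier, and each returned ID becomes one of the cri-containerd-<id>.scope units systemd starts. A minimal sketch of the same three calls against the default containerd socket, assuming k8s.io/cri-api; the configs are trimmed to the bare minimum and the image is reused from the log purely for illustration:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name: "demo-pod", Namespace: "kube-system", Uid: "demo-uid",
		},
	}

	// 1. RunPodSandbox starts the pause container holding the pod's namespaces.
	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer registers a workload container inside that sandbox.
	c, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "demo"},
			Image:    &runtime.ImageSpec{Image: "registry.k8s.io/pause:3.8"},
		},
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer execs it; this is the "returns successfully" line.
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started", c.ContainerId, "in sandbox", sb.PodSandboxId)
}
```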
Jul 6 23:57:54.369222 containerd[1451]: time="2025-07-06T23:57:54.367777626Z" level=info msg="StartContainer for \"52f389f27615c05be1c8ecc06e3637502fe6c921505a9f18a8d087e61f64b8f0\" returns successfully" Jul 6 23:57:54.394925 kubelet[2131]: I0706 23:57:54.394881 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:54.396720 kubelet[2131]: E0706 23:57:54.396632 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.136.149:6443/api/v1/nodes\": dial tcp 64.23.136.149:6443: connect: connection refused" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:54.409314 containerd[1451]: time="2025-07-06T23:57:54.409248780Z" level=info msg="StartContainer for \"75a02e02b379d115843e6887b173846b7a68c6e3f8baae9b261660e9d6eb4f60\" returns successfully" Jul 6 23:57:54.420931 containerd[1451]: time="2025-07-06T23:57:54.420878352Z" level=info msg="StartContainer for \"649461a65ec9cd2fc75921eba1d2abaa10116971d629d30e443fc39cf5ecbce1\" returns successfully" Jul 6 23:57:54.847635 kubelet[2131]: E0706 23:57:54.846722 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-b-aec8669192\" not found" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:54.847635 kubelet[2131]: E0706 23:57:54.846956 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:54.848550 kubelet[2131]: E0706 23:57:54.848528 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-b-aec8669192\" not found" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:54.849680 kubelet[2131]: E0706 23:57:54.848765 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:54.852674 kubelet[2131]: E0706 23:57:54.852615 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-b-aec8669192\" not found" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:54.853065 kubelet[2131]: E0706 23:57:54.852968 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:55.857281 kubelet[2131]: E0706 23:57:55.857242 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-b-aec8669192\" not found" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:55.861038 kubelet[2131]: E0706 23:57:55.858882 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:55.861038 kubelet[2131]: E0706 23:57:55.860797 2131 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.4-b-aec8669192\" not found" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:55.861038 kubelet[2131]: E0706 23:57:55.860969 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:55.998680 kubelet[2131]: I0706 
23:57:55.998516 2131 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:56.440889 kubelet[2131]: E0706 23:57:56.440793 2131 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-b-aec8669192\" not found" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:56.601927 kubelet[2131]: I0706 23:57:56.601353 2131 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:56.697856 kubelet[2131]: I0706 23:57:56.696795 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:57:56.705736 kubelet[2131]: E0706 23:57:56.705497 2131 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-b-aec8669192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:57:56.705736 kubelet[2131]: I0706 23:57:56.705561 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:57:56.709157 kubelet[2131]: E0706 23:57:56.709105 2131 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.4-b-aec8669192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:57:56.709157 kubelet[2131]: I0706 23:57:56.709150 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-b-aec8669192" Jul 6 23:57:56.712206 kubelet[2131]: E0706 23:57:56.712150 2131 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.4-b-aec8669192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.4-b-aec8669192" Jul 6 23:57:56.768052 kubelet[2131]: I0706 23:57:56.768000 2131 apiserver.go:52] "Watching apiserver" Jul 6 23:57:56.795088 kubelet[2131]: I0706 23:57:56.795039 2131 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:57:57.956020 kubelet[2131]: I0706 23:57:57.955940 2131 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:57:57.966694 kubelet[2131]: W0706 23:57:57.966414 2131 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:57:57.967973 kubelet[2131]: E0706 23:57:57.967930 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:58.672507 systemd[1]: Reloading requested from client PID 2409 ('systemctl') (unit session-7.scope)... Jul 6 23:57:58.673030 systemd[1]: Reloading... Jul 6 23:57:58.802724 zram_generator::config[2448]: No configuration found. 
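The "no PriorityClass with name system-node-critical was found" failures are another bootstrap-ordering artifact: system-node-critical and system-cluster-critical are built-in classes the API server installs shortly after it comes up, so the mirror-pod creation succeeds on a later retry (as it does once the kubelet restarts below). A sketch of checking for, and if absent creating, that class with client-go; the value matches the upstream built-in, but treat this as illustration only since the apiserver normally bootstraps it itself, and the kubeconfig path is hypothetical:

```go
package main

import (
	"context"
	"fmt"

	schedv1 "k8s.io/api/scheduling/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pc, err := client.SchedulingV1().PriorityClasses().Get(ctx, "system-node-critical", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// Normally the apiserver creates this on startup; doing it by hand is
		// only to show what the kubelet's mirror pods are waiting on.
		pc, err = client.SchedulingV1().PriorityClasses().Create(ctx, &schedv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
			Value:      2000001000, // upstream built-in value
		}, metav1.CreateOptions{})
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("priority class", pc.Name, "value", pc.Value)
}
```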
Jul 6 23:57:58.863637 kubelet[2131]: E0706 23:57:58.863585 2131 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:57:59.001244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:57:59.143087 systemd[1]: Reloading finished in 469 ms. Jul 6 23:57:59.203888 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:59.216693 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:57:59.217359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:59.217595 systemd[1]: kubelet.service: Consumed 1.055s CPU time, 129.2M memory peak, 0B memory swap peak. Jul 6 23:57:59.227184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:59.399995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:59.422470 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:57:59.521147 kubelet[2499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:57:59.521147 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:57:59.521147 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:57:59.521967 kubelet[2499]: I0706 23:57:59.521273 2499 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:57:59.535544 kubelet[2499]: I0706 23:57:59.535493 2499 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:57:59.535544 kubelet[2499]: I0706 23:57:59.535534 2499 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:57:59.535982 kubelet[2499]: I0706 23:57:59.535958 2499 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:57:59.541990 kubelet[2499]: I0706 23:57:59.541714 2499 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:57:59.545176 kubelet[2499]: I0706 23:57:59.545108 2499 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:57:59.551275 kubelet[2499]: E0706 23:57:59.551214 2499 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:57:59.551275 kubelet[2499]: I0706 23:57:59.551264 2499 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
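The "RuntimeConfig from runtime service failed ... Unimplemented" error on the restarted kubelet (pid 2499) is the KubeletCgroupDriverFromCRI probe: containerd v1.7.21 predates the RuntimeConfig RPC, so the kubelet falls back to the cgroupDriver in its own config file, exactly as the following log line says. A hedged sketch of that probe-and-fallback shape, assuming the cri-api v1 client and enum names from recent cri-api releases:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)

	driver := "systemd" // fallback: the kubelet config's cgroupDriver
	resp, err := rt.RuntimeConfig(context.Background(), &runtime.RuntimeConfigRequest{})
	switch {
	case err == nil && resp.GetLinux() != nil:
		// Newer runtimes report their cgroup driver directly over CRI.
		if resp.GetLinux().GetCgroupDriver() == runtime.CgroupDriver_CGROUPFS {
			driver = "cgroupfs"
		}
	case status.Code(err) == codes.Unimplemented:
		fmt.Println("RuntimeConfig unimplemented; falling back to kubelet config")
	default:
		panic(err)
	}
	fmt.Println("cgroup driver:", driver)
}
```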
Jul 6 23:57:59.555859 kubelet[2499]: I0706 23:57:59.555799 2499 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:57:59.556297 kubelet[2499]: I0706 23:57:59.556237 2499 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:57:59.556486 kubelet[2499]: I0706 23:57:59.556288 2499 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-b-aec8669192","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:57:59.556604 kubelet[2499]: I0706 23:57:59.556492 2499 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:57:59.556604 kubelet[2499]: I0706 23:57:59.556504 2499 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:57:59.556604 kubelet[2499]: I0706 23:57:59.556556 2499 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:57:59.556797 kubelet[2499]: I0706 23:57:59.556782 2499 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:57:59.560180 kubelet[2499]: I0706 23:57:59.559950 2499 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:57:59.560180 kubelet[2499]: I0706 23:57:59.560052 2499 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:57:59.560180 kubelet[2499]: I0706 23:57:59.560071 2499 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:57:59.575908 kubelet[2499]: I0706 23:57:59.574620 2499 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:57:59.575908 kubelet[2499]: I0706 23:57:59.575272 2499 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:57:59.586707 kubelet[2499]: I0706 23:57:59.586482 2499 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:57:59.586707 kubelet[2499]: I0706 23:57:59.586535 2499 server.go:1287] "Started kubelet" Jul 6 23:57:59.590824 kubelet[2499]: I0706 23:57:59.590789 2499 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:57:59.599729 kubelet[2499]: I0706 23:57:59.598820 2499 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:57:59.602287 kubelet[2499]: I0706 23:57:59.602246 2499 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:57:59.603406 kubelet[2499]: I0706 23:57:59.603371 2499 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:57:59.603766 kubelet[2499]: I0706 23:57:59.603752 2499 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:57:59.607561 kubelet[2499]: I0706 23:57:59.607531 2499 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:57:59.607981 kubelet[2499]: I0706 23:57:59.607948 2499 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:57:59.609560 kubelet[2499]: I0706 23:57:59.609537 2499 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:57:59.611792 kubelet[2499]: I0706 23:57:59.611736 2499 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:57:59.613191 kubelet[2499]: I0706 23:57:59.613150 2499 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:57:59.615280 kubelet[2499]: I0706 23:57:59.614524 2499 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:57:59.615280 kubelet[2499]: I0706 23:57:59.615004 2499 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:57:59.624181 kubelet[2499]: I0706 23:57:59.624108 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:57:59.626537 kubelet[2499]: I0706 23:57:59.626483 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:57:59.626769 kubelet[2499]: I0706 23:57:59.626753 2499 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:57:59.626865 kubelet[2499]: I0706 23:57:59.626854 2499 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:57:59.626917 kubelet[2499]: I0706 23:57:59.626910 2499 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:57:59.627149 kubelet[2499]: E0706 23:57:59.627123 2499 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:57:59.665815 kubelet[2499]: E0706 23:57:59.665618 2499 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:57:59.716329 sudo[2531]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:57:59.717244 sudo[2531]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:57:59.727840 kubelet[2499]: E0706 23:57:59.727791 2499 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:57:59.767037 kubelet[2499]: I0706 23:57:59.765620 2499 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:57:59.767037 kubelet[2499]: I0706 23:57:59.765648 2499 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:57:59.767037 kubelet[2499]: I0706 23:57:59.765687 2499 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:57:59.767037 kubelet[2499]: I0706 23:57:59.765867 2499 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:57:59.767037 kubelet[2499]: I0706 23:57:59.765878 2499 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:57:59.767037 kubelet[2499]: I0706 23:57:59.765897 2499 policy_none.go:49] "None policy: Start" Jul 6 23:57:59.767037 kubelet[2499]: I0706 23:57:59.765909 2499 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:57:59.767037 kubelet[2499]: I0706 23:57:59.765919 2499 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:57:59.767037 kubelet[2499]: I0706 23:57:59.766035 2499 state_mem.go:75] "Updated machine memory state" Jul 6 23:57:59.773682 kubelet[2499]: I0706 23:57:59.773616 2499 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:57:59.774289 kubelet[2499]: I0706 23:57:59.773949 2499 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:57:59.774289 kubelet[2499]: I0706 23:57:59.773989 2499 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:57:59.777665 kubelet[2499]: I0706 23:57:59.777617 2499 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:57:59.788234 kubelet[2499]: E0706 23:57:59.783076 2499 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:57:59.879517 kubelet[2499]: I0706 23:57:59.879262 2499 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:59.905427 kubelet[2499]: I0706 23:57:59.905375 2499 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:59.905623 kubelet[2499]: I0706 23:57:59.905486 2499 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.4-b-aec8669192" Jul 6 23:57:59.938800 kubelet[2499]: I0706 23:57:59.935887 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.4-b-aec8669192" Jul 6 23:57:59.938800 kubelet[2499]: I0706 23:57:59.936427 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:57:59.938800 kubelet[2499]: I0706 23:57:59.937695 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:57:59.949135 kubelet[2499]: W0706 23:57:59.949080 2499 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:57:59.956987 kubelet[2499]: W0706 23:57:59.956936 2499 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:57:59.958343 kubelet[2499]: W0706 23:57:59.956936 2499 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:57:59.958604 kubelet[2499]: E0706 23:57:59.958567 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-b-aec8669192\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.015445 kubelet[2499]: I0706 23:58:00.014177 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e13c385cffa16c2d4fb4a66a8950e3dd-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-b-aec8669192\" (UID: \"e13c385cffa16c2d4fb4a66a8950e3dd\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.015445 kubelet[2499]: I0706 23:58:00.014245 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e13c385cffa16c2d4fb4a66a8950e3dd-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-b-aec8669192\" (UID: \"e13c385cffa16c2d4fb4a66a8950e3dd\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.015445 kubelet[2499]: I0706 23:58:00.014283 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e13c385cffa16c2d4fb4a66a8950e3dd-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-b-aec8669192\" (UID: \"e13c385cffa16c2d4fb4a66a8950e3dd\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.015445 kubelet[2499]: I0706 23:58:00.014313 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e13c385cffa16c2d4fb4a66a8950e3dd-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081.3.4-b-aec8669192\" (UID: \"e13c385cffa16c2d4fb4a66a8950e3dd\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.015445 kubelet[2499]: I0706 23:58:00.014343 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a84577dab14328e2f2fc6a900315365-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-b-aec8669192\" (UID: \"3a84577dab14328e2f2fc6a900315365\") " pod="kube-system/kube-scheduler-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.015860 kubelet[2499]: I0706 23:58:00.014379 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/682b143e96cc2458e90dd982539cd976-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-b-aec8669192\" (UID: \"682b143e96cc2458e90dd982539cd976\") " pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.015860 kubelet[2499]: I0706 23:58:00.014408 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/682b143e96cc2458e90dd982539cd976-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-b-aec8669192\" (UID: \"682b143e96cc2458e90dd982539cd976\") " pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.015860 kubelet[2499]: I0706 23:58:00.014438 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e13c385cffa16c2d4fb4a66a8950e3dd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-b-aec8669192\" (UID: \"e13c385cffa16c2d4fb4a66a8950e3dd\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.015860 kubelet[2499]: I0706 23:58:00.014485 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/682b143e96cc2458e90dd982539cd976-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-b-aec8669192\" (UID: \"682b143e96cc2458e90dd982539cd976\") " pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.252296 kubelet[2499]: E0706 23:58:00.250972 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:00.259233 kubelet[2499]: E0706 23:58:00.259054 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:00.260950 kubelet[2499]: E0706 23:58:00.260279 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:00.561761 kubelet[2499]: I0706 23:58:00.561435 2499 apiserver.go:52] "Watching apiserver" Jul 6 23:58:00.604831 kubelet[2499]: I0706 23:58:00.604779 2499 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:58:00.704251 kubelet[2499]: E0706 23:58:00.704190 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:00.705951 
kubelet[2499]: E0706 23:58:00.705140 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:00.705951 kubelet[2499]: I0706 23:58:00.705243 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.720425 kubelet[2499]: W0706 23:58:00.719955 2499 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:58:00.720425 kubelet[2499]: E0706 23:58:00.720058 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.4-b-aec8669192\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" Jul 6 23:58:00.721689 kubelet[2499]: E0706 23:58:00.721638 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:00.789326 kubelet[2499]: I0706 23:58:00.789217 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-b-aec8669192" podStartSLOduration=3.789187819 podStartE2EDuration="3.789187819s" podCreationTimestamp="2025-07-06 23:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:00.774526396 +0000 UTC m=+1.342862561" watchObservedRunningTime="2025-07-06 23:58:00.789187819 +0000 UTC m=+1.357523978" Jul 6 23:58:00.794962 sudo[2531]: pam_unix(sudo:session): session closed for user root Jul 6 23:58:00.805541 kubelet[2499]: I0706 23:58:00.805303 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-b-aec8669192" podStartSLOduration=1.805278328 podStartE2EDuration="1.805278328s" podCreationTimestamp="2025-07-06 23:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:00.790366542 +0000 UTC m=+1.358702706" watchObservedRunningTime="2025-07-06 23:58:00.805278328 +0000 UTC m=+1.373614494" Jul 6 23:58:01.706163 kubelet[2499]: E0706 23:58:01.706049 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:01.709619 kubelet[2499]: E0706 23:58:01.707467 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:02.707170 sudo[1650]: pam_unix(sudo:session): session closed for user root Jul 6 23:58:02.711875 sshd[1647]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:02.718069 systemd[1]: sshd@6-64.23.136.149:22-139.178.89.65:40600.service: Deactivated successfully. Jul 6 23:58:02.722678 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:58:02.723406 systemd[1]: session-7.scope: Consumed 5.664s CPU time, 142.5M memory peak, 0B memory swap peak. Jul 6 23:58:02.724289 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:58:02.725942 systemd-logind[1445]: Removed session 7. 
Jul 6 23:58:03.028180 kubelet[2499]: E0706 23:58:03.028129 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:04.520041 kubelet[2499]: I0706 23:58:04.519990 2499 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:58:04.521384 containerd[1451]: time="2025-07-06T23:58:04.520659854Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:58:04.522089 kubelet[2499]: I0706 23:58:04.522041 2499 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:58:05.193245 kubelet[2499]: E0706 23:58:05.193029 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:05.212302 kubelet[2499]: I0706 23:58:05.212226 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-b-aec8669192" podStartSLOduration=6.212181652 podStartE2EDuration="6.212181652s" podCreationTimestamp="2025-07-06 23:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:00.805881807 +0000 UTC m=+1.374217974" watchObservedRunningTime="2025-07-06 23:58:05.212181652 +0000 UTC m=+5.780517806" Jul 6 23:58:05.275817 systemd[1]: Created slice kubepods-besteffort-pod8e2f9894_2c25_4f97_aa56_bb6f96022691.slice - libcontainer container kubepods-besteffort-pod8e2f9894_2c25_4f97_aa56_bb6f96022691.slice. Jul 6 23:58:05.307546 systemd[1]: Created slice kubepods-burstable-pod1d65d0f3_c375_4185_8a1a_8abf652aaeb2.slice - libcontainer container kubepods-burstable-pod1d65d0f3_c375_4185_8a1a_8abf652aaeb2.slice. 
Jul 6 23:58:05.350865 kubelet[2499]: I0706 23:58:05.349792 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmm95\" (UniqueName: \"kubernetes.io/projected/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-kube-api-access-pmm95\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.350865 kubelet[2499]: I0706 23:58:05.350483 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-bpf-maps\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.350865 kubelet[2499]: I0706 23:58:05.350602 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-hostproc\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.350865 kubelet[2499]: I0706 23:58:05.350634 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e2f9894-2c25-4f97-aa56-bb6f96022691-kube-proxy\") pod \"kube-proxy-qbv29\" (UID: \"8e2f9894-2c25-4f97-aa56-bb6f96022691\") " pod="kube-system/kube-proxy-qbv29" Jul 6 23:58:05.350865 kubelet[2499]: I0706 23:58:05.350753 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-host-proc-sys-net\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.350865 kubelet[2499]: I0706 23:58:05.350864 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-run\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.351288 kubelet[2499]: I0706 23:58:05.350891 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-clustermesh-secrets\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.351288 kubelet[2499]: I0706 23:58:05.350937 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-xtables-lock\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.351288 kubelet[2499]: I0706 23:58:05.350962 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-lib-modules\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.351288 kubelet[2499]: I0706 23:58:05.350982 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/8e2f9894-2c25-4f97-aa56-bb6f96022691-xtables-lock\") pod \"kube-proxy-qbv29\" (UID: \"8e2f9894-2c25-4f97-aa56-bb6f96022691\") " pod="kube-system/kube-proxy-qbv29" Jul 6 23:58:05.351288 kubelet[2499]: I0706 23:58:05.351002 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cni-path\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.351288 kubelet[2499]: I0706 23:58:05.351024 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm7hs\" (UniqueName: \"kubernetes.io/projected/8e2f9894-2c25-4f97-aa56-bb6f96022691-kube-api-access-qm7hs\") pod \"kube-proxy-qbv29\" (UID: \"8e2f9894-2c25-4f97-aa56-bb6f96022691\") " pod="kube-system/kube-proxy-qbv29" Jul 6 23:58:05.351545 kubelet[2499]: I0706 23:58:05.351076 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e2f9894-2c25-4f97-aa56-bb6f96022691-lib-modules\") pod \"kube-proxy-qbv29\" (UID: \"8e2f9894-2c25-4f97-aa56-bb6f96022691\") " pod="kube-system/kube-proxy-qbv29" Jul 6 23:58:05.351545 kubelet[2499]: I0706 23:58:05.351100 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-cgroup\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.351545 kubelet[2499]: I0706 23:58:05.351140 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-etc-cni-netd\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.351545 kubelet[2499]: I0706 23:58:05.351162 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-hubble-tls\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.351545 kubelet[2499]: I0706 23:58:05.351195 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-config-path\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.351545 kubelet[2499]: I0706 23:58:05.351218 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-host-proc-sys-kernel\") pod \"cilium-4q62l\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " pod="kube-system/cilium-4q62l" Jul 6 23:58:05.577449 kubelet[2499]: I0706 23:58:05.577390 2499 status_manager.go:890] "Failed to get status for pod" podUID="fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029" pod="kube-system/cilium-operator-6c4d7847fc-rr7cw" err="pods \"cilium-operator-6c4d7847fc-rr7cw\" is forbidden: User \"system:node:ci-4081.3.4-b-aec8669192\" cannot get resource \"pods\" in 
API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.4-b-aec8669192' and this object" Jul 6 23:58:05.582643 systemd[1]: Created slice kubepods-besteffort-podfc5ce6ed_c879_4eb7_b0d2_2e7c03e9f029.slice - libcontainer container kubepods-besteffort-podfc5ce6ed_c879_4eb7_b0d2_2e7c03e9f029.slice. Jul 6 23:58:05.587791 kubelet[2499]: E0706 23:58:05.587752 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:05.589032 containerd[1451]: time="2025-07-06T23:58:05.588993732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbv29,Uid:8e2f9894-2c25-4f97-aa56-bb6f96022691,Namespace:kube-system,Attempt:0,}" Jul 6 23:58:05.617761 kubelet[2499]: E0706 23:58:05.613975 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:05.618120 containerd[1451]: time="2025-07-06T23:58:05.616913463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4q62l,Uid:1d65d0f3-c375-4185-8a1a-8abf652aaeb2,Namespace:kube-system,Attempt:0,}" Jul 6 23:58:05.636332 containerd[1451]: time="2025-07-06T23:58:05.635374751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:05.636332 containerd[1451]: time="2025-07-06T23:58:05.635460888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:05.636332 containerd[1451]: time="2025-07-06T23:58:05.635475918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:05.636332 containerd[1451]: time="2025-07-06T23:58:05.635782065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:05.654209 kubelet[2499]: I0706 23:58:05.654080 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029-kube-api-access-m56dd\") pod \"cilium-operator-6c4d7847fc-rr7cw\" (UID: \"fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029\") " pod="kube-system/cilium-operator-6c4d7847fc-rr7cw" Jul 6 23:58:05.654209 kubelet[2499]: I0706 23:58:05.654128 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rr7cw\" (UID: \"fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029\") " pod="kube-system/cilium-operator-6c4d7847fc-rr7cw" Jul 6 23:58:05.663381 systemd[1]: Started cri-containerd-e2e5a4fff89329fca0d83c8e5704ad73bf4b940c6ac0ef1e4407fced2c2078f3.scope - libcontainer container e2e5a4fff89329fca0d83c8e5704ad73bf4b940c6ac0ef1e4407fced2c2078f3. Jul 6 23:58:05.683775 containerd[1451]: time="2025-07-06T23:58:05.683629983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:05.685898 containerd[1451]: time="2025-07-06T23:58:05.684432320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:05.687405 containerd[1451]: time="2025-07-06T23:58:05.686734239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:05.687405 containerd[1451]: time="2025-07-06T23:58:05.686966739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:05.717433 kubelet[2499]: E0706 23:58:05.717251 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:05.726970 systemd[1]: Started cri-containerd-a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03.scope - libcontainer container a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03. Jul 6 23:58:05.729193 containerd[1451]: time="2025-07-06T23:58:05.728098365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbv29,Uid:8e2f9894-2c25-4f97-aa56-bb6f96022691,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2e5a4fff89329fca0d83c8e5704ad73bf4b940c6ac0ef1e4407fced2c2078f3\"" Jul 6 23:58:05.731396 kubelet[2499]: E0706 23:58:05.731364 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:05.736336 containerd[1451]: time="2025-07-06T23:58:05.736280773Z" level=info msg="CreateContainer within sandbox \"e2e5a4fff89329fca0d83c8e5704ad73bf4b940c6ac0ef1e4407fced2c2078f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:58:05.777593 containerd[1451]: time="2025-07-06T23:58:05.777455959Z" level=info msg="CreateContainer within sandbox \"e2e5a4fff89329fca0d83c8e5704ad73bf4b940c6ac0ef1e4407fced2c2078f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"161ca46f831fa98982de38ab1392b6cd5961a7448d6f1ac4f53e698525c883be\"" Jul 6 23:58:05.778957 containerd[1451]: time="2025-07-06T23:58:05.778903300Z" level=info msg="StartContainer for \"161ca46f831fa98982de38ab1392b6cd5961a7448d6f1ac4f53e698525c883be\"" Jul 6 23:58:05.819359 containerd[1451]: time="2025-07-06T23:58:05.819320451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4q62l,Uid:1d65d0f3-c375-4185-8a1a-8abf652aaeb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\"" Jul 6 23:58:05.820801 kubelet[2499]: E0706 23:58:05.820648 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:05.824868 containerd[1451]: time="2025-07-06T23:58:05.824312134Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:58:05.887123 kubelet[2499]: E0706 23:58:05.886915 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:05.889155 containerd[1451]: time="2025-07-06T23:58:05.888550574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rr7cw,Uid:fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029,Namespace:kube-system,Attempt:0,}" Jul 6 
23:58:05.896788 systemd[1]: Started cri-containerd-161ca46f831fa98982de38ab1392b6cd5961a7448d6f1ac4f53e698525c883be.scope - libcontainer container 161ca46f831fa98982de38ab1392b6cd5961a7448d6f1ac4f53e698525c883be. Jul 6 23:58:05.958589 containerd[1451]: time="2025-07-06T23:58:05.958256390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:05.958589 containerd[1451]: time="2025-07-06T23:58:05.958548196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:05.959114 containerd[1451]: time="2025-07-06T23:58:05.958827436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:05.959114 containerd[1451]: time="2025-07-06T23:58:05.958979821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:05.975266 containerd[1451]: time="2025-07-06T23:58:05.975156868Z" level=info msg="StartContainer for \"161ca46f831fa98982de38ab1392b6cd5961a7448d6f1ac4f53e698525c883be\" returns successfully" Jul 6 23:58:05.998015 systemd[1]: Started cri-containerd-f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a.scope - libcontainer container f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a. Jul 6 23:58:06.081966 containerd[1451]: time="2025-07-06T23:58:06.080086392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rr7cw,Uid:fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029,Namespace:kube-system,Attempt:0,} returns sandbox id \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\"" Jul 6 23:58:06.083528 kubelet[2499]: E0706 23:58:06.082986 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:06.727841 kubelet[2499]: E0706 23:58:06.727792 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:06.747737 kubelet[2499]: I0706 23:58:06.747157 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qbv29" podStartSLOduration=1.747128078 podStartE2EDuration="1.747128078s" podCreationTimestamp="2025-07-06 23:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:06.746549962 +0000 UTC m=+7.314886131" watchObservedRunningTime="2025-07-06 23:58:06.747128078 +0000 UTC m=+7.315464248" Jul 6 23:58:07.552676 kubelet[2499]: E0706 23:58:07.552370 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:07.729220 kubelet[2499]: E0706 23:58:07.728123 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:10.661880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63911805.mount: Deactivated successfully. 
Jul 6 23:58:13.052928 kubelet[2499]: E0706 23:58:13.052883 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:13.432757 containerd[1451]: time="2025-07-06T23:58:13.432099117Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:13.432757 containerd[1451]: time="2025-07-06T23:58:13.432184719Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 6 23:58:13.435021 containerd[1451]: time="2025-07-06T23:58:13.434606969Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:13.437023 containerd[1451]: time="2025-07-06T23:58:13.436964503Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.61258383s" Jul 6 23:58:13.437302 containerd[1451]: time="2025-07-06T23:58:13.437164006Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 6 23:58:13.439618 containerd[1451]: time="2025-07-06T23:58:13.439187502Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:58:13.441636 containerd[1451]: time="2025-07-06T23:58:13.441552894Z" level=info msg="CreateContainer within sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:58:13.558650 containerd[1451]: time="2025-07-06T23:58:13.558572928Z" level=info msg="CreateContainer within sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\"" Jul 6 23:58:13.559602 containerd[1451]: time="2025-07-06T23:58:13.559561076Z" level=info msg="StartContainer for \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\"" Jul 6 23:58:13.758800 kubelet[2499]: E0706 23:58:13.758673 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:13.785067 systemd[1]: Started cri-containerd-8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd.scope - libcontainer container 8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd. 
Jul 6 23:58:13.820604 containerd[1451]: time="2025-07-06T23:58:13.820533843Z" level=info msg="StartContainer for \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\" returns successfully" Jul 6 23:58:13.845592 systemd[1]: cri-containerd-8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd.scope: Deactivated successfully. Jul 6 23:58:13.976907 containerd[1451]: time="2025-07-06T23:58:13.964430084Z" level=info msg="shim disconnected" id=8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd namespace=k8s.io Jul 6 23:58:13.976907 containerd[1451]: time="2025-07-06T23:58:13.976709376Z" level=warning msg="cleaning up after shim disconnected" id=8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd namespace=k8s.io Jul 6 23:58:13.976907 containerd[1451]: time="2025-07-06T23:58:13.976728421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:14.524856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd-rootfs.mount: Deactivated successfully. Jul 6 23:58:14.637298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount179518386.mount: Deactivated successfully. Jul 6 23:58:14.764386 kubelet[2499]: E0706 23:58:14.764295 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:14.774355 containerd[1451]: time="2025-07-06T23:58:14.774276979Z" level=info msg="CreateContainer within sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:58:14.799944 containerd[1451]: time="2025-07-06T23:58:14.799546586Z" level=info msg="CreateContainer within sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\"" Jul 6 23:58:14.802696 containerd[1451]: time="2025-07-06T23:58:14.801955897Z" level=info msg="StartContainer for \"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\"" Jul 6 23:58:14.865941 systemd[1]: Started cri-containerd-5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901.scope - libcontainer container 5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901. Jul 6 23:58:14.923136 containerd[1451]: time="2025-07-06T23:58:14.923063854Z" level=info msg="StartContainer for \"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\" returns successfully" Jul 6 23:58:14.944280 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:58:14.944608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:58:14.945337 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:58:14.955124 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:58:14.960039 systemd[1]: cri-containerd-5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901.scope: Deactivated successfully. Jul 6 23:58:15.027032 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:58:15.040003 containerd[1451]: time="2025-07-06T23:58:15.039901500Z" level=info msg="shim disconnected" id=5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901 namespace=k8s.io Jul 6 23:58:15.040279 containerd[1451]: time="2025-07-06T23:58:15.039995815Z" level=warning msg="cleaning up after shim disconnected" id=5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901 namespace=k8s.io Jul 6 23:58:15.040279 containerd[1451]: time="2025-07-06T23:58:15.040028608Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:15.100257 containerd[1451]: time="2025-07-06T23:58:15.100080237Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:58:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:58:15.495966 containerd[1451]: time="2025-07-06T23:58:15.495888693Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:15.497155 containerd[1451]: time="2025-07-06T23:58:15.497079398Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 6 23:58:15.498023 containerd[1451]: time="2025-07-06T23:58:15.497422378Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:15.499818 containerd[1451]: time="2025-07-06T23:58:15.499770147Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.060549233s" Jul 6 23:58:15.499818 containerd[1451]: time="2025-07-06T23:58:15.499819929Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 6 23:58:15.504336 containerd[1451]: time="2025-07-06T23:58:15.504279774Z" level=info msg="CreateContainer within sandbox \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:58:15.528979 containerd[1451]: time="2025-07-06T23:58:15.528911998Z" level=info msg="CreateContainer within sandbox \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\"" Jul 6 23:58:15.531049 containerd[1451]: time="2025-07-06T23:58:15.529775130Z" level=info msg="StartContainer for \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\"" Jul 6 23:58:15.575961 systemd[1]: Started cri-containerd-97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054.scope - libcontainer container 97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054. 
Jul 6 23:58:15.622007 containerd[1451]: time="2025-07-06T23:58:15.621955419Z" level=info msg="StartContainer for \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\" returns successfully" Jul 6 23:58:15.776337 kubelet[2499]: E0706 23:58:15.776195 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:15.784274 kubelet[2499]: E0706 23:58:15.782081 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:15.787597 containerd[1451]: time="2025-07-06T23:58:15.787537820Z" level=info msg="CreateContainer within sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:58:15.826770 containerd[1451]: time="2025-07-06T23:58:15.826382441Z" level=info msg="CreateContainer within sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\"" Jul 6 23:58:15.828762 containerd[1451]: time="2025-07-06T23:58:15.827539359Z" level=info msg="StartContainer for \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\"" Jul 6 23:58:15.878890 systemd[1]: Started cri-containerd-aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826.scope - libcontainer container aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826. Jul 6 23:58:15.988266 containerd[1451]: time="2025-07-06T23:58:15.988206582Z" level=info msg="StartContainer for \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\" returns successfully" Jul 6 23:58:16.019976 systemd[1]: cri-containerd-aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826.scope: Deactivated successfully. Jul 6 23:58:16.064571 containerd[1451]: time="2025-07-06T23:58:16.064230843Z" level=info msg="shim disconnected" id=aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826 namespace=k8s.io Jul 6 23:58:16.064571 containerd[1451]: time="2025-07-06T23:58:16.064309509Z" level=warning msg="cleaning up after shim disconnected" id=aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826 namespace=k8s.io Jul 6 23:58:16.064571 containerd[1451]: time="2025-07-06T23:58:16.064318977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:16.072645 kubelet[2499]: I0706 23:58:16.072565 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rr7cw" podStartSLOduration=1.654917623 podStartE2EDuration="11.071841628s" podCreationTimestamp="2025-07-06 23:58:05 +0000 UTC" firstStartedPulling="2025-07-06 23:58:06.084632743 +0000 UTC m=+6.652968900" lastFinishedPulling="2025-07-06 23:58:15.501556746 +0000 UTC m=+16.069892905" observedRunningTime="2025-07-06 23:58:15.915707754 +0000 UTC m=+16.484043918" watchObservedRunningTime="2025-07-06 23:58:16.071841628 +0000 UTC m=+16.640177790" Jul 6 23:58:16.362101 update_engine[1446]: I20250706 23:58:16.361799 1446 update_attempter.cc:509] Updating boot flags... 
Jul 6 23:58:16.495194 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3135) Jul 6 23:58:16.532399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826-rootfs.mount: Deactivated successfully. Jul 6 23:58:16.670481 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3138) Jul 6 23:58:16.781973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3138) Jul 6 23:58:16.801704 kubelet[2499]: E0706 23:58:16.801124 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:16.803030 kubelet[2499]: E0706 23:58:16.802818 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:16.819877 containerd[1451]: time="2025-07-06T23:58:16.819024584Z" level=info msg="CreateContainer within sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:58:16.873244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929512794.mount: Deactivated successfully. Jul 6 23:58:16.878501 containerd[1451]: time="2025-07-06T23:58:16.878308847Z" level=info msg="CreateContainer within sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\"" Jul 6 23:58:16.896169 containerd[1451]: time="2025-07-06T23:58:16.895994405Z" level=info msg="StartContainer for \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\"" Jul 6 23:58:16.988962 systemd[1]: Started cri-containerd-fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402.scope - libcontainer container fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402. Jul 6 23:58:17.082196 containerd[1451]: time="2025-07-06T23:58:17.081981709Z" level=info msg="StartContainer for \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\" returns successfully" Jul 6 23:58:17.089259 systemd[1]: cri-containerd-fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402.scope: Deactivated successfully. Jul 6 23:58:17.177641 containerd[1451]: time="2025-07-06T23:58:17.177307051Z" level=info msg="shim disconnected" id=fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402 namespace=k8s.io Jul 6 23:58:17.177641 containerd[1451]: time="2025-07-06T23:58:17.177393190Z" level=warning msg="cleaning up after shim disconnected" id=fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402 namespace=k8s.io Jul 6 23:58:17.177641 containerd[1451]: time="2025-07-06T23:58:17.177410973Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:17.527643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402-rootfs.mount: Deactivated successfully. 
Jul 6 23:58:17.807188 kubelet[2499]: E0706 23:58:17.807067 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:17.813285 containerd[1451]: time="2025-07-06T23:58:17.811707075Z" level=info msg="CreateContainer within sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:58:17.854437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3579750608.mount: Deactivated successfully. Jul 6 23:58:17.857163 containerd[1451]: time="2025-07-06T23:58:17.857074932Z" level=info msg="CreateContainer within sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\"" Jul 6 23:58:17.858021 containerd[1451]: time="2025-07-06T23:58:17.857972643Z" level=info msg="StartContainer for \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\"" Jul 6 23:58:17.912962 systemd[1]: Started cri-containerd-bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf.scope - libcontainer container bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf. Jul 6 23:58:17.976731 containerd[1451]: time="2025-07-06T23:58:17.976417135Z" level=info msg="StartContainer for \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\" returns successfully" Jul 6 23:58:18.274927 kubelet[2499]: I0706 23:58:18.274880 2499 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:58:18.355318 systemd[1]: Created slice kubepods-burstable-pod3da282bd_b582_4bb9_aa11_57ba9cbf3494.slice - libcontainer container kubepods-burstable-pod3da282bd_b582_4bb9_aa11_57ba9cbf3494.slice. Jul 6 23:58:18.369289 systemd[1]: Created slice kubepods-burstable-podee840ee9_54b9_41fc_993b_fbc024b6bd52.slice - libcontainer container kubepods-burstable-podee840ee9_54b9_41fc_993b_fbc024b6bd52.slice. 
Jul 6 23:58:18.468348 kubelet[2499]: I0706 23:58:18.468278 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3da282bd-b582-4bb9-aa11-57ba9cbf3494-config-volume\") pod \"coredns-668d6bf9bc-vtwxv\" (UID: \"3da282bd-b582-4bb9-aa11-57ba9cbf3494\") " pod="kube-system/coredns-668d6bf9bc-vtwxv" Jul 6 23:58:18.468348 kubelet[2499]: I0706 23:58:18.468351 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee840ee9-54b9-41fc-993b-fbc024b6bd52-config-volume\") pod \"coredns-668d6bf9bc-xnmw8\" (UID: \"ee840ee9-54b9-41fc-993b-fbc024b6bd52\") " pod="kube-system/coredns-668d6bf9bc-xnmw8" Jul 6 23:58:18.468628 kubelet[2499]: I0706 23:58:18.468389 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl8sq\" (UniqueName: \"kubernetes.io/projected/ee840ee9-54b9-41fc-993b-fbc024b6bd52-kube-api-access-jl8sq\") pod \"coredns-668d6bf9bc-xnmw8\" (UID: \"ee840ee9-54b9-41fc-993b-fbc024b6bd52\") " pod="kube-system/coredns-668d6bf9bc-xnmw8" Jul 6 23:58:18.468628 kubelet[2499]: I0706 23:58:18.468434 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlzmq\" (UniqueName: \"kubernetes.io/projected/3da282bd-b582-4bb9-aa11-57ba9cbf3494-kube-api-access-qlzmq\") pod \"coredns-668d6bf9bc-vtwxv\" (UID: \"3da282bd-b582-4bb9-aa11-57ba9cbf3494\") " pod="kube-system/coredns-668d6bf9bc-vtwxv" Jul 6 23:58:18.666628 kubelet[2499]: E0706 23:58:18.666449 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:18.669278 containerd[1451]: time="2025-07-06T23:58:18.669212285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vtwxv,Uid:3da282bd-b582-4bb9-aa11-57ba9cbf3494,Namespace:kube-system,Attempt:0,}" Jul 6 23:58:18.675560 kubelet[2499]: E0706 23:58:18.675497 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:18.678748 containerd[1451]: time="2025-07-06T23:58:18.677300559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xnmw8,Uid:ee840ee9-54b9-41fc-993b-fbc024b6bd52,Namespace:kube-system,Attempt:0,}" Jul 6 23:58:18.867686 kubelet[2499]: E0706 23:58:18.863179 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:19.864112 kubelet[2499]: E0706 23:58:19.863989 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:20.795284 systemd-networkd[1368]: cilium_host: Link UP Jul 6 23:58:20.799813 systemd-networkd[1368]: cilium_net: Link UP Jul 6 23:58:20.800160 systemd-networkd[1368]: cilium_net: Gained carrier Jul 6 23:58:20.800386 systemd-networkd[1368]: cilium_host: Gained carrier Jul 6 23:58:20.869707 kubelet[2499]: E0706 23:58:20.868218 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:21.000882 systemd-networkd[1368]: cilium_vxlan: Link UP Jul 6 23:58:21.000895 systemd-networkd[1368]: cilium_vxlan: Gained carrier Jul 6 23:58:21.204040 systemd-networkd[1368]: cilium_host: Gained IPv6LL Jul 6 23:58:21.340702 systemd-networkd[1368]: cilium_net: Gained IPv6LL Jul 6 23:58:21.461973 kernel: NET: Registered PF_ALG protocol family Jul 6 23:58:22.535526 systemd-networkd[1368]: lxc_health: Link UP Jul 6 23:58:22.563397 systemd-networkd[1368]: lxc_health: Gained carrier Jul 6 23:58:22.747858 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL Jul 6 23:58:22.828676 kernel: eth0: renamed from tmp5314e Jul 6 23:58:22.824188 systemd-networkd[1368]: lxcb95eaae4066a: Link UP Jul 6 23:58:22.841282 systemd-networkd[1368]: lxcb95eaae4066a: Gained carrier Jul 6 23:58:22.845314 systemd-networkd[1368]: lxc0e5081e541bd: Link UP Jul 6 23:58:22.857716 kernel: eth0: renamed from tmp61dc2 Jul 6 23:58:22.873076 systemd-networkd[1368]: lxc0e5081e541bd: Gained carrier Jul 6 23:58:23.619802 kubelet[2499]: E0706 23:58:23.619650 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:23.653137 kubelet[2499]: I0706 23:58:23.653051 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4q62l" podStartSLOduration=11.037659832 podStartE2EDuration="18.653031523s" podCreationTimestamp="2025-07-06 23:58:05 +0000 UTC" firstStartedPulling="2025-07-06 23:58:05.822965315 +0000 UTC m=+6.391301458" lastFinishedPulling="2025-07-06 23:58:13.438336991 +0000 UTC m=+14.006673149" observedRunningTime="2025-07-06 23:58:18.908154155 +0000 UTC m=+19.476490320" watchObservedRunningTime="2025-07-06 23:58:23.653031523 +0000 UTC m=+24.221367684" Jul 6 23:58:23.880212 kubelet[2499]: E0706 23:58:23.880004 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:24.091990 systemd-networkd[1368]: lxc_health: Gained IPv6LL Jul 6 23:58:24.667987 systemd-networkd[1368]: lxc0e5081e541bd: Gained IPv6LL Jul 6 23:58:24.669154 systemd-networkd[1368]: lxcb95eaae4066a: Gained IPv6LL Jul 6 23:58:24.882548 kubelet[2499]: E0706 23:58:24.882491 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:28.926570 containerd[1451]: time="2025-07-06T23:58:28.925760886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:28.926570 containerd[1451]: time="2025-07-06T23:58:28.925858104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:28.926570 containerd[1451]: time="2025-07-06T23:58:28.925874687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:28.934916 containerd[1451]: time="2025-07-06T23:58:28.926008858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:28.970980 systemd[1]: Started cri-containerd-5314ea90071a6124093ffebd264118d3363f7bd99d242b91347494951edcd06f.scope - libcontainer container 5314ea90071a6124093ffebd264118d3363f7bd99d242b91347494951edcd06f. Jul 6 23:58:29.017345 containerd[1451]: time="2025-07-06T23:58:29.017190126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:29.017723 containerd[1451]: time="2025-07-06T23:58:29.017677979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:29.018035 containerd[1451]: time="2025-07-06T23:58:29.017857993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:29.019058 containerd[1451]: time="2025-07-06T23:58:29.018885939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:29.070088 systemd[1]: Started cri-containerd-61dc20c26e352379551729be43c8b392eb8207eaeddd65a30b9810077d430344.scope - libcontainer container 61dc20c26e352379551729be43c8b392eb8207eaeddd65a30b9810077d430344. Jul 6 23:58:29.119958 containerd[1451]: time="2025-07-06T23:58:29.119749561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vtwxv,Uid:3da282bd-b582-4bb9-aa11-57ba9cbf3494,Namespace:kube-system,Attempt:0,} returns sandbox id \"5314ea90071a6124093ffebd264118d3363f7bd99d242b91347494951edcd06f\"" Jul 6 23:58:29.123476 kubelet[2499]: E0706 23:58:29.122972 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:29.128220 containerd[1451]: time="2025-07-06T23:58:29.128024545Z" level=info msg="CreateContainer within sandbox \"5314ea90071a6124093ffebd264118d3363f7bd99d242b91347494951edcd06f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:58:29.161499 containerd[1451]: time="2025-07-06T23:58:29.161443104Z" level=info msg="CreateContainer within sandbox \"5314ea90071a6124093ffebd264118d3363f7bd99d242b91347494951edcd06f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"234c452e6272c4864957a9fc1c6c915bd5a7aa54c7b96946391f5f28ca972509\"" Jul 6 23:58:29.165487 containerd[1451]: time="2025-07-06T23:58:29.163273330Z" level=info msg="StartContainer for \"234c452e6272c4864957a9fc1c6c915bd5a7aa54c7b96946391f5f28ca972509\"" Jul 6 23:58:29.234932 systemd[1]: Started cri-containerd-234c452e6272c4864957a9fc1c6c915bd5a7aa54c7b96946391f5f28ca972509.scope - libcontainer container 234c452e6272c4864957a9fc1c6c915bd5a7aa54c7b96946391f5f28ca972509. 
Jul 6 23:58:29.237046 containerd[1451]: time="2025-07-06T23:58:29.236979033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xnmw8,Uid:ee840ee9-54b9-41fc-993b-fbc024b6bd52,Namespace:kube-system,Attempt:0,} returns sandbox id \"61dc20c26e352379551729be43c8b392eb8207eaeddd65a30b9810077d430344\"" Jul 6 23:58:29.241303 kubelet[2499]: E0706 23:58:29.240590 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:29.247228 containerd[1451]: time="2025-07-06T23:58:29.246090057Z" level=info msg="CreateContainer within sandbox \"61dc20c26e352379551729be43c8b392eb8207eaeddd65a30b9810077d430344\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:58:29.265023 containerd[1451]: time="2025-07-06T23:58:29.264949751Z" level=info msg="CreateContainer within sandbox \"61dc20c26e352379551729be43c8b392eb8207eaeddd65a30b9810077d430344\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30e61df5e8c63ad65c06df80881f595e342abbc9d76e9811580baa72149e1eec\"" Jul 6 23:58:29.267324 containerd[1451]: time="2025-07-06T23:58:29.265888694Z" level=info msg="StartContainer for \"30e61df5e8c63ad65c06df80881f595e342abbc9d76e9811580baa72149e1eec\"" Jul 6 23:58:29.304514 containerd[1451]: time="2025-07-06T23:58:29.304352999Z" level=info msg="StartContainer for \"234c452e6272c4864957a9fc1c6c915bd5a7aa54c7b96946391f5f28ca972509\" returns successfully" Jul 6 23:58:29.324988 systemd[1]: Started cri-containerd-30e61df5e8c63ad65c06df80881f595e342abbc9d76e9811580baa72149e1eec.scope - libcontainer container 30e61df5e8c63ad65c06df80881f595e342abbc9d76e9811580baa72149e1eec. Jul 6 23:58:29.369288 containerd[1451]: time="2025-07-06T23:58:29.369101231Z" level=info msg="StartContainer for \"30e61df5e8c63ad65c06df80881f595e342abbc9d76e9811580baa72149e1eec\" returns successfully" Jul 6 23:58:29.897239 kubelet[2499]: E0706 23:58:29.897199 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:29.900396 kubelet[2499]: E0706 23:58:29.900360 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:29.918819 kubelet[2499]: I0706 23:58:29.917957 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xnmw8" podStartSLOduration=24.917934169 podStartE2EDuration="24.917934169s" podCreationTimestamp="2025-07-06 23:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:29.917871575 +0000 UTC m=+30.486207779" watchObservedRunningTime="2025-07-06 23:58:29.917934169 +0000 UTC m=+30.486270331" Jul 6 23:58:29.945167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432844987.mount: Deactivated successfully. 
Jul 6 23:58:29.994462 kubelet[2499]: I0706 23:58:29.994112 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vtwxv" podStartSLOduration=24.994079473 podStartE2EDuration="24.994079473s" podCreationTimestamp="2025-07-06 23:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:29.942516967 +0000 UTC m=+30.510853130" watchObservedRunningTime="2025-07-06 23:58:29.994079473 +0000 UTC m=+30.562415641" Jul 6 23:58:30.903347 kubelet[2499]: E0706 23:58:30.903164 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:30.904708 kubelet[2499]: E0706 23:58:30.904189 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:31.904853 kubelet[2499]: E0706 23:58:31.904480 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:31.904853 kubelet[2499]: E0706 23:58:31.904743 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:58:44.516136 systemd[1]: Started sshd@7-64.23.136.149:22-139.178.89.65:50408.service - OpenSSH per-connection server daemon (139.178.89.65:50408). Jul 6 23:58:44.608806 sshd[3880]: Accepted publickey for core from 139.178.89.65 port 50408 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:58:44.610773 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:44.619375 systemd-logind[1445]: New session 8 of user core. Jul 6 23:58:44.629147 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:58:45.310053 sshd[3880]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:45.316174 systemd[1]: sshd@7-64.23.136.149:22-139.178.89.65:50408.service: Deactivated successfully. Jul 6 23:58:45.319357 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:58:45.321039 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:58:45.323536 systemd-logind[1445]: Removed session 8. Jul 6 23:58:50.331248 systemd[1]: Started sshd@8-64.23.136.149:22-139.178.89.65:39408.service - OpenSSH per-connection server daemon (139.178.89.65:39408). Jul 6 23:58:50.390725 sshd[3894]: Accepted publickey for core from 139.178.89.65 port 39408 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:58:50.393148 sshd[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:50.401698 systemd-logind[1445]: New session 9 of user core. Jul 6 23:58:50.406003 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:58:50.574181 sshd[3894]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:50.581921 systemd[1]: sshd@8-64.23.136.149:22-139.178.89.65:39408.service: Deactivated successfully. Jul 6 23:58:50.585114 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:58:50.587370 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. 
Jul 6 23:58:50.589142 systemd-logind[1445]: Removed session 9. Jul 6 23:58:55.594187 systemd[1]: Started sshd@9-64.23.136.149:22-139.178.89.65:39424.service - OpenSSH per-connection server daemon (139.178.89.65:39424). Jul 6 23:58:55.640687 sshd[3908]: Accepted publickey for core from 139.178.89.65 port 39424 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:58:55.641593 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:55.650295 systemd-logind[1445]: New session 10 of user core. Jul 6 23:58:55.654990 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:58:55.825943 sshd[3908]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:55.843562 systemd[1]: sshd@9-64.23.136.149:22-139.178.89.65:39424.service: Deactivated successfully. Jul 6 23:58:55.847301 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:58:55.848675 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:58:55.850797 systemd-logind[1445]: Removed session 10. Jul 6 23:59:00.847200 systemd[1]: Started sshd@10-64.23.136.149:22-139.178.89.65:52416.service - OpenSSH per-connection server daemon (139.178.89.65:52416). Jul 6 23:59:00.914915 sshd[3925]: Accepted publickey for core from 139.178.89.65 port 52416 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:00.917117 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:00.924724 systemd-logind[1445]: New session 11 of user core. Jul 6 23:59:00.929924 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:59:01.102169 sshd[3925]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:01.107015 systemd[1]: sshd@10-64.23.136.149:22-139.178.89.65:52416.service: Deactivated successfully. Jul 6 23:59:01.110137 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:59:01.113344 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:59:01.115537 systemd-logind[1445]: Removed session 11. Jul 6 23:59:06.127616 systemd[1]: Started sshd@11-64.23.136.149:22-139.178.89.65:52424.service - OpenSSH per-connection server daemon (139.178.89.65:52424). Jul 6 23:59:06.176348 sshd[3939]: Accepted publickey for core from 139.178.89.65 port 52424 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:06.180615 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:06.188506 systemd-logind[1445]: New session 12 of user core. Jul 6 23:59:06.194963 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:59:06.350569 sshd[3939]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:06.364208 systemd[1]: sshd@11-64.23.136.149:22-139.178.89.65:52424.service: Deactivated successfully. Jul 6 23:59:06.368472 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:59:06.372173 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:59:06.379477 systemd[1]: Started sshd@12-64.23.136.149:22-139.178.89.65:52430.service - OpenSSH per-connection server daemon (139.178.89.65:52430). Jul 6 23:59:06.382292 systemd-logind[1445]: Removed session 12. 
Jul 6 23:59:06.435459 sshd[3953]: Accepted publickey for core from 139.178.89.65 port 52430 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:06.438112 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:06.446631 systemd-logind[1445]: New session 13 of user core. Jul 6 23:59:06.452043 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:59:06.691973 sshd[3953]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:06.708990 systemd[1]: sshd@12-64.23.136.149:22-139.178.89.65:52430.service: Deactivated successfully. Jul 6 23:59:06.717913 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:59:06.723105 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:59:06.732362 systemd[1]: Started sshd@13-64.23.136.149:22-139.178.89.65:52444.service - OpenSSH per-connection server daemon (139.178.89.65:52444). Jul 6 23:59:06.737080 systemd-logind[1445]: Removed session 13. Jul 6 23:59:06.821359 sshd[3966]: Accepted publickey for core from 139.178.89.65 port 52444 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:06.823911 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:06.832400 systemd-logind[1445]: New session 14 of user core. Jul 6 23:59:06.840025 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:59:07.013623 sshd[3966]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:07.019792 systemd[1]: sshd@13-64.23.136.149:22-139.178.89.65:52444.service: Deactivated successfully. Jul 6 23:59:07.022699 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:59:07.023712 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:59:07.024905 systemd-logind[1445]: Removed session 14. Jul 6 23:59:10.628156 kubelet[2499]: E0706 23:59:10.628108 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:12.044863 systemd[1]: Started sshd@14-64.23.136.149:22-139.178.89.65:41730.service - OpenSSH per-connection server daemon (139.178.89.65:41730). Jul 6 23:59:12.098119 sshd[3979]: Accepted publickey for core from 139.178.89.65 port 41730 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:12.100441 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:12.107691 systemd-logind[1445]: New session 15 of user core. Jul 6 23:59:12.115024 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:59:12.267954 sshd[3979]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:12.274988 systemd[1]: sshd@14-64.23.136.149:22-139.178.89.65:41730.service: Deactivated successfully. Jul 6 23:59:12.279395 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:59:12.281228 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:59:12.282613 systemd-logind[1445]: Removed session 15. 
Jul 6 23:59:13.628852 kubelet[2499]: E0706 23:59:13.628760 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:17.291199 systemd[1]: Started sshd@15-64.23.136.149:22-139.178.89.65:41734.service - OpenSSH per-connection server daemon (139.178.89.65:41734). Jul 6 23:59:17.346984 sshd[3992]: Accepted publickey for core from 139.178.89.65 port 41734 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:17.350856 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:17.359574 systemd-logind[1445]: New session 16 of user core. Jul 6 23:59:17.371112 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:59:17.532900 sshd[3992]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:17.539416 systemd[1]: sshd@15-64.23.136.149:22-139.178.89.65:41734.service: Deactivated successfully. Jul 6 23:59:17.544873 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:59:17.547251 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:59:17.548869 systemd-logind[1445]: Removed session 16. Jul 6 23:59:18.628410 kubelet[2499]: E0706 23:59:18.628366 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:22.561231 systemd[1]: Started sshd@16-64.23.136.149:22-139.178.89.65:33256.service - OpenSSH per-connection server daemon (139.178.89.65:33256). Jul 6 23:59:22.612915 sshd[4004]: Accepted publickey for core from 139.178.89.65 port 33256 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:22.615289 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:22.622759 systemd-logind[1445]: New session 17 of user core. Jul 6 23:59:22.629001 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:59:22.767491 sshd[4004]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:22.778685 systemd[1]: sshd@16-64.23.136.149:22-139.178.89.65:33256.service: Deactivated successfully. Jul 6 23:59:22.783818 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:59:22.786813 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:59:22.801339 systemd[1]: Started sshd@17-64.23.136.149:22-139.178.89.65:33258.service - OpenSSH per-connection server daemon (139.178.89.65:33258). Jul 6 23:59:22.804931 systemd-logind[1445]: Removed session 17. Jul 6 23:59:22.852929 sshd[4017]: Accepted publickey for core from 139.178.89.65 port 33258 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:22.855043 sshd[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:22.861278 systemd-logind[1445]: New session 18 of user core. Jul 6 23:59:22.869978 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:59:23.225428 sshd[4017]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:23.237037 systemd[1]: sshd@17-64.23.136.149:22-139.178.89.65:33258.service: Deactivated successfully. Jul 6 23:59:23.240151 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:59:23.242034 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. 
Jul 6 23:59:23.250277 systemd[1]: Started sshd@18-64.23.136.149:22-139.178.89.65:33266.service - OpenSSH per-connection server daemon (139.178.89.65:33266). Jul 6 23:59:23.254750 systemd-logind[1445]: Removed session 18. Jul 6 23:59:23.306251 sshd[4028]: Accepted publickey for core from 139.178.89.65 port 33266 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:23.309222 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:23.320110 systemd-logind[1445]: New session 19 of user core. Jul 6 23:59:23.323086 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:59:24.402008 sshd[4028]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:24.414504 systemd[1]: sshd@18-64.23.136.149:22-139.178.89.65:33266.service: Deactivated successfully. Jul 6 23:59:24.418434 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:59:24.424142 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:59:24.433873 systemd[1]: Started sshd@19-64.23.136.149:22-139.178.89.65:33274.service - OpenSSH per-connection server daemon (139.178.89.65:33274). Jul 6 23:59:24.438834 systemd-logind[1445]: Removed session 19. Jul 6 23:59:24.493398 sshd[4046]: Accepted publickey for core from 139.178.89.65 port 33274 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:24.495626 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:24.503881 systemd-logind[1445]: New session 20 of user core. Jul 6 23:59:24.513045 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:59:24.628786 kubelet[2499]: E0706 23:59:24.628175 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:24.870208 sshd[4046]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:24.881792 systemd[1]: sshd@19-64.23.136.149:22-139.178.89.65:33274.service: Deactivated successfully. Jul 6 23:59:24.888355 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:59:24.889941 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:59:24.899344 systemd[1]: Started sshd@20-64.23.136.149:22-139.178.89.65:33284.service - OpenSSH per-connection server daemon (139.178.89.65:33284). Jul 6 23:59:24.901802 systemd-logind[1445]: Removed session 20. Jul 6 23:59:24.961298 sshd[4057]: Accepted publickey for core from 139.178.89.65 port 33284 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:24.964126 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:24.971238 systemd-logind[1445]: New session 21 of user core. Jul 6 23:59:24.978003 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:59:25.140995 sshd[4057]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:25.146508 systemd[1]: sshd@20-64.23.136.149:22-139.178.89.65:33284.service: Deactivated successfully. Jul 6 23:59:25.150413 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:59:25.153456 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:59:25.159230 systemd-logind[1445]: Removed session 21. 
Jul 6 23:59:25.628117 kubelet[2499]: E0706 23:59:25.628045 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:30.168128 systemd[1]: Started sshd@21-64.23.136.149:22-139.178.89.65:49128.service - OpenSSH per-connection server daemon (139.178.89.65:49128). Jul 6 23:59:30.223671 sshd[4069]: Accepted publickey for core from 139.178.89.65 port 49128 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:30.226438 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:30.235078 systemd-logind[1445]: New session 22 of user core. Jul 6 23:59:30.241176 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:59:30.426888 sshd[4069]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:30.433583 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:59:30.434002 systemd[1]: sshd@21-64.23.136.149:22-139.178.89.65:49128.service: Deactivated successfully. Jul 6 23:59:30.437510 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:59:30.442057 systemd-logind[1445]: Removed session 22. Jul 6 23:59:35.450128 systemd[1]: Started sshd@22-64.23.136.149:22-139.178.89.65:49142.service - OpenSSH per-connection server daemon (139.178.89.65:49142). Jul 6 23:59:35.500684 sshd[4083]: Accepted publickey for core from 139.178.89.65 port 49142 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:35.503572 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:35.512015 systemd-logind[1445]: New session 23 of user core. Jul 6 23:59:35.521027 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:59:35.691978 sshd[4083]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:35.697816 systemd[1]: sshd@22-64.23.136.149:22-139.178.89.65:49142.service: Deactivated successfully. Jul 6 23:59:35.702282 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:59:35.705050 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:59:35.706489 systemd-logind[1445]: Removed session 23. Jul 6 23:59:40.628783 kubelet[2499]: E0706 23:59:40.628723 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:40.714140 systemd[1]: Started sshd@23-64.23.136.149:22-139.178.89.65:53172.service - OpenSSH per-connection server daemon (139.178.89.65:53172). Jul 6 23:59:40.783732 sshd[4098]: Accepted publickey for core from 139.178.89.65 port 53172 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:40.786405 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:40.793907 systemd-logind[1445]: New session 24 of user core. Jul 6 23:59:40.802252 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:59:40.983705 sshd[4098]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:40.991195 systemd[1]: sshd@23-64.23.136.149:22-139.178.89.65:53172.service: Deactivated successfully. Jul 6 23:59:40.994334 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:59:40.996635 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit. 
Jul 6 23:59:40.998462 systemd-logind[1445]: Removed session 24. Jul 6 23:59:41.629549 kubelet[2499]: E0706 23:59:41.628271 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:46.007427 systemd[1]: Started sshd@24-64.23.136.149:22-139.178.89.65:53188.service - OpenSSH per-connection server daemon (139.178.89.65:53188). Jul 6 23:59:46.065641 sshd[4111]: Accepted publickey for core from 139.178.89.65 port 53188 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:46.068087 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:46.075245 systemd-logind[1445]: New session 25 of user core. Jul 6 23:59:46.083024 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:59:46.226505 sshd[4111]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:46.236600 systemd[1]: sshd@24-64.23.136.149:22-139.178.89.65:53188.service: Deactivated successfully. Jul 6 23:59:46.239408 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:59:46.242404 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:59:46.253165 systemd[1]: Started sshd@25-64.23.136.149:22-139.178.89.65:53198.service - OpenSSH per-connection server daemon (139.178.89.65:53198). Jul 6 23:59:46.255815 systemd-logind[1445]: Removed session 25. Jul 6 23:59:46.304746 sshd[4124]: Accepted publickey for core from 139.178.89.65 port 53198 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:46.307245 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:46.314083 systemd-logind[1445]: New session 26 of user core. Jul 6 23:59:46.321943 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:59:47.996315 containerd[1451]: time="2025-07-06T23:59:47.995761111Z" level=info msg="StopContainer for \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\" with timeout 30 (s)" Jul 6 23:59:48.000719 containerd[1451]: time="2025-07-06T23:59:47.999785253Z" level=info msg="Stop container \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\" with signal terminated" Jul 6 23:59:48.067089 containerd[1451]: time="2025-07-06T23:59:48.066993955Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:59:48.068771 systemd[1]: cri-containerd-97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054.scope: Deactivated successfully. Jul 6 23:59:48.113008 containerd[1451]: time="2025-07-06T23:59:48.111672266Z" level=info msg="StopContainer for \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\" with timeout 2 (s)" Jul 6 23:59:48.113204 containerd[1451]: time="2025-07-06T23:59:48.113071541Z" level=info msg="Stop container \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\" with signal terminated" Jul 6 23:59:48.142529 systemd-networkd[1368]: lxc_health: Link DOWN Jul 6 23:59:48.142538 systemd-networkd[1368]: lxc_health: Lost carrier Jul 6 23:59:48.163649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054-rootfs.mount: Deactivated successfully. 
Jul 6 23:59:48.191019 systemd[1]: cri-containerd-bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf.scope: Deactivated successfully. Jul 6 23:59:48.191445 systemd[1]: cri-containerd-bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf.scope: Consumed 10.290s CPU time. Jul 6 23:59:48.196175 containerd[1451]: time="2025-07-06T23:59:48.195949907Z" level=info msg="shim disconnected" id=97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054 namespace=k8s.io Jul 6 23:59:48.196175 containerd[1451]: time="2025-07-06T23:59:48.196135727Z" level=warning msg="cleaning up after shim disconnected" id=97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054 namespace=k8s.io Jul 6 23:59:48.197273 containerd[1451]: time="2025-07-06T23:59:48.196149617Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:48.243547 containerd[1451]: time="2025-07-06T23:59:48.243304648Z" level=info msg="StopContainer for \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\" returns successfully" Jul 6 23:59:48.245803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf-rootfs.mount: Deactivated successfully. Jul 6 23:59:48.247114 containerd[1451]: time="2025-07-06T23:59:48.246947296Z" level=info msg="StopPodSandbox for \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\"" Jul 6 23:59:48.250055 containerd[1451]: time="2025-07-06T23:59:48.247289147Z" level=info msg="Container to stop \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:59:48.257113 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a-shm.mount: Deactivated successfully. Jul 6 23:59:48.262055 containerd[1451]: time="2025-07-06T23:59:48.261039929Z" level=info msg="shim disconnected" id=bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf namespace=k8s.io Jul 6 23:59:48.262055 containerd[1451]: time="2025-07-06T23:59:48.261779126Z" level=warning msg="cleaning up after shim disconnected" id=bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf namespace=k8s.io Jul 6 23:59:48.262055 containerd[1451]: time="2025-07-06T23:59:48.261803957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:48.267594 systemd[1]: cri-containerd-f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a.scope: Deactivated successfully. 
Jul 6 23:59:48.307748 containerd[1451]: time="2025-07-06T23:59:48.307528918Z" level=info msg="StopContainer for \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\" returns successfully" Jul 6 23:59:48.309031 containerd[1451]: time="2025-07-06T23:59:48.308967587Z" level=info msg="StopPodSandbox for \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\"" Jul 6 23:59:48.309031 containerd[1451]: time="2025-07-06T23:59:48.309033297Z" level=info msg="Container to stop \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:59:48.309305 containerd[1451]: time="2025-07-06T23:59:48.309048159Z" level=info msg="Container to stop \"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:59:48.309305 containerd[1451]: time="2025-07-06T23:59:48.309059260Z" level=info msg="Container to stop \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:59:48.309305 containerd[1451]: time="2025-07-06T23:59:48.309082274Z" level=info msg="Container to stop \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:59:48.309305 containerd[1451]: time="2025-07-06T23:59:48.309092592Z" level=info msg="Container to stop \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:59:48.318405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03-shm.mount: Deactivated successfully. Jul 6 23:59:48.329130 systemd[1]: cri-containerd-a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03.scope: Deactivated successfully. 
Jul 6 23:59:48.340799 containerd[1451]: time="2025-07-06T23:59:48.340705009Z" level=info msg="shim disconnected" id=f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a namespace=k8s.io Jul 6 23:59:48.341108 containerd[1451]: time="2025-07-06T23:59:48.340791177Z" level=warning msg="cleaning up after shim disconnected" id=f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a namespace=k8s.io Jul 6 23:59:48.341108 containerd[1451]: time="2025-07-06T23:59:48.340843239Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:48.380053 containerd[1451]: time="2025-07-06T23:59:48.379949140Z" level=info msg="shim disconnected" id=a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03 namespace=k8s.io Jul 6 23:59:48.380053 containerd[1451]: time="2025-07-06T23:59:48.380041431Z" level=warning msg="cleaning up after shim disconnected" id=a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03 namespace=k8s.io Jul 6 23:59:48.380053 containerd[1451]: time="2025-07-06T23:59:48.380056758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:48.390690 containerd[1451]: time="2025-07-06T23:59:48.390585368Z" level=info msg="TearDown network for sandbox \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\" successfully" Jul 6 23:59:48.391097 containerd[1451]: time="2025-07-06T23:59:48.390952126Z" level=info msg="StopPodSandbox for \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\" returns successfully" Jul 6 23:59:48.412566 containerd[1451]: time="2025-07-06T23:59:48.412495764Z" level=info msg="TearDown network for sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" successfully" Jul 6 23:59:48.412566 containerd[1451]: time="2025-07-06T23:59:48.412543277Z" level=info msg="StopPodSandbox for \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" returns successfully" Jul 6 23:59:48.528932 kubelet[2499]: I0706 23:59:48.527887 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-xtables-lock\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.528932 kubelet[2499]: I0706 23:59:48.527977 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-config-path\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.528932 kubelet[2499]: I0706 23:59:48.528023 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029-cilium-config-path\") pod \"fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029\" (UID: \"fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029\") " Jul 6 23:59:48.528932 kubelet[2499]: I0706 23:59:48.528056 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-bpf-maps\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.528932 kubelet[2499]: I0706 23:59:48.528085 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-hubble-tls\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.528932 kubelet[2499]: I0706 23:59:48.528109 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-hostproc\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.529710 kubelet[2499]: I0706 23:59:48.528132 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-run\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.529710 kubelet[2499]: I0706 23:59:48.528156 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-cgroup\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.529710 kubelet[2499]: I0706 23:59:48.528185 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-clustermesh-secrets\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.529710 kubelet[2499]: I0706 23:59:48.528209 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-lib-modules\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.529710 kubelet[2499]: I0706 23:59:48.528233 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cni-path\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.529710 kubelet[2499]: I0706 23:59:48.528270 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029-kube-api-access-m56dd\") pod \"fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029\" (UID: \"fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029\") " Jul 6 23:59:48.530055 kubelet[2499]: I0706 23:59:48.528298 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-host-proc-sys-kernel\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.532472 kubelet[2499]: I0706 23:59:48.530935 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:59:48.533339 kubelet[2499]: I0706 23:59:48.531089 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:59:48.533451 kubelet[2499]: I0706 23:59:48.533371 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:59:48.537604 kubelet[2499]: I0706 23:59:48.536939 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:59:48.537964 kubelet[2499]: I0706 23:59:48.537927 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:59:48.538097 kubelet[2499]: I0706 23:59:48.538081 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cni-path" (OuterVolumeSpecName: "cni-path") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:59:48.540873 kubelet[2499]: I0706 23:59:48.540102 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:59:48.540873 kubelet[2499]: I0706 23:59:48.540157 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-etc-cni-netd\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.540873 kubelet[2499]: I0706 23:59:48.540215 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmm95\" (UniqueName: \"kubernetes.io/projected/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-kube-api-access-pmm95\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.540873 kubelet[2499]: I0706 23:59:48.540251 2499 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-host-proc-sys-net\") pod \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\" (UID: \"1d65d0f3-c375-4185-8a1a-8abf652aaeb2\") " Jul 6 23:59:48.540873 kubelet[2499]: I0706 23:59:48.540321 2499 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-xtables-lock\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.540873 kubelet[2499]: I0706 23:59:48.540338 2499 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-config-path\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.541441 kubelet[2499]: I0706 23:59:48.540355 2499 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-run\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.541441 kubelet[2499]: I0706 23:59:48.540373 2499 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cilium-cgroup\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.541441 kubelet[2499]: I0706 23:59:48.540387 2499 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-lib-modules\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.541441 kubelet[2499]: I0706 23:59:48.540402 2499 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-cni-path\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.541441 kubelet[2499]: I0706 23:59:48.540416 2499 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-host-proc-sys-kernel\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.541441 kubelet[2499]: I0706 23:59:48.540448 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:59:48.541441 kubelet[2499]: I0706 23:59:48.540463 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:59:48.541806 kubelet[2499]: I0706 23:59:48.541121 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:59:48.541806 kubelet[2499]: I0706 23:59:48.541432 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-hostproc" (OuterVolumeSpecName: "hostproc") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:59:48.541915 kubelet[2499]: I0706 23:59:48.541827 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:59:48.547302 kubelet[2499]: I0706 23:59:48.547146 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029" (UID: "fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:59:48.547493 kubelet[2499]: I0706 23:59:48.547436 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029-kube-api-access-m56dd" (OuterVolumeSpecName: "kube-api-access-m56dd") pod "fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029" (UID: "fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029"). InnerVolumeSpecName "kube-api-access-m56dd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:59:48.552390 kubelet[2499]: I0706 23:59:48.552191 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-kube-api-access-pmm95" (OuterVolumeSpecName: "kube-api-access-pmm95") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "kube-api-access-pmm95". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:59:48.553244 kubelet[2499]: I0706 23:59:48.553201 2499 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1d65d0f3-c375-4185-8a1a-8abf652aaeb2" (UID: "1d65d0f3-c375-4185-8a1a-8abf652aaeb2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:59:48.641321 kubelet[2499]: I0706 23:59:48.641249 2499 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029-kube-api-access-m56dd\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.641321 kubelet[2499]: I0706 23:59:48.641311 2499 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-etc-cni-netd\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.641321 kubelet[2499]: I0706 23:59:48.641327 2499 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pmm95\" (UniqueName: \"kubernetes.io/projected/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-kube-api-access-pmm95\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.641592 kubelet[2499]: I0706 23:59:48.641342 2499 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-host-proc-sys-net\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.641592 kubelet[2499]: I0706 23:59:48.641357 2499 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029-cilium-config-path\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.641592 kubelet[2499]: I0706 23:59:48.641373 2499 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-bpf-maps\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.641592 kubelet[2499]: I0706 23:59:48.641386 2499 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-hubble-tls\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.641592 kubelet[2499]: I0706 23:59:48.641402 2499 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-hostproc\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.641592 kubelet[2499]: I0706 23:59:48.641414 2499 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d65d0f3-c375-4185-8a1a-8abf652aaeb2-clustermesh-secrets\") on node \"ci-4081.3.4-b-aec8669192\" DevicePath \"\"" Jul 6 23:59:48.989797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a-rootfs.mount: Deactivated successfully. Jul 6 23:59:48.989938 systemd[1]: var-lib-kubelet-pods-fc5ce6ed\x2dc879\x2d4eb7\x2db0d2\x2d2e7c03e9f029-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm56dd.mount: Deactivated successfully. Jul 6 23:59:48.990028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03-rootfs.mount: Deactivated successfully. Jul 6 23:59:48.990104 systemd[1]: var-lib-kubelet-pods-1d65d0f3\x2dc375\x2d4185\x2d8a1a\x2d8abf652aaeb2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 6 23:59:48.990223 systemd[1]: var-lib-kubelet-pods-1d65d0f3\x2dc375\x2d4185\x2d8a1a\x2d8abf652aaeb2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpmm95.mount: Deactivated successfully. Jul 6 23:59:48.990309 systemd[1]: var-lib-kubelet-pods-1d65d0f3\x2dc375\x2d4185\x2d8a1a\x2d8abf652aaeb2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:59:49.137650 systemd[1]: Removed slice kubepods-besteffort-podfc5ce6ed_c879_4eb7_b0d2_2e7c03e9f029.slice - libcontainer container kubepods-besteffort-podfc5ce6ed_c879_4eb7_b0d2_2e7c03e9f029.slice. Jul 6 23:59:49.138803 kubelet[2499]: I0706 23:59:49.138265 2499 scope.go:117] "RemoveContainer" containerID="97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054" Jul 6 23:59:49.142960 containerd[1451]: time="2025-07-06T23:59:49.142249697Z" level=info msg="RemoveContainer for \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\"" Jul 6 23:59:49.152981 containerd[1451]: time="2025-07-06T23:59:49.152219670Z" level=info msg="RemoveContainer for \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\" returns successfully" Jul 6 23:59:49.164981 systemd[1]: Removed slice kubepods-burstable-pod1d65d0f3_c375_4185_8a1a_8abf652aaeb2.slice - libcontainer container kubepods-burstable-pod1d65d0f3_c375_4185_8a1a_8abf652aaeb2.slice. Jul 6 23:59:49.165274 systemd[1]: kubepods-burstable-pod1d65d0f3_c375_4185_8a1a_8abf652aaeb2.slice: Consumed 10.409s CPU time. Jul 6 23:59:49.171678 kubelet[2499]: I0706 23:59:49.171598 2499 scope.go:117] "RemoveContainer" containerID="97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054" Jul 6 23:59:49.188260 containerd[1451]: time="2025-07-06T23:59:49.176765802Z" level=error msg="ContainerStatus for \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\": not found" Jul 6 23:59:49.190278 kubelet[2499]: E0706 23:59:49.190073 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\": not found" containerID="97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054" Jul 6 23:59:49.205418 kubelet[2499]: I0706 23:59:49.191458 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054"} err="failed to get container status \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\": rpc error: code = NotFound desc = an error occurred when try to find container \"97c792834619663024bfcc7ab00f0c8a479999ddc3c1803afacb9762689bc054\": not found" Jul 6 23:59:49.205418 kubelet[2499]: I0706 23:59:49.204375 2499 scope.go:117] "RemoveContainer" containerID="bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf" Jul 6 23:59:49.210447 containerd[1451]: time="2025-07-06T23:59:49.209964260Z" level=info msg="RemoveContainer for \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\"" Jul 6 23:59:49.216525 containerd[1451]: time="2025-07-06T23:59:49.215214636Z" level=info msg="RemoveContainer for \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\" returns successfully" Jul 6 23:59:49.217061 kubelet[2499]: I0706 23:59:49.217030 2499 scope.go:117] "RemoveContainer" 
containerID="fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402" Jul 6 23:59:49.219963 containerd[1451]: time="2025-07-06T23:59:49.219568821Z" level=info msg="RemoveContainer for \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\"" Jul 6 23:59:49.222476 containerd[1451]: time="2025-07-06T23:59:49.222355647Z" level=info msg="RemoveContainer for \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\" returns successfully" Jul 6 23:59:49.223050 kubelet[2499]: I0706 23:59:49.222990 2499 scope.go:117] "RemoveContainer" containerID="aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826" Jul 6 23:59:49.225579 containerd[1451]: time="2025-07-06T23:59:49.225182139Z" level=info msg="RemoveContainer for \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\"" Jul 6 23:59:49.228930 containerd[1451]: time="2025-07-06T23:59:49.228873777Z" level=info msg="RemoveContainer for \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\" returns successfully" Jul 6 23:59:49.229639 kubelet[2499]: I0706 23:59:49.229478 2499 scope.go:117] "RemoveContainer" containerID="5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901" Jul 6 23:59:49.231492 containerd[1451]: time="2025-07-06T23:59:49.231444723Z" level=info msg="RemoveContainer for \"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\"" Jul 6 23:59:49.237380 containerd[1451]: time="2025-07-06T23:59:49.237306040Z" level=info msg="RemoveContainer for \"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\" returns successfully" Jul 6 23:59:49.238789 kubelet[2499]: I0706 23:59:49.238744 2499 scope.go:117] "RemoveContainer" containerID="8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd" Jul 6 23:59:49.241187 containerd[1451]: time="2025-07-06T23:59:49.241077633Z" level=info msg="RemoveContainer for \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\"" Jul 6 23:59:49.245415 containerd[1451]: time="2025-07-06T23:59:49.245161608Z" level=info msg="RemoveContainer for \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\" returns successfully" Jul 6 23:59:49.245729 kubelet[2499]: I0706 23:59:49.245684 2499 scope.go:117] "RemoveContainer" containerID="bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf" Jul 6 23:59:49.246236 containerd[1451]: time="2025-07-06T23:59:49.246166510Z" level=error msg="ContainerStatus for \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\": not found" Jul 6 23:59:49.246725 kubelet[2499]: E0706 23:59:49.246422 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\": not found" containerID="bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf" Jul 6 23:59:49.246725 kubelet[2499]: I0706 23:59:49.246467 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf"} err="failed to get container status \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdbea301f2a1fa9722804e7bb36eda84472a209850b8efc44cc1287cc20cdbcf\": not 
found" Jul 6 23:59:49.246725 kubelet[2499]: I0706 23:59:49.246503 2499 scope.go:117] "RemoveContainer" containerID="fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402" Jul 6 23:59:49.247265 containerd[1451]: time="2025-07-06T23:59:49.247117755Z" level=error msg="ContainerStatus for \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\": not found" Jul 6 23:59:49.247457 kubelet[2499]: E0706 23:59:49.247321 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\": not found" containerID="fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402" Jul 6 23:59:49.247457 kubelet[2499]: I0706 23:59:49.247359 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402"} err="failed to get container status \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\": rpc error: code = NotFound desc = an error occurred when try to find container \"fabfc8439894c51a1bd4912585d666b60cd5b00a6fcb763e1472a4f64ae77402\": not found" Jul 6 23:59:49.247457 kubelet[2499]: I0706 23:59:49.247387 2499 scope.go:117] "RemoveContainer" containerID="aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826" Jul 6 23:59:49.248036 containerd[1451]: time="2025-07-06T23:59:49.247711808Z" level=error msg="ContainerStatus for \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\": not found" Jul 6 23:59:49.248714 kubelet[2499]: E0706 23:59:49.248222 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\": not found" containerID="aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826" Jul 6 23:59:49.248714 kubelet[2499]: I0706 23:59:49.248281 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826"} err="failed to get container status \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\": rpc error: code = NotFound desc = an error occurred when try to find container \"aeb5da09a35957237df54e8d719b577374516d0bce697205c3f0b10541015826\": not found" Jul 6 23:59:49.248714 kubelet[2499]: I0706 23:59:49.248308 2499 scope.go:117] "RemoveContainer" containerID="5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901" Jul 6 23:59:49.248909 containerd[1451]: time="2025-07-06T23:59:49.248541940Z" level=error msg="ContainerStatus for \"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\": not found" Jul 6 23:59:49.249629 kubelet[2499]: E0706 23:59:49.249078 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\": not found" containerID="5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901" Jul 6 23:59:49.249629 kubelet[2499]: I0706 23:59:49.249114 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901"} err="failed to get container status \"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\": rpc error: code = NotFound desc = an error occurred when try to find container \"5187745828c86bd44c2b3868969344ee8df6072fe01cd99a48ef9c2413425901\": not found" Jul 6 23:59:49.249629 kubelet[2499]: I0706 23:59:49.249169 2499 scope.go:117] "RemoveContainer" containerID="8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd" Jul 6 23:59:49.249629 kubelet[2499]: E0706 23:59:49.249549 2499 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\": not found" containerID="8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd" Jul 6 23:59:49.249629 kubelet[2499]: I0706 23:59:49.249587 2499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd"} err="failed to get container status \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\": not found" Jul 6 23:59:49.249939 containerd[1451]: time="2025-07-06T23:59:49.249412390Z" level=error msg="ContainerStatus for \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8fe1d3d2edcf7d7acb380a44ab3d716db4b9621808075e242d18c20d37f711dd\": not found" Jul 6 23:59:49.631144 kubelet[2499]: I0706 23:59:49.631009 2499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d65d0f3-c375-4185-8a1a-8abf652aaeb2" path="/var/lib/kubelet/pods/1d65d0f3-c375-4185-8a1a-8abf652aaeb2/volumes" Jul 6 23:59:49.632491 kubelet[2499]: I0706 23:59:49.631873 2499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029" path="/var/lib/kubelet/pods/fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029/volumes" Jul 6 23:59:49.819369 kubelet[2499]: E0706 23:59:49.819288 2499 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:59:49.847512 sshd[4124]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:49.856935 systemd[1]: sshd@25-64.23.136.149:22-139.178.89.65:53198.service: Deactivated successfully. Jul 6 23:59:49.860357 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:59:49.863949 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:59:49.876221 systemd[1]: Started sshd@26-64.23.136.149:22-139.178.89.65:57942.service - OpenSSH per-connection server daemon (139.178.89.65:57942). Jul 6 23:59:49.878202 systemd-logind[1445]: Removed session 26. 
Jul 6 23:59:49.931543 sshd[4288]: Accepted publickey for core from 139.178.89.65 port 57942 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:49.933642 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:49.941339 systemd-logind[1445]: New session 27 of user core. Jul 6 23:59:49.948925 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 6 23:59:50.696363 sshd[4288]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:50.709081 systemd[1]: sshd@26-64.23.136.149:22-139.178.89.65:57942.service: Deactivated successfully. Jul 6 23:59:50.717453 systemd[1]: session-27.scope: Deactivated successfully. Jul 6 23:59:50.721742 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit. Jul 6 23:59:50.735844 kubelet[2499]: I0706 23:59:50.735788 2499 memory_manager.go:355] "RemoveStaleState removing state" podUID="1d65d0f3-c375-4185-8a1a-8abf652aaeb2" containerName="cilium-agent" Jul 6 23:59:50.735844 kubelet[2499]: I0706 23:59:50.735829 2499 memory_manager.go:355] "RemoveStaleState removing state" podUID="fc5ce6ed-c879-4eb7-b0d2-2e7c03e9f029" containerName="cilium-operator" Jul 6 23:59:50.737226 systemd[1]: Started sshd@27-64.23.136.149:22-139.178.89.65:57954.service - OpenSSH per-connection server daemon (139.178.89.65:57954). Jul 6 23:59:50.741107 systemd-logind[1445]: Removed session 27. Jul 6 23:59:50.782604 systemd[1]: Created slice kubepods-burstable-podf9c62e66_2c4f_4057_888f_379124bd9efd.slice - libcontainer container kubepods-burstable-podf9c62e66_2c4f_4057_888f_379124bd9efd.slice. Jul 6 23:59:50.832697 sshd[4300]: Accepted publickey for core from 139.178.89.65 port 57954 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:50.837635 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:50.850431 systemd-logind[1445]: New session 28 of user core. Jul 6 23:59:50.856977 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 6 23:59:50.865807 kubelet[2499]: I0706 23:59:50.865618 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9c62e66-2c4f-4057-888f-379124bd9efd-cilium-run\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.865954 kubelet[2499]: I0706 23:59:50.865818 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9c62e66-2c4f-4057-888f-379124bd9efd-hubble-tls\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.865954 kubelet[2499]: I0706 23:59:50.865934 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9c62e66-2c4f-4057-888f-379124bd9efd-cilium-cgroup\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.866030 kubelet[2499]: I0706 23:59:50.865972 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb98l\" (UniqueName: \"kubernetes.io/projected/f9c62e66-2c4f-4057-888f-379124bd9efd-kube-api-access-lb98l\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.867820 kubelet[2499]: I0706 23:59:50.867729 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9c62e66-2c4f-4057-888f-379124bd9efd-cilium-config-path\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.867944 kubelet[2499]: I0706 23:59:50.867836 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9c62e66-2c4f-4057-888f-379124bd9efd-bpf-maps\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.867944 kubelet[2499]: I0706 23:59:50.867915 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9c62e66-2c4f-4057-888f-379124bd9efd-host-proc-sys-net\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.868023 kubelet[2499]: I0706 23:59:50.867949 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f9c62e66-2c4f-4057-888f-379124bd9efd-cilium-ipsec-secrets\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.868023 kubelet[2499]: I0706 23:59:50.868005 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9c62e66-2c4f-4057-888f-379124bd9efd-clustermesh-secrets\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.868130 kubelet[2499]: I0706 23:59:50.868064 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9c62e66-2c4f-4057-888f-379124bd9efd-xtables-lock\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.868130 kubelet[2499]: I0706 23:59:50.868092 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9c62e66-2c4f-4057-888f-379124bd9efd-host-proc-sys-kernel\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.868206 kubelet[2499]: I0706 23:59:50.868146 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9c62e66-2c4f-4057-888f-379124bd9efd-etc-cni-netd\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.868206 kubelet[2499]: I0706 23:59:50.868170 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9c62e66-2c4f-4057-888f-379124bd9efd-hostproc\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.868285 kubelet[2499]: I0706 23:59:50.868239 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9c62e66-2c4f-4057-888f-379124bd9efd-cni-path\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.868331 kubelet[2499]: I0706 23:59:50.868302 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9c62e66-2c4f-4057-888f-379124bd9efd-lib-modules\") pod \"cilium-bd6kf\" (UID: \"f9c62e66-2c4f-4057-888f-379124bd9efd\") " pod="kube-system/cilium-bd6kf" Jul 6 23:59:50.926385 sshd[4300]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:50.945943 systemd[1]: sshd@27-64.23.136.149:22-139.178.89.65:57954.service: Deactivated successfully. Jul 6 23:59:50.949386 systemd[1]: session-28.scope: Deactivated successfully. Jul 6 23:59:50.952460 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit. Jul 6 23:59:50.961276 systemd[1]: Started sshd@28-64.23.136.149:22-139.178.89.65:57962.service - OpenSSH per-connection server daemon (139.178.89.65:57962). Jul 6 23:59:50.967110 systemd-logind[1445]: Removed session 28. Jul 6 23:59:51.060600 sshd[4308]: Accepted publickey for core from 139.178.89.65 port 57962 ssh2: RSA SHA256:D4plKyt2QZB6tnAzg8tnqANd96Eqfj0a1VMxd0zBq6E Jul 6 23:59:51.066349 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:51.075808 systemd-logind[1445]: New session 29 of user core. Jul 6 23:59:51.084021 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jul 6 23:59:51.091277 kubelet[2499]: E0706 23:59:51.091214 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:51.094732 containerd[1451]: time="2025-07-06T23:59:51.092468466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bd6kf,Uid:f9c62e66-2c4f-4057-888f-379124bd9efd,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:51.133868 containerd[1451]: time="2025-07-06T23:59:51.133220944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:51.133868 containerd[1451]: time="2025-07-06T23:59:51.133319350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:51.133868 containerd[1451]: time="2025-07-06T23:59:51.133385273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:51.133868 containerd[1451]: time="2025-07-06T23:59:51.133545226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:51.185297 systemd[1]: Started cri-containerd-d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537.scope - libcontainer container d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537. Jul 6 23:59:51.245068 containerd[1451]: time="2025-07-06T23:59:51.244951173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bd6kf,Uid:f9c62e66-2c4f-4057-888f-379124bd9efd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\"" Jul 6 23:59:51.247694 kubelet[2499]: E0706 23:59:51.247126 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:51.253414 containerd[1451]: time="2025-07-06T23:59:51.253142108Z" level=info msg="CreateContainer within sandbox \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:59:51.270995 containerd[1451]: time="2025-07-06T23:59:51.269619438Z" level=info msg="CreateContainer within sandbox \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0e5679d30e39f93de6024e2513aa54261ee787a2cc1d5655c1fa23ce68b5803c\"" Jul 6 23:59:51.271530 containerd[1451]: time="2025-07-06T23:59:51.271494157Z" level=info msg="StartContainer for \"0e5679d30e39f93de6024e2513aa54261ee787a2cc1d5655c1fa23ce68b5803c\"" Jul 6 23:59:51.317982 systemd[1]: Started cri-containerd-0e5679d30e39f93de6024e2513aa54261ee787a2cc1d5655c1fa23ce68b5803c.scope - libcontainer container 0e5679d30e39f93de6024e2513aa54261ee787a2cc1d5655c1fa23ce68b5803c. Jul 6 23:59:51.361331 containerd[1451]: time="2025-07-06T23:59:51.361274505Z" level=info msg="StartContainer for \"0e5679d30e39f93de6024e2513aa54261ee787a2cc1d5655c1fa23ce68b5803c\" returns successfully" Jul 6 23:59:51.392273 systemd[1]: cri-containerd-0e5679d30e39f93de6024e2513aa54261ee787a2cc1d5655c1fa23ce68b5803c.scope: Deactivated successfully. 
Jul 6 23:59:51.443363 containerd[1451]: time="2025-07-06T23:59:51.443041813Z" level=info msg="shim disconnected" id=0e5679d30e39f93de6024e2513aa54261ee787a2cc1d5655c1fa23ce68b5803c namespace=k8s.io Jul 6 23:59:51.443363 containerd[1451]: time="2025-07-06T23:59:51.443120699Z" level=warning msg="cleaning up after shim disconnected" id=0e5679d30e39f93de6024e2513aa54261ee787a2cc1d5655c1fa23ce68b5803c namespace=k8s.io Jul 6 23:59:51.443363 containerd[1451]: time="2025-07-06T23:59:51.443134478Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:51.690884 kubelet[2499]: I0706 23:59:51.689281 2499 setters.go:602] "Node became not ready" node="ci-4081.3.4-b-aec8669192" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:59:51Z","lastTransitionTime":"2025-07-06T23:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 6 23:59:52.163706 kubelet[2499]: E0706 23:59:52.163614 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:52.170709 containerd[1451]: time="2025-07-06T23:59:52.170521851Z" level=info msg="CreateContainer within sandbox \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:59:52.191174 containerd[1451]: time="2025-07-06T23:59:52.190698309Z" level=info msg="CreateContainer within sandbox \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d0afac86a86e7c55ddc677cb8e5959d6ae7c1bd8f95e50eb9ec4fab33f32cb96\"" Jul 6 23:59:52.196387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580308053.mount: Deactivated successfully. Jul 6 23:59:52.200942 containerd[1451]: time="2025-07-06T23:59:52.200495843Z" level=info msg="StartContainer for \"d0afac86a86e7c55ddc677cb8e5959d6ae7c1bd8f95e50eb9ec4fab33f32cb96\"" Jul 6 23:59:52.257044 systemd[1]: Started cri-containerd-d0afac86a86e7c55ddc677cb8e5959d6ae7c1bd8f95e50eb9ec4fab33f32cb96.scope - libcontainer container d0afac86a86e7c55ddc677cb8e5959d6ae7c1bd8f95e50eb9ec4fab33f32cb96. Jul 6 23:59:52.304224 containerd[1451]: time="2025-07-06T23:59:52.304139751Z" level=info msg="StartContainer for \"d0afac86a86e7c55ddc677cb8e5959d6ae7c1bd8f95e50eb9ec4fab33f32cb96\" returns successfully" Jul 6 23:59:52.318504 systemd[1]: cri-containerd-d0afac86a86e7c55ddc677cb8e5959d6ae7c1bd8f95e50eb9ec4fab33f32cb96.scope: Deactivated successfully. 
Jul 6 23:59:52.354994 containerd[1451]: time="2025-07-06T23:59:52.354837842Z" level=info msg="shim disconnected" id=d0afac86a86e7c55ddc677cb8e5959d6ae7c1bd8f95e50eb9ec4fab33f32cb96 namespace=k8s.io Jul 6 23:59:52.354994 containerd[1451]: time="2025-07-06T23:59:52.354911949Z" level=warning msg="cleaning up after shim disconnected" id=d0afac86a86e7c55ddc677cb8e5959d6ae7c1bd8f95e50eb9ec4fab33f32cb96 namespace=k8s.io Jul 6 23:59:52.354994 containerd[1451]: time="2025-07-06T23:59:52.354934403Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:52.628294 kubelet[2499]: E0706 23:59:52.628096 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-vtwxv" podUID="3da282bd-b582-4bb9-aa11-57ba9cbf3494" Jul 6 23:59:52.982298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0afac86a86e7c55ddc677cb8e5959d6ae7c1bd8f95e50eb9ec4fab33f32cb96-rootfs.mount: Deactivated successfully. Jul 6 23:59:53.170287 kubelet[2499]: E0706 23:59:53.170237 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:53.174599 containerd[1451]: time="2025-07-06T23:59:53.174312867Z" level=info msg="CreateContainer within sandbox \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:59:53.198070 containerd[1451]: time="2025-07-06T23:59:53.198006253Z" level=info msg="CreateContainer within sandbox \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6c34daa43bdd7c804b995149013bb3f09199d7240c1ac1288956a205f74f586b\"" Jul 6 23:59:53.199803 containerd[1451]: time="2025-07-06T23:59:53.199686086Z" level=info msg="StartContainer for \"6c34daa43bdd7c804b995149013bb3f09199d7240c1ac1288956a205f74f586b\"" Jul 6 23:59:53.264138 systemd[1]: Started cri-containerd-6c34daa43bdd7c804b995149013bb3f09199d7240c1ac1288956a205f74f586b.scope - libcontainer container 6c34daa43bdd7c804b995149013bb3f09199d7240c1ac1288956a205f74f586b. Jul 6 23:59:53.311937 containerd[1451]: time="2025-07-06T23:59:53.311576320Z" level=info msg="StartContainer for \"6c34daa43bdd7c804b995149013bb3f09199d7240c1ac1288956a205f74f586b\" returns successfully" Jul 6 23:59:53.321556 systemd[1]: cri-containerd-6c34daa43bdd7c804b995149013bb3f09199d7240c1ac1288956a205f74f586b.scope: Deactivated successfully. Jul 6 23:59:53.357324 containerd[1451]: time="2025-07-06T23:59:53.356969534Z" level=info msg="shim disconnected" id=6c34daa43bdd7c804b995149013bb3f09199d7240c1ac1288956a205f74f586b namespace=k8s.io Jul 6 23:59:53.357744 containerd[1451]: time="2025-07-06T23:59:53.357290820Z" level=warning msg="cleaning up after shim disconnected" id=6c34daa43bdd7c804b995149013bb3f09199d7240c1ac1288956a205f74f586b namespace=k8s.io Jul 6 23:59:53.357744 containerd[1451]: time="2025-07-06T23:59:53.357419146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:53.983640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c34daa43bdd7c804b995149013bb3f09199d7240c1ac1288956a205f74f586b-rootfs.mount: Deactivated successfully. 
Jul 6 23:59:54.175870 kubelet[2499]: E0706 23:59:54.175080 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:54.181311 containerd[1451]: time="2025-07-06T23:59:54.181164630Z" level=info msg="CreateContainer within sandbox \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:59:54.204803 containerd[1451]: time="2025-07-06T23:59:54.204732738Z" level=info msg="CreateContainer within sandbox \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b337fbc59673daea0325994bb64c890f5f3bf1539c87e34453831d2ac08c9319\"" Jul 6 23:59:54.207730 containerd[1451]: time="2025-07-06T23:59:54.205905851Z" level=info msg="StartContainer for \"b337fbc59673daea0325994bb64c890f5f3bf1539c87e34453831d2ac08c9319\"" Jul 6 23:59:54.268997 systemd[1]: Started cri-containerd-b337fbc59673daea0325994bb64c890f5f3bf1539c87e34453831d2ac08c9319.scope - libcontainer container b337fbc59673daea0325994bb64c890f5f3bf1539c87e34453831d2ac08c9319. Jul 6 23:59:54.321059 systemd[1]: cri-containerd-b337fbc59673daea0325994bb64c890f5f3bf1539c87e34453831d2ac08c9319.scope: Deactivated successfully. Jul 6 23:59:54.325325 containerd[1451]: time="2025-07-06T23:59:54.324466503Z" level=info msg="StartContainer for \"b337fbc59673daea0325994bb64c890f5f3bf1539c87e34453831d2ac08c9319\" returns successfully" Jul 6 23:59:54.365615 containerd[1451]: time="2025-07-06T23:59:54.365510320Z" level=info msg="shim disconnected" id=b337fbc59673daea0325994bb64c890f5f3bf1539c87e34453831d2ac08c9319 namespace=k8s.io Jul 6 23:59:54.365615 containerd[1451]: time="2025-07-06T23:59:54.365640048Z" level=warning msg="cleaning up after shim disconnected" id=b337fbc59673daea0325994bb64c890f5f3bf1539c87e34453831d2ac08c9319 namespace=k8s.io Jul 6 23:59:54.366311 containerd[1451]: time="2025-07-06T23:59:54.366041242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:54.627995 kubelet[2499]: E0706 23:59:54.627382 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-vtwxv" podUID="3da282bd-b582-4bb9-aa11-57ba9cbf3494" Jul 6 23:59:54.821904 kubelet[2499]: E0706 23:59:54.821763 2499 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:59:54.982229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b337fbc59673daea0325994bb64c890f5f3bf1539c87e34453831d2ac08c9319-rootfs.mount: Deactivated successfully. 
Jul 6 23:59:55.182746 kubelet[2499]: E0706 23:59:55.181300 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:55.187297 containerd[1451]: time="2025-07-06T23:59:55.187212422Z" level=info msg="CreateContainer within sandbox \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:59:55.214085 containerd[1451]: time="2025-07-06T23:59:55.213256346Z" level=info msg="CreateContainer within sandbox \"d0d8961c2ec69cc1b629e9765bccc1c076d8cab906f9d491e3d4b714a3961537\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ce0a900ebbdc4052f8ef3e73f357fac15471349d58025aa40b778f39d2d148a0\"" Jul 6 23:59:55.214792 containerd[1451]: time="2025-07-06T23:59:55.214456795Z" level=info msg="StartContainer for \"ce0a900ebbdc4052f8ef3e73f357fac15471349d58025aa40b778f39d2d148a0\"" Jul 6 23:59:55.271970 systemd[1]: Started cri-containerd-ce0a900ebbdc4052f8ef3e73f357fac15471349d58025aa40b778f39d2d148a0.scope - libcontainer container ce0a900ebbdc4052f8ef3e73f357fac15471349d58025aa40b778f39d2d148a0. Jul 6 23:59:55.314781 containerd[1451]: time="2025-07-06T23:59:55.314693499Z" level=info msg="StartContainer for \"ce0a900ebbdc4052f8ef3e73f357fac15471349d58025aa40b778f39d2d148a0\" returns successfully" Jul 6 23:59:55.915964 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 6 23:59:56.199795 kubelet[2499]: E0706 23:59:56.199351 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:56.629098 kubelet[2499]: E0706 23:59:56.627747 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-vtwxv" podUID="3da282bd-b582-4bb9-aa11-57ba9cbf3494" Jul 6 23:59:57.200261 kubelet[2499]: E0706 23:59:57.200161 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 6 23:59:58.629727 kubelet[2499]: E0706 23:59:58.627644 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-vtwxv" podUID="3da282bd-b582-4bb9-aa11-57ba9cbf3494" Jul 6 23:59:59.675930 containerd[1451]: time="2025-07-06T23:59:59.675128458Z" level=info msg="StopPodSandbox for \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\"" Jul 6 23:59:59.675930 containerd[1451]: time="2025-07-06T23:59:59.675235035Z" level=info msg="TearDown network for sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" successfully" Jul 6 23:59:59.675930 containerd[1451]: time="2025-07-06T23:59:59.675246356Z" level=info msg="StopPodSandbox for \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" returns successfully" Jul 6 23:59:59.675930 containerd[1451]: time="2025-07-06T23:59:59.675707306Z" level=info msg="RemovePodSandbox for 
\"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\"" Jul 6 23:59:59.675930 containerd[1451]: time="2025-07-06T23:59:59.675747582Z" level=info msg="Forcibly stopping sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\"" Jul 6 23:59:59.675930 containerd[1451]: time="2025-07-06T23:59:59.675816823Z" level=info msg="TearDown network for sandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" successfully" Jul 6 23:59:59.679324 containerd[1451]: time="2025-07-06T23:59:59.679254335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:59:59.679815 containerd[1451]: time="2025-07-06T23:59:59.679361070Z" level=info msg="RemovePodSandbox \"a58986858f45b01d02216448f3594209d6acf4cc03c0167c5ef030f2874baa03\" returns successfully" Jul 6 23:59:59.680328 containerd[1451]: time="2025-07-06T23:59:59.680068818Z" level=info msg="StopPodSandbox for \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\"" Jul 6 23:59:59.680328 containerd[1451]: time="2025-07-06T23:59:59.680185451Z" level=info msg="TearDown network for sandbox \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\" successfully" Jul 6 23:59:59.680328 containerd[1451]: time="2025-07-06T23:59:59.680203995Z" level=info msg="StopPodSandbox for \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\" returns successfully" Jul 6 23:59:59.681758 containerd[1451]: time="2025-07-06T23:59:59.681022560Z" level=info msg="RemovePodSandbox for \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\"" Jul 6 23:59:59.681758 containerd[1451]: time="2025-07-06T23:59:59.681064070Z" level=info msg="Forcibly stopping sandbox \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\"" Jul 6 23:59:59.681758 containerd[1451]: time="2025-07-06T23:59:59.681148813Z" level=info msg="TearDown network for sandbox \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\" successfully" Jul 6 23:59:59.684775 containerd[1451]: time="2025-07-06T23:59:59.684693865Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:59:59.685129 containerd[1451]: time="2025-07-06T23:59:59.684964548Z" level=info msg="RemovePodSandbox \"f20fb5240659507af08fd87a4bd1bc878712b67c6066f7465e765cd3d877412a\" returns successfully" Jul 6 23:59:59.956278 systemd-networkd[1368]: lxc_health: Link UP Jul 6 23:59:59.969145 systemd-networkd[1368]: lxc_health: Gained carrier Jul 7 00:00:00.038356 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jul 7 00:00:00.054219 systemd[1]: Starting mdadm.service - Initiates a check run of an MD array's redundancy information.... Jul 7 00:00:00.118859 systemd[1]: logrotate.service: Deactivated successfully. Jul 7 00:00:00.163462 systemd[1]: mdadm.service: Deactivated successfully. Jul 7 00:00:00.166335 systemd[1]: Finished mdadm.service - Initiates a check run of an MD array's redundancy information.. 
Jul 7 00:00:00.630679 kubelet[2499]: E0707 00:00:00.628843 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:00:01.052042 systemd-networkd[1368]: lxc_health: Gained IPv6LL Jul 7 00:00:01.099371 kubelet[2499]: E0707 00:00:01.096843 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:00:01.207173 kubelet[2499]: I0707 00:00:01.207069 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bd6kf" podStartSLOduration=11.207037432 podStartE2EDuration="11.207037432s" podCreationTimestamp="2025-07-06 23:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:56.257400398 +0000 UTC m=+116.825736564" watchObservedRunningTime="2025-07-07 00:00:01.207037432 +0000 UTC m=+121.775373599" Jul 7 00:00:01.257063 kubelet[2499]: E0707 00:00:01.253631 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:00:02.251386 kubelet[2499]: E0707 00:00:02.250638 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jul 7 00:00:05.847967 sshd[4308]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:05.895983 systemd[1]: sshd@28-64.23.136.149:22-139.178.89.65:57962.service: Deactivated successfully. Jul 7 00:00:05.907551 systemd[1]: session-29.scope: Deactivated successfully. Jul 7 00:00:05.914098 systemd-logind[1445]: Session 29 logged out. Waiting for processes to exit. Jul 7 00:00:05.925490 systemd-logind[1445]: Removed session 29.