Aug 13 07:08:51.243585 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:08:51.243659 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:08:51.243682 kernel: BIOS-provided physical RAM map: Aug 13 07:08:51.243696 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 07:08:51.243709 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 07:08:51.243722 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 07:08:51.243737 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 13 07:08:51.243750 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 13 07:08:51.243760 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 07:08:51.243774 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 07:08:51.243788 kernel: NX (Execute Disable) protection: active Aug 13 07:08:51.243801 kernel: APIC: Static calls initialized Aug 13 07:08:51.243820 kernel: SMBIOS 2.8 present. 
Aug 13 07:08:51.243834 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 13 07:08:51.243850 kernel: Hypervisor detected: KVM Aug 13 07:08:51.243869 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 07:08:51.243889 kernel: kvm-clock: using sched offset of 3555126862 cycles Aug 13 07:08:51.243904 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 07:08:51.243919 kernel: tsc: Detected 1995.312 MHz processor Aug 13 07:08:51.243935 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:08:51.243950 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:08:51.243965 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 13 07:08:51.243981 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 07:08:51.243996 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:08:51.244014 kernel: ACPI: Early table checksum verification disabled Aug 13 07:08:51.244028 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 13 07:08:51.244043 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:51.244058 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:51.244073 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:51.244087 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 07:08:51.244102 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:51.244116 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:51.244131 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:51.244149 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:08:51.244163 kernel: ACPI: Reserving FACP table 
memory at [mem 0x7ffe176a-0x7ffe17dd] Aug 13 07:08:51.244232 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Aug 13 07:08:51.244247 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 07:08:51.244262 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 13 07:08:51.244276 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 13 07:08:51.244292 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 13 07:08:51.244313 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 13 07:08:51.244332 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 07:08:51.244347 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 07:08:51.244363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 07:08:51.244378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 07:08:51.244400 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Aug 13 07:08:51.244416 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Aug 13 07:08:51.244435 kernel: Zone ranges: Aug 13 07:08:51.244450 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:08:51.244466 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 13 07:08:51.244481 kernel: Normal empty Aug 13 07:08:51.244497 kernel: Movable zone start for each node Aug 13 07:08:51.244512 kernel: Early memory node ranges Aug 13 07:08:51.244528 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 07:08:51.244543 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 13 07:08:51.244558 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 13 07:08:51.244577 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:08:51.244593 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 07:08:51.244613 kernel: On node 0, zone DMA32: 37 pages in 
unavailable ranges Aug 13 07:08:51.244628 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 07:08:51.244643 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 07:08:51.244659 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 07:08:51.244675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 07:08:51.244690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 07:08:51.244706 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:08:51.244725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 07:08:51.244741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 07:08:51.244754 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:08:51.244768 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 07:08:51.244784 kernel: TSC deadline timer available Aug 13 07:08:51.244799 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 07:08:51.244815 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 07:08:51.244831 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 13 07:08:51.244852 kernel: Booting paravirtualized kernel on KVM Aug 13 07:08:51.244868 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:08:51.244889 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 07:08:51.244904 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Aug 13 07:08:51.244920 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Aug 13 07:08:51.244935 kernel: pcpu-alloc: [0] 0 1 Aug 13 07:08:51.244950 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 13 07:08:51.244968 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:08:51.244985 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 07:08:51.245000 kernel: random: crng init done Aug 13 07:08:51.245019 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:08:51.245034 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 07:08:51.245050 kernel: Fallback order for Node 0: 0 Aug 13 07:08:51.245065 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Aug 13 07:08:51.245080 kernel: Policy zone: DMA32 Aug 13 07:08:51.245096 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:08:51.245113 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 125148K reserved, 0K cma-reserved) Aug 13 07:08:51.245128 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 07:08:51.245148 kernel: Kernel/User page tables isolation: enabled Aug 13 07:08:51.245164 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:08:51.247745 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:08:51.247772 kernel: Dynamic Preempt: voluntary Aug 13 07:08:51.247788 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:08:51.247805 kernel: rcu: RCU event tracing is enabled. Aug 13 07:08:51.247821 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 07:08:51.247837 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:08:51.247852 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:08:51.247868 kernel: Tracing variant of Tasks RCU enabled. 
Aug 13 07:08:51.247894 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:08:51.247909 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 07:08:51.247926 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 07:08:51.247941 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 07:08:51.247964 kernel: Console: colour VGA+ 80x25 Aug 13 07:08:51.247980 kernel: printk: console [tty0] enabled Aug 13 07:08:51.247996 kernel: printk: console [ttyS0] enabled Aug 13 07:08:51.248011 kernel: ACPI: Core revision 20230628 Aug 13 07:08:51.248027 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 07:08:51.248047 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:08:51.248063 kernel: x2apic enabled Aug 13 07:08:51.248078 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:08:51.248094 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 07:08:51.248111 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Aug 13 07:08:51.248127 kernel: Calibrating delay loop (skipped) preset value.. 
3990.62 BogoMIPS (lpj=1995312) Aug 13 07:08:51.248143 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 07:08:51.248159 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 07:08:51.248209 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:08:51.248226 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 07:08:51.248243 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:08:51.248263 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 07:08:51.248279 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:08:51.248296 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:08:51.248312 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 07:08:51.248329 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 07:08:51.248346 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 07:08:51.248374 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:08:51.248391 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:08:51.248407 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:08:51.248423 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:08:51.248439 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 07:08:51.248456 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:08:51.248473 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:08:51.248489 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:08:51.248510 kernel: landlock: Up and running. Aug 13 07:08:51.248526 kernel: SELinux: Initializing. 
Aug 13 07:08:51.248543 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:08:51.248560 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:08:51.248576 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 13 07:08:51.248593 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:08:51.248611 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:08:51.248628 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:08:51.248644 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Aug 13 07:08:51.248666 kernel: signal: max sigframe size: 1776 Aug 13 07:08:51.248683 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:08:51.248700 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:08:51.248717 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:08:51.248734 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:08:51.248749 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:08:51.248764 kernel: .... 
node #0, CPUs: #1 Aug 13 07:08:51.248781 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:08:51.248804 kernel: smpboot: Max logical packages: 1 Aug 13 07:08:51.248826 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS) Aug 13 07:08:51.248842 kernel: devtmpfs: initialized Aug 13 07:08:51.248859 kernel: x86/mm: Memory block size: 128MB Aug 13 07:08:51.248877 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:08:51.248893 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 07:08:51.248910 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:08:51.248927 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:08:51.248944 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:08:51.248961 kernel: audit: type=2000 audit(1755068929.679:1): state=initialized audit_enabled=0 res=1 Aug 13 07:08:51.248982 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:08:51.248998 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:08:51.249015 kernel: cpuidle: using governor menu Aug 13 07:08:51.249032 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:08:51.249049 kernel: dca service started, version 1.12.1 Aug 13 07:08:51.249065 kernel: PCI: Using configuration type 1 for base access Aug 13 07:08:51.249082 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 07:08:51.249099 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:08:51.249115 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:08:51.249136 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:08:51.249153 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:08:51.249169 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:08:51.250486 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:08:51.250505 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:08:51.250522 kernel: ACPI: Interpreter enabled Aug 13 07:08:51.250539 kernel: ACPI: PM: (supports S0 S5) Aug 13 07:08:51.250556 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:08:51.250573 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:08:51.250589 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 07:08:51.250615 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 13 07:08:51.250632 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:08:51.251263 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:08:51.251538 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 13 07:08:51.251707 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 13 07:08:51.251735 kernel: acpiphp: Slot [3] registered Aug 13 07:08:51.251751 kernel: acpiphp: Slot [4] registered Aug 13 07:08:51.251771 kernel: acpiphp: Slot [5] registered Aug 13 07:08:51.251787 kernel: acpiphp: Slot [6] registered Aug 13 07:08:51.251803 kernel: acpiphp: Slot [7] registered Aug 13 07:08:51.251820 kernel: acpiphp: Slot [8] registered Aug 13 07:08:51.251836 kernel: acpiphp: Slot [9] registered Aug 13 07:08:51.251852 kernel: acpiphp: Slot [10] registered Aug 13 07:08:51.251869 kernel: acpiphp: 
Slot [11] registered Aug 13 07:08:51.251885 kernel: acpiphp: Slot [12] registered Aug 13 07:08:51.251901 kernel: acpiphp: Slot [13] registered Aug 13 07:08:51.251922 kernel: acpiphp: Slot [14] registered Aug 13 07:08:51.251944 kernel: acpiphp: Slot [15] registered Aug 13 07:08:51.251960 kernel: acpiphp: Slot [16] registered Aug 13 07:08:51.251976 kernel: acpiphp: Slot [17] registered Aug 13 07:08:51.251993 kernel: acpiphp: Slot [18] registered Aug 13 07:08:51.252009 kernel: acpiphp: Slot [19] registered Aug 13 07:08:51.252026 kernel: acpiphp: Slot [20] registered Aug 13 07:08:51.252042 kernel: acpiphp: Slot [21] registered Aug 13 07:08:51.252058 kernel: acpiphp: Slot [22] registered Aug 13 07:08:51.252078 kernel: acpiphp: Slot [23] registered Aug 13 07:08:51.252094 kernel: acpiphp: Slot [24] registered Aug 13 07:08:51.252111 kernel: acpiphp: Slot [25] registered Aug 13 07:08:51.252127 kernel: acpiphp: Slot [26] registered Aug 13 07:08:51.252143 kernel: acpiphp: Slot [27] registered Aug 13 07:08:51.252159 kernel: acpiphp: Slot [28] registered Aug 13 07:08:51.253807 kernel: acpiphp: Slot [29] registered Aug 13 07:08:51.253834 kernel: acpiphp: Slot [30] registered Aug 13 07:08:51.253851 kernel: acpiphp: Slot [31] registered Aug 13 07:08:51.253867 kernel: PCI host bridge to bus 0000:00 Aug 13 07:08:51.254213 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 07:08:51.254370 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 07:08:51.254518 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 07:08:51.254661 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 13 07:08:51.254804 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 13 07:08:51.254947 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 07:08:51.255226 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 13 07:08:51.255441 kernel: pci 
0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 13 07:08:51.255648 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 13 07:08:51.255810 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Aug 13 07:08:51.255994 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 07:08:51.256160 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 07:08:51.258369 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 07:08:51.258549 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 07:08:51.258747 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Aug 13 07:08:51.258908 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Aug 13 07:08:51.259106 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 13 07:08:51.259298 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 13 07:08:51.259463 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 13 07:08:51.259685 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 13 07:08:51.259849 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 13 07:08:51.260008 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 13 07:08:51.260167 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Aug 13 07:08:51.260460 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 13 07:08:51.260619 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 07:08:51.260804 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:08:51.260981 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Aug 13 07:08:51.261143 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Aug 13 07:08:51.261323 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 13 07:08:51.261508 kernel: pci 0000:00:04.0: [1af4:1000] type 00 
class 0x020000 Aug 13 07:08:51.261824 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Aug 13 07:08:51.261992 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Aug 13 07:08:51.262151 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 13 07:08:51.262384 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Aug 13 07:08:51.262545 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Aug 13 07:08:51.262702 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Aug 13 07:08:51.262860 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 13 07:08:51.263047 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:08:51.263232 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 07:08:51.263390 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Aug 13 07:08:51.263599 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 13 07:08:51.263794 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:08:51.263956 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Aug 13 07:08:51.264114 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Aug 13 07:08:51.264295 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Aug 13 07:08:51.264494 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Aug 13 07:08:51.264654 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Aug 13 07:08:51.264818 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 13 07:08:51.264839 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 07:08:51.264856 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 07:08:51.264873 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 07:08:51.264889 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 07:08:51.264906 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 
Aug 13 07:08:51.264922 kernel: iommu: Default domain type: Translated Aug 13 07:08:51.264943 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:08:51.264959 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:08:51.264976 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 07:08:51.264992 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 07:08:51.265009 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 13 07:08:51.265196 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 13 07:08:51.265368 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 13 07:08:51.265538 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 07:08:51.265571 kernel: vgaarb: loaded Aug 13 07:08:51.265598 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 07:08:51.265616 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 07:08:51.265633 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 07:08:51.265650 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:08:51.265667 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:08:51.265684 kernel: pnp: PnP ACPI init Aug 13 07:08:51.265701 kernel: pnp: PnP ACPI: found 4 devices Aug 13 07:08:51.265718 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:08:51.265735 kernel: NET: Registered PF_INET protocol family Aug 13 07:08:51.265754 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:08:51.265769 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 07:08:51.265786 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:08:51.265810 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:08:51.265827 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 
07:08:51.265844 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 07:08:51.265860 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:08:51.265877 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:08:51.265899 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:08:51.265916 kernel: NET: Registered PF_XDP protocol family Aug 13 07:08:51.266093 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 07:08:51.267594 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 07:08:51.267757 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 07:08:51.267897 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 13 07:08:51.268037 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 13 07:08:51.268225 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 13 07:08:51.268388 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 07:08:51.268420 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 13 07:08:51.268579 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 43213 usecs Aug 13 07:08:51.268600 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:08:51.268617 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:08:51.268635 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Aug 13 07:08:51.268652 kernel: Initialise system trusted keyrings Aug 13 07:08:51.268669 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 07:08:51.268685 kernel: Key type asymmetric registered Aug 13 07:08:51.268707 kernel: Asymmetric key parser 'x509' registered Aug 13 07:08:51.268723 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:08:51.268740 kernel: io scheduler mq-deadline registered Aug 13 07:08:51.268754 kernel: io 
scheduler kyber registered Aug 13 07:08:51.268768 kernel: io scheduler bfq registered Aug 13 07:08:51.268785 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:08:51.268802 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 13 07:08:51.268819 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 07:08:51.268836 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 07:08:51.268856 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:08:51.268872 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:08:51.268889 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 07:08:51.268906 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 07:08:51.268922 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 07:08:51.268939 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:08:51.272377 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 07:08:51.272646 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 07:08:51.272893 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T07:08:50 UTC (1755068930) Aug 13 07:08:51.273050 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 07:08:51.273071 kernel: intel_pstate: CPU model not supported Aug 13 07:08:51.273089 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:08:51.273106 kernel: Segment Routing with IPv6 Aug 13 07:08:51.273122 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:08:51.273139 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:08:51.273156 kernel: Key type dns_resolver registered Aug 13 07:08:51.273173 kernel: IPI shorthand broadcast: enabled Aug 13 07:08:51.273218 kernel: sched_clock: Marking stable (1211005360, 156218147)->(1525817843, -158594336) Aug 13 07:08:51.273234 kernel: registered taskstats version 1 Aug 13 07:08:51.273251 kernel: Loading compiled-in X.509 certificates Aug 13 07:08:51.273268 kernel: Loaded X.509 
cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:08:51.273285 kernel: Key type .fscrypt registered Aug 13 07:08:51.273302 kernel: Key type fscrypt-provisioning registered Aug 13 07:08:51.273319 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 07:08:51.273336 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:08:51.273356 kernel: ima: No architecture policies found Aug 13 07:08:51.273372 kernel: clk: Disabling unused clocks Aug 13 07:08:51.273389 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:08:51.273406 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:08:51.273423 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:08:51.273464 kernel: Run /init as init process Aug 13 07:08:51.273486 kernel: with arguments: Aug 13 07:08:51.273503 kernel: /init Aug 13 07:08:51.273520 kernel: with environment: Aug 13 07:08:51.273537 kernel: HOME=/ Aug 13 07:08:51.273557 kernel: TERM=linux Aug 13 07:08:51.273588 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:08:51.273620 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:08:51.273642 systemd[1]: Detected virtualization kvm. Aug 13 07:08:51.273661 systemd[1]: Detected architecture x86-64. Aug 13 07:08:51.273679 systemd[1]: Running in initrd. Aug 13 07:08:51.273697 systemd[1]: No hostname configured, using default hostname. Aug 13 07:08:51.273719 systemd[1]: Hostname set to . Aug 13 07:08:51.273738 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:08:51.273754 systemd[1]: Queued start job for default target initrd.target. 
Aug 13 07:08:51.273769 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:08:51.273788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:08:51.273808 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:08:51.273826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:08:51.273844 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:08:51.273867 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:08:51.273888 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:08:51.273907 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:08:51.273925 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:08:51.273944 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:08:51.273962 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:08:51.273980 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:08:51.274002 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:08:51.274020 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:08:51.274042 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:08:51.274060 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:08:51.274079 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:08:51.274101 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Aug 13 07:08:51.274119 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:08:51.274141 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:08:51.274158 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:08:51.276234 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:08:51.276286 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:08:51.276307 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:08:51.276325 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:08:51.276344 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:08:51.276372 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:08:51.276390 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:08:51.276408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:08:51.276492 systemd-journald[184]: Collecting audit messages is disabled.
Aug 13 07:08:51.276542 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:08:51.276561 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:08:51.276580 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:08:51.276600 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:08:51.276624 systemd-journald[184]: Journal started
Aug 13 07:08:51.276664 systemd-journald[184]: Runtime Journal (/run/log/journal/34af93d98d544275bb48b29fae358118) is 4.9M, max 39.3M, 34.4M free.
Aug 13 07:08:51.274448 systemd-modules-load[185]: Inserted module 'overlay'
Aug 13 07:08:51.339440 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:08:51.339481 kernel: Bridge firewalling registered
Aug 13 07:08:51.339501 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:08:51.319357 systemd-modules-load[185]: Inserted module 'br_netfilter'
Aug 13 07:08:51.340598 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:08:51.341421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:08:51.356561 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:08:51.360464 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:08:51.370479 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:08:51.373024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:08:51.390431 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:08:51.397355 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:08:51.404254 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:08:51.406414 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:08:51.414530 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:08:51.426809 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:08:51.429157 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
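The bridge message in the kernel log above means br_netfilter is no longer pulled in implicitly when the bridge module loads; on hosts that still need iptables/arptables to see bridged traffic, the usual fix is a modules-load.d fragment so the module is loaded at every boot. A sketch (the file name is arbitrary):

```
# /etc/modules-load.d/br_netfilter.conf — load br_netfilter at boot so
# bridged traffic stays visible to the netfilter hooks
br_netfilter
```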
Aug 13 07:08:51.443940 dracut-cmdline[216]: dracut-dracut-053
Aug 13 07:08:51.449842 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:08:51.468612 systemd-resolved[217]: Positive Trust Anchors:
Aug 13 07:08:51.468646 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:08:51.468681 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:08:51.478711 systemd-resolved[217]: Defaulting to hostname 'linux'.
Aug 13 07:08:51.481846 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:08:51.482587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:08:51.567228 kernel: SCSI subsystem initialized
Aug 13 07:08:51.582228 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:08:51.599213 kernel: iscsi: registered transport (tcp)
Aug 13 07:08:51.632596 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:08:51.632701 kernel: QLogic iSCSI HBA Driver
Aug 13 07:08:51.721941 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:08:51.734562 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:08:51.774268 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:08:51.774355 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:08:51.775642 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:08:51.837470 kernel: raid6: avx2x4 gen() 23544 MB/s
Aug 13 07:08:51.855974 kernel: raid6: avx2x2 gen() 25031 MB/s
Aug 13 07:08:51.872492 kernel: raid6: avx2x1 gen() 20214 MB/s
Aug 13 07:08:51.872588 kernel: raid6: using algorithm avx2x2 gen() 25031 MB/s
Aug 13 07:08:51.891535 kernel: raid6: .... xor() 14537 MB/s, rmw enabled
Aug 13 07:08:51.891653 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:08:51.921252 kernel: xor: automatically using best checksumming function avx
Aug 13 07:08:52.120229 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:08:52.139007 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:08:52.147592 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:08:52.180021 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Aug 13 07:08:52.187376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:08:52.200037 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:08:52.234615 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Aug 13 07:08:52.285985 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
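The raid6 lines in the log above are the kernel benchmarking each SIMD syndrome generator and keeping the fastest one (avx2x2 here); the selection itself is just an argmax over the measured gen() throughputs. A toy sketch of that choice, using the numbers reported in this log:

```python
# gen() throughputs in MB/s, as printed by the raid6 benchmark above
gen_results = {"avx2x4": 23544, "avx2x2": 25031, "avx2x1": 20214}

# The kernel keeps whichever algorithm benchmarked fastest.
best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
```

Note that avx2x2 beating avx2x4 is plausible on this KVM guest: wider unrolling is not always faster once cache and register pressure are factored in, which is exactly why the kernel measures instead of assuming.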
Aug 13 07:08:52.294599 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:08:52.376754 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:08:52.385546 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:08:52.415971 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:08:52.420017 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:08:52.421299 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:08:52.424495 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:08:52.431455 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:08:52.463052 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:08:52.494207 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Aug 13 07:08:52.499385 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:08:52.518247 kernel: scsi host0: Virtio SCSI HBA
Aug 13 07:08:52.522612 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Aug 13 07:08:52.565704 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:08:52.565810 kernel: GPT:9289727 != 125829119
Aug 13 07:08:52.565824 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:08:52.565835 kernel: GPT:9289727 != 125829119
Aug 13 07:08:52.565845 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:08:52.565856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:08:52.567692 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:08:52.567891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:08:52.574808 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
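The GPT complaints above ("9289727 != 125829119") are the classic signature of an image written for a smaller disk: the backup GPT header sits at the last sector of the original image, not at the end of the grown virtio disk. The sector arithmetic, using the values from this log (LBAs are zero-based, hence the +1):

```python
SECTOR = 512                 # logical block size reported for vda above
disk_last_lba = 125829119    # actual last LBA of the droplet disk
alt_header_lba = 9289727     # where the primary header expects the backup header

disk_gib = (disk_last_lba + 1) * SECTOR / 2**30
image_gib = (alt_header_lba + 1) * SECTOR / 2**30
print(f"disk: {disk_gib:.1f} GiB, original image: {image_gib:.2f} GiB")
# i.e. a ~4.4 GiB image dropped onto a 60 GiB disk
```

This is expected on a first boot from a cloud image; the disk-uuid/first-boot machinery later rewrites the headers, which is why the complaint does not recur.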
Aug 13 07:08:52.575505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:08:52.575691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:08:52.577116 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:08:52.587493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:08:52.609256 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Aug 13 07:08:52.609486 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:08:52.609500 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:08:52.609511 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Aug 13 07:08:52.609846 kernel: libata version 3.00 loaded.
Aug 13 07:08:52.631218 kernel: ACPI: bus type USB registered
Aug 13 07:08:52.640356 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 13 07:08:52.645733 kernel: usbcore: registered new interface driver usbfs
Aug 13 07:08:52.645806 kernel: usbcore: registered new interface driver hub
Aug 13 07:08:52.664236 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (448)
Aug 13 07:08:52.668206 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (459)
Aug 13 07:08:52.678623 kernel: scsi host1: ata_piix
Aug 13 07:08:52.678961 kernel: usbcore: registered new device driver usb
Aug 13 07:08:52.694472 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:08:52.748978 kernel: scsi host2: ata_piix
Aug 13 07:08:52.749298 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Aug 13 07:08:52.749314 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Aug 13 07:08:52.753491 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:08:52.759487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:08:52.767159 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:08:52.774343 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:08:52.775270 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:08:52.789498 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:08:52.792640 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:08:52.799560 disk-uuid[535]: Primary Header is updated.
Aug 13 07:08:52.799560 disk-uuid[535]: Secondary Entries is updated.
Aug 13 07:08:52.799560 disk-uuid[535]: Secondary Header is updated.
Aug 13 07:08:52.806220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:08:52.812393 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:08:52.851525 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:08:52.920006 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Aug 13 07:08:52.920309 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Aug 13 07:08:52.923462 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Aug 13 07:08:52.923782 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Aug 13 07:08:52.926877 kernel: hub 1-0:1.0: USB hub found
Aug 13 07:08:52.927254 kernel: hub 1-0:1.0: 2 ports detected
Aug 13 07:08:53.815235 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:08:53.816634 disk-uuid[536]: The operation has completed successfully.
Aug 13 07:08:53.870116 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:08:53.870274 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:08:53.879498 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:08:53.898971 sh[563]: Success
Aug 13 07:08:53.921241 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 07:08:53.996635 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:08:53.999470 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:08:54.001244 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:08:54.032220 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:08:54.032289 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:08:54.034271 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:08:54.036918 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:08:54.036988 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:08:54.045761 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:08:54.047067 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:08:54.055447 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:08:54.058411 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:08:54.073796 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:08:54.073967 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:08:54.075742 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:08:54.081266 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:08:54.100017 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:08:54.099595 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:08:54.109276 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:08:54.117558 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:08:54.222001 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:08:54.238223 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:08:54.276120 systemd-networkd[749]: lo: Link UP
Aug 13 07:08:54.276140 systemd-networkd[749]: lo: Gained carrier
Aug 13 07:08:54.281683 systemd-networkd[749]: Enumeration completed
Aug 13 07:08:54.282604 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:08:54.283596 systemd[1]: Reached target network.target - Network.
Aug 13 07:08:54.284886 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 13 07:08:54.284890 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Aug 13 07:08:54.289264 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:08:54.289268 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:08:54.291420 systemd-networkd[749]: eth0: Link UP
Aug 13 07:08:54.291424 systemd-networkd[749]: eth0: Gained carrier
Aug 13 07:08:54.291435 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 13 07:08:54.296168 systemd-networkd[749]: eth1: Link UP
Aug 13 07:08:54.296172 systemd-networkd[749]: eth1: Gained carrier
Aug 13 07:08:54.296204 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:08:54.311928 ignition[654]: Ignition 2.19.0
Aug 13 07:08:54.312824 ignition[654]: Stage: fetch-offline
Aug 13 07:08:54.312907 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:54.314242 systemd-networkd[749]: eth0: DHCPv4 address 24.199.106.199/20, gateway 24.199.96.1 acquired from 169.254.169.253
Aug 13 07:08:54.312918 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:54.316641 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:08:54.313079 ignition[654]: parsed url from cmdline: ""
Aug 13 07:08:54.313084 ignition[654]: no config URL provided
Aug 13 07:08:54.313093 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:08:54.313106 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:08:54.313115 ignition[654]: failed to fetch config: resource requires networking
Aug 13 07:08:54.313476 ignition[654]: Ignition finished successfully
Aug 13 07:08:54.322614 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.25/20 acquired from 169.254.169.253
Aug 13 07:08:54.327289 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
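The repeated "based on potentially unpredictable interface name" warnings come from .network files whose [Match] section keys on the kernel-assigned name (eth0/eth1), which is not guaranteed stable across boots. Matching on a stable attribute such as the MAC address silences the warning; a hypothetical drop-in (the MAC below is a placeholder, not taken from this droplet):

```
# /etc/systemd/network/05-eth0.network — hypothetical override that matches
# the NIC by MAC address instead of the kernel-assigned name
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Network]
DHCP=ipv4
```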
Aug 13 07:08:54.358883 ignition[756]: Ignition 2.19.0
Aug 13 07:08:54.358903 ignition[756]: Stage: fetch
Aug 13 07:08:54.359138 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:54.359150 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:54.359317 ignition[756]: parsed url from cmdline: ""
Aug 13 07:08:54.359324 ignition[756]: no config URL provided
Aug 13 07:08:54.359333 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:08:54.359346 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:08:54.359378 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Aug 13 07:08:54.376166 ignition[756]: GET result: OK
Aug 13 07:08:54.377276 ignition[756]: parsing config with SHA512: bbde8aeef629badd808bdf0c27ac1e5ef1da3538b8b93c3b48d852aaac2708fe71a2b76779786c85c36cef8a5700f689b6e702aa9ce764d84353b1baa786b744
Aug 13 07:08:54.388934 unknown[756]: fetched base config from "system"
Aug 13 07:08:54.388952 unknown[756]: fetched base config from "system"
Aug 13 07:08:54.389480 ignition[756]: fetch: fetch complete
Aug 13 07:08:54.388963 unknown[756]: fetched user config from "digitalocean"
Aug 13 07:08:54.389490 ignition[756]: fetch: fetch passed
Aug 13 07:08:54.391713 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 07:08:54.389692 ignition[756]: Ignition finished successfully
Aug 13 07:08:54.401708 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:08:54.425308 ignition[764]: Ignition 2.19.0
Aug 13 07:08:54.425322 ignition[764]: Stage: kargs
Aug 13 07:08:54.425671 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:54.425694 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:54.430747 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
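The "parsing config with SHA512" line above is Ignition logging a digest of the user-data it just fetched from the DigitalOcean metadata endpoint, which makes it easy to confirm later which config a machine actually consumed. The same digest can be reproduced with a stock hash call; a sketch with a hypothetical payload (the JSON body below is illustrative, not this droplet's actual user-data):

```python
import hashlib

# Hypothetical stand-in for whatever
# http://169.254.169.254/metadata/v1/user-data returned on this boot.
user_data = b'{"ignition": {"version": "3.0.0"}}'

# Same shape as the SHA512 Ignition prints: 128 hex characters.
digest = hashlib.sha512(user_data).hexdigest()
print(len(digest), digest[:16] + "...")
```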
Aug 13 07:08:54.426839 ignition[764]: kargs: kargs passed
Aug 13 07:08:54.426911 ignition[764]: Ignition finished successfully
Aug 13 07:08:54.444522 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:08:54.468242 ignition[770]: Ignition 2.19.0
Aug 13 07:08:54.468259 ignition[770]: Stage: disks
Aug 13 07:08:54.468461 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:54.468473 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:54.470171 ignition[770]: disks: disks passed
Aug 13 07:08:54.471645 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:08:54.470291 ignition[770]: Ignition finished successfully
Aug 13 07:08:54.479479 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:08:54.480824 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:08:54.481692 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:08:54.484345 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:08:54.485972 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:08:54.495680 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:08:54.516574 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:08:54.521095 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:08:54.531449 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:08:54.661200 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:08:54.662059 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:08:54.663805 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:08:54.671400 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:08:54.675518 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:08:54.679407 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Aug 13 07:08:54.688450 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 07:08:54.692875 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (787)
Aug 13 07:08:54.691813 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:08:54.691854 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:08:54.703356 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:08:54.703397 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:08:54.703415 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:08:54.706041 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:08:54.716597 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:08:54.725234 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:08:54.733442 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:08:54.809599 coreos-metadata[790]: Aug 13 07:08:54.809 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:08:54.823596 coreos-metadata[790]: Aug 13 07:08:54.823 INFO Fetch successful
Aug 13 07:08:54.825810 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:08:54.826836 coreos-metadata[789]: Aug 13 07:08:54.826 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:08:54.831421 coreos-metadata[790]: Aug 13 07:08:54.831 INFO wrote hostname ci-4081.3.5-0-ae45d59eaf to /sysroot/etc/hostname
Aug 13 07:08:54.834015 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:08:54.837458 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:08:54.841096 coreos-metadata[789]: Aug 13 07:08:54.839 INFO Fetch successful
Aug 13 07:08:54.850657 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:08:54.851689 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Aug 13 07:08:54.851891 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Aug 13 07:08:54.862010 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:08:54.993482 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:08:55.000445 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:08:55.002375 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:08:55.019309 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:08:55.030548 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:08:55.049262 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:08:55.063396 ignition[910]: INFO : Ignition 2.19.0
Aug 13 07:08:55.063396 ignition[910]: INFO : Stage: mount
Aug 13 07:08:55.066530 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:55.066530 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:55.066530 ignition[910]: INFO : mount: mount passed
Aug 13 07:08:55.066530 ignition[910]: INFO : Ignition finished successfully
Aug 13 07:08:55.067711 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:08:55.075418 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:08:55.091609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:08:55.118234 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (920)
Aug 13 07:08:55.121656 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:08:55.121748 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:08:55.123348 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:08:55.128248 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:08:55.131302 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:08:55.167562 ignition[937]: INFO : Ignition 2.19.0
Aug 13 07:08:55.168709 ignition[937]: INFO : Stage: files
Aug 13 07:08:55.169386 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:55.169386 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:55.171271 ignition[937]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:08:55.172629 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:08:55.172629 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:08:55.177273 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:08:55.178656 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:08:55.178656 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:08:55.178212 unknown[937]: wrote ssh authorized keys file for user: core
Aug 13 07:08:55.182153 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 07:08:55.182153 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Aug 13 07:08:55.221015 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:08:55.307172 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 07:08:55.307172 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 07:08:55.307172 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 07:08:55.510772 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 07:08:55.607275 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 07:08:55.607275 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:08:55.610466 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Aug 13 07:08:56.001238 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 07:08:56.048513 systemd-networkd[749]: eth0: Gained IPv6LL
Aug 13 07:08:56.049447 systemd-networkd[749]: eth1: Gained IPv6LL
Aug 13 07:08:56.565469 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 07:08:56.565469 ignition[937]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 07:08:56.568148 ignition[937]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:08:56.568148 ignition[937]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:08:56.568148 ignition[937]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 07:08:56.568148 ignition[937]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:08:56.568148 ignition[937]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:08:56.568148 ignition[937]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:08:56.568148 ignition[937]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:08:56.568148 ignition[937]: INFO : files: files passed
Aug 13 07:08:56.568148 ignition[937]: INFO : Ignition finished successfully
Aug 13 07:08:56.571059 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:08:56.579498 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:08:56.582826 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:08:56.588298 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:08:56.588472 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:08:56.614222 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:08:56.614222 initrd-setup-root-after-ignition[966]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:08:56.618624 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:08:56.620997 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:08:56.622374 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:08:56.633526 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:08:56.670018 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:08:56.670169 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:08:56.672623 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:08:56.673342 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:08:56.674262 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:08:56.681486 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:08:56.698565 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:08:56.711553 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:08:56.726811 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:08:56.727595 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:08:56.729202 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:08:56.730997 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:08:56.731153 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:08:56.732949 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:08:56.733966 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:08:56.735331 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:08:56.736577 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:08:56.737970 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:08:56.739307 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:08:56.740886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:08:56.742520 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:08:56.743835 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:08:56.745109 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:08:56.746108 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:08:56.746264 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:08:56.747657 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:08:56.748307 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:08:56.749490 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:08:56.749801 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:08:56.751000 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:08:56.751204 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:08:56.752758 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:08:56.752984 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:08:56.754533 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:08:56.754746 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:08:56.755811 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 07:08:56.755913 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:08:56.763498 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:08:56.766367 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:08:56.767291 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:08:56.768224 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:08:56.772462 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:08:56.772656 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:08:56.788581 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:08:56.788743 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:08:56.799228 ignition[990]: INFO : Ignition 2.19.0
Aug 13 07:08:56.799228 ignition[990]: INFO : Stage: umount
Aug 13 07:08:56.799228 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:08:56.799228 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:08:56.811209 ignition[990]: INFO : umount: umount passed
Aug 13 07:08:56.812167 ignition[990]: INFO : Ignition finished successfully
Aug 13 07:08:56.815618 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:08:56.815789 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:08:56.817841 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:08:56.817916 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:08:56.832536 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:08:56.832648 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:08:56.834109 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:08:56.834267 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:08:56.835910 systemd[1]: Stopped target network.target - Network.
Aug 13 07:08:56.837800 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:08:56.837923 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:08:56.839173 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:08:56.874706 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:08:56.881375 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:08:56.909584 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:08:56.917262 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:08:56.918835 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:08:56.918895 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:08:56.920115 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:08:56.920208 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:08:56.921165 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:08:56.921276 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:08:56.930424 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:08:56.930520 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:08:56.933018 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:08:56.934800 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:08:56.937277 systemd-networkd[749]: eth0: DHCPv6 lease lost
Aug 13 07:08:56.938526 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:08:56.940258 systemd-networkd[749]: eth1: DHCPv6 lease lost
Aug 13 07:08:56.941307 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:08:56.941416 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:08:56.942792 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:08:56.944343 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:08:56.946773 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:08:56.946920 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:08:56.952931 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:08:56.953044 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:08:56.955033 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:08:56.955131 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:08:56.969437 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:08:56.972307 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:08:56.972431 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:08:56.973802 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:08:56.973872 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:08:56.975498 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:08:56.975563 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:08:56.977065 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:08:56.977145 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:08:56.978615 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:08:57.000252 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:08:57.001384 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:08:57.003108 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:08:57.003294 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:08:57.006750 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:08:57.006840 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:08:57.008572 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:08:57.008640 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:08:57.010088 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:08:57.010172 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:08:57.012372 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:08:57.012496 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:08:57.014091 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:08:57.014266 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:08:57.023488 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:08:57.024218 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:08:57.024329 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:08:57.026822 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:08:57.026913 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:08:57.029401 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:08:57.029495 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:08:57.031960 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:08:57.032052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:08:57.048498 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:08:57.048699 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:08:57.051222 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:08:57.057692 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:08:57.078711 systemd[1]: Switching root.
Aug 13 07:08:57.185321 systemd-journald[184]: Journal stopped
Aug 13 07:08:58.695431 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:08:58.695572 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:08:58.695606 kernel: SELinux: policy capability open_perms=1
Aug 13 07:08:58.695634 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:08:58.695653 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:08:58.695672 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:08:58.695693 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:08:58.695714 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:08:58.695733 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:08:58.695753 kernel: audit: type=1403 audit(1755068937.386:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:08:58.695785 systemd[1]: Successfully loaded SELinux policy in 56.902ms.
Aug 13 07:08:58.695824 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.185ms.
Aug 13 07:08:58.695850 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:08:58.695874 systemd[1]: Detected virtualization kvm.
Aug 13 07:08:58.695892 systemd[1]: Detected architecture x86-64.
Aug 13 07:08:58.695910 systemd[1]: Detected first boot.
Aug 13 07:08:58.695933 systemd[1]: Hostname set to .
Aug 13 07:08:58.695954 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:08:58.695985 zram_generator::config[1034]: No configuration found.
Aug 13 07:08:58.696008 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:08:58.696027 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:08:58.696049 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:08:58.696070 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:08:58.696094 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:08:58.696116 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:08:58.696137 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:08:58.696166 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:08:58.697032 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:08:58.697069 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:08:58.697092 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:08:58.697113 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:08:58.697135 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:08:58.697153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:08:58.697266 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:08:58.697294 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:08:58.697340 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:08:58.697362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:08:58.697382 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:08:58.697402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:08:58.697422 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:08:58.697452 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:08:58.697482 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:08:58.697517 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:08:58.697540 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:08:58.697562 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:08:58.697583 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:08:58.697603 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:08:58.697625 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:08:58.697646 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:08:58.697668 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:08:58.697699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:08:58.697720 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:08:58.697741 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:08:58.697763 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:08:58.697785 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:08:58.697812 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:08:58.697834 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:08:58.697856 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:08:58.697878 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:08:58.697908 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:08:58.697930 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:08:58.697951 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:08:58.697973 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:08:58.697995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:08:58.698015 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:08:58.698036 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:08:58.698056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:08:58.698081 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:08:58.698111 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:08:58.698133 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:08:58.698151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:08:58.698169 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:08:58.698207 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:08:58.698230 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:08:58.698250 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:08:58.698271 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:08:58.698302 kernel: fuse: init (API version 7.39)
Aug 13 07:08:58.698323 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:08:58.698343 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:08:58.698363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:08:58.698384 kernel: loop: module loaded
Aug 13 07:08:58.698404 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:08:58.698426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:08:58.698446 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:08:58.698468 systemd[1]: Stopped verity-setup.service.
Aug 13 07:08:58.698499 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:08:58.698520 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:08:58.698542 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:08:58.698564 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:08:58.698586 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:08:58.698617 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:08:58.698639 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:08:58.698661 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:08:58.698691 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:08:58.698713 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:08:58.698742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:08:58.698764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:08:58.698785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:08:58.698805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:08:58.698826 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:08:58.698848 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:08:58.698870 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:08:58.698891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:08:58.698913 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:08:58.698943 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:08:58.698966 kernel: ACPI: bus type drm_connector registered
Aug 13 07:08:58.698986 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:08:58.699006 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:08:58.699029 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:08:58.699110 systemd-journald[1114]: Collecting audit messages is disabled.
Aug 13 07:08:58.699150 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:08:58.701168 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:08:58.701268 systemd-journald[1114]: Journal started
Aug 13 07:08:58.701317 systemd-journald[1114]: Runtime Journal (/run/log/journal/34af93d98d544275bb48b29fae358118) is 4.9M, max 39.3M, 34.4M free.
Aug 13 07:08:58.708450 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:08:58.201091 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:08:58.228591 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 07:08:58.229138 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:08:58.719314 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:08:58.728273 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:08:58.731227 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:08:58.735228 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:08:58.759269 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:08:58.775871 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:08:58.775960 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:08:58.786502 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:08:58.797791 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:08:58.797933 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:08:58.803251 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:08:58.814309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:08:58.839363 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:08:58.851227 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:08:58.861394 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:08:58.864884 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:08:58.866038 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:08:58.868095 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:08:58.870387 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:08:58.893285 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:08:58.922731 kernel: loop0: detected capacity change from 0 to 8
Aug 13 07:08:58.929233 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:08:58.950031 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:08:58.945298 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:08:58.956456 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:08:58.970441 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:08:58.986742 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:08:59.007286 systemd-journald[1114]: Time spent on flushing to /var/log/journal/34af93d98d544275bb48b29fae358118 is 83ms for 999 entries.
Aug 13 07:08:59.007286 systemd-journald[1114]: System Journal (/var/log/journal/34af93d98d544275bb48b29fae358118) is 8.0M, max 195.6M, 187.6M free.
Aug 13 07:08:59.105773 systemd-journald[1114]: Received client request to flush runtime journal.
Aug 13 07:08:59.105859 kernel: loop1: detected capacity change from 0 to 142488
Aug 13 07:08:59.029724 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:08:59.032122 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:08:59.059744 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 07:08:59.102885 systemd-tmpfiles[1137]: ACLs are not supported, ignoring.
Aug 13 07:08:59.102906 systemd-tmpfiles[1137]: ACLs are not supported, ignoring.
Aug 13 07:08:59.113374 kernel: loop2: detected capacity change from 0 to 140768
Aug 13 07:08:59.119499 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:08:59.129542 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:08:59.148926 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:08:59.196261 kernel: loop3: detected capacity change from 0 to 229808
Aug 13 07:08:59.227012 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:08:59.248481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:08:59.275235 kernel: loop4: detected capacity change from 0 to 8
Aug 13 07:08:59.280239 kernel: loop5: detected capacity change from 0 to 142488
Aug 13 07:08:59.313227 kernel: loop6: detected capacity change from 0 to 140768
Aug 13 07:08:59.324920 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Aug 13 07:08:59.326413 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Aug 13 07:08:59.342216 kernel: loop7: detected capacity change from 0 to 229808
Aug 13 07:08:59.345265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:08:59.371618 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Aug 13 07:08:59.374101 (sd-merge)[1180]: Merged extensions into '/usr'.
Aug 13 07:08:59.386027 systemd[1]: Reloading requested from client PID 1136 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:08:59.386200 systemd[1]: Reloading...
Aug 13 07:08:59.581233 zram_generator::config[1208]: No configuration found.
Aug 13 07:08:59.864209 ldconfig[1132]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:08:59.928799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:09:00.019120 systemd[1]: Reloading finished in 632 ms.
Aug 13 07:09:00.087021 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:09:00.091948 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:09:00.109641 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:09:00.123552 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:09:00.138404 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:09:00.138438 systemd[1]: Reloading...
Aug 13 07:09:00.197697 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:09:00.198132 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:09:00.202959 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:09:00.203341 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Aug 13 07:09:00.203441 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Aug 13 07:09:00.216361 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:09:00.216383 systemd-tmpfiles[1252]: Skipping /boot
Aug 13 07:09:00.269147 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:09:00.269437 systemd-tmpfiles[1252]: Skipping /boot
Aug 13 07:09:00.309224 zram_generator::config[1281]: No configuration found.
Aug 13 07:09:00.539483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:09:00.625925 systemd[1]: Reloading finished in 487 ms.
Aug 13 07:09:00.653060 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:09:00.671249 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:09:00.692649 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:09:00.704841 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:09:00.716551 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:09:00.724567 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:09:00.735627 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:09:00.752483 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:09:00.765068 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:00.766416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:09:00.785675 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:09:00.799676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:09:00.830989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:09:00.833747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:09:00.834000 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:00.854397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:00.854708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:09:00.855000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:09:00.876708 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:09:00.881323 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:00.907334 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:09:00.942021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:09:00.942316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:09:00.945117 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:09:00.946518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:09:00.949152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:09:00.950378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:09:00.961374 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:09:00.972845 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:09:00.975865 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:09:00.986658 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:00.987744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:09:01.000607 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:09:01.002663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:09:01.002752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:09:01.002844 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:09:01.012663 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 07:09:01.023313 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:09:01.025347 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:09:01.025408 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:01.026365 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Aug 13 07:09:01.050757 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:09:01.051437 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:09:01.085999 augenrules[1359]: No rules
Aug 13 07:09:01.087446 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:09:01.092368 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:09:01.097001 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:09:01.101637 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:09:01.112474 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:09:01.280430 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 07:09:01.282011 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:09:01.393069 systemd-resolved[1330]: Positive Trust Anchors:
Aug 13 07:09:01.393903 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:09:01.393966 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:09:01.405385 systemd-resolved[1330]: Using system hostname 'ci-4081.3.5-0-ae45d59eaf'.
Aug 13 07:09:01.410840 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:09:01.411882 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:09:01.412977 systemd-networkd[1375]: lo: Link UP
Aug 13 07:09:01.412984 systemd-networkd[1375]: lo: Gained carrier
Aug 13 07:09:01.419490 systemd-networkd[1375]: Enumeration completed
Aug 13 07:09:01.419683 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:09:01.421630 systemd[1]: Reached target network.target - Network.
Aug 13 07:09:01.428777 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:09:01.490460 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 07:09:01.548612 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Aug 13 07:09:01.551583 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:01.551852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:09:01.560152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:09:01.574566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:09:01.586532 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:09:01.589637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:09:01.589703 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:09:01.589731 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:09:01.595218 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 07:09:01.603771 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:09:01.637238 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1378)
Aug 13 07:09:01.687140 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:09:01.687883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:09:01.693718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:09:01.704420 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:09:01.705956 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:09:01.706264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:09:01.720279 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Aug 13 07:09:01.721346 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:09:01.721683 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:09:01.778626 kernel: ISO 9660 Extensions: RRIP_1991A
Aug 13 07:09:01.784151 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Aug 13 07:09:01.794340 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 07:09:01.836232 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Aug 13 07:09:01.839644 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Aug 13 07:09:01.848287 kernel: Console: switching to colour dummy device 80x25
Aug 13 07:09:01.851681 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Aug 13 07:09:01.851783 kernel: [drm] features: -context_init
Aug 13 07:09:01.858384 systemd-networkd[1375]: eth1: Configuring with /run/systemd/network/10-a2:64:f3:d5:16:3c.network.
Aug 13 07:09:01.860838 systemd-networkd[1375]: eth1: Link UP
Aug 13 07:09:01.860855 systemd-networkd[1375]: eth1: Gained carrier
Aug 13 07:09:01.867627 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Aug 13 07:09:01.877324 kernel: [drm] number of scanouts: 1
Aug 13 07:09:01.877438 kernel: [drm] number of cap sets: 0
Aug 13 07:09:01.895237 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:09:01.936231 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Aug 13 07:09:01.946017 systemd-networkd[1375]: eth0: Configuring with /run/systemd/network/10-66:8c:be:0a:be:e7.network.
Aug 13 07:09:01.948719 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Aug 13 07:09:01.951423 systemd-networkd[1375]: eth0: Link UP
Aug 13 07:09:01.951439 systemd-networkd[1375]: eth0: Gained carrier
Aug 13 07:09:01.952342 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Aug 13 07:09:01.956158 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 07:09:01.954706 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Aug 13 07:09:01.972062 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Aug 13 07:09:01.996795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:09:02.013622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:09:02.103042 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:09:02.108085 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:09:02.109345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:09:02.135492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:09:02.163338 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:09:02.164953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:09:02.183452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:09:02.186397 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:09:02.246410 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:09:02.275314 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:09:02.290250 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:09:02.320214 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:09:02.324342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:09:02.371105 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:09:02.371817 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:09:02.371964 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:09:02.372168 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:09:02.373099 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:09:02.373795 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:09:02.374338 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:09:02.374491 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:09:02.374595 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:09:02.374661 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:09:02.375688 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:09:02.378124 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:09:02.381718 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:09:02.392943 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:09:02.401624 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:09:02.404111 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:09:02.408577 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:09:02.410830 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:09:02.412010 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:09:02.412064 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:09:02.414427 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:09:02.423521 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:09:02.439483 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 07:09:02.446323 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:09:02.455424 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:09:02.463573 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:09:02.467078 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:09:02.477886 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:09:02.487418 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:09:02.506536 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:09:02.513918 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:09:02.537559 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:09:02.545468 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:09:02.546279 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:09:02.550666 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:09:02.561523 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:09:02.562632 jq[1440]: false
Aug 13 07:09:02.572072 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:09:02.585734 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:09:02.586070 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:09:02.591483 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:09:02.591851 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found loop4
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found loop5
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found loop6
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found loop7
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found vda
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found vda1
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found vda2
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found vda3
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found usr
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found vda4
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found vda6
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found vda7
Aug 13 07:09:02.616675 extend-filesystems[1443]: Found vda9
Aug 13 07:09:02.616675 extend-filesystems[1443]: Checking size of /dev/vda9
Aug 13 07:09:02.804258 coreos-metadata[1438]: Aug 13 07:09:02.710 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:09:02.804258 coreos-metadata[1438]: Aug 13 07:09:02.763 INFO Fetch successful
Aug 13 07:09:02.703732 dbus-daemon[1439]: [system] SELinux support is enabled
Aug 13 07:09:02.647034 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:09:02.817133 extend-filesystems[1443]: Resized partition /dev/vda9
Aug 13 07:09:02.829308 tar[1459]: linux-amd64/LICENSE
Aug 13 07:09:02.829308 tar[1459]: linux-amd64/helm
Aug 13 07:09:02.649118 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:09:02.830117 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024)
Aug 13 07:09:02.850472 jq[1457]: true
Aug 13 07:09:02.856115 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Aug 13 07:09:02.704078 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:09:02.734257 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 07:09:02.734345 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 07:09:02.777424 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 07:09:02.777620 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Aug 13 07:09:02.864447 jq[1469]: true
Aug 13 07:09:02.777669 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 07:09:02.820191 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:09:02.897838 update_engine[1455]: I20250813 07:09:02.892769 1455 main.cc:92] Flatcar Update Engine starting
Aug 13 07:09:02.938433 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 07:09:02.956254 update_engine[1455]: I20250813 07:09:02.950650 1455 update_check_scheduler.cc:74] Next update check in 7m52s
Aug 13 07:09:02.960770 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 07:09:02.972245 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 07:09:02.975533 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 07:09:03.048388 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1388)
Aug 13 07:09:03.102240 systemd-logind[1453]: New seat seat0.
Aug 13 07:09:03.115744 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 07:09:03.115783 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:09:03.117559 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:09:03.218423 systemd-networkd[1375]: eth1: Gained IPv6LL
Aug 13 07:09:03.220481 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Aug 13 07:09:03.230562 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 07:09:03.237352 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 07:09:03.251395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:09:03.262597 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 07:09:03.286735 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Aug 13 07:09:03.328266 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 07:09:03.328266 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 8
Aug 13 07:09:03.328266 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Aug 13 07:09:03.332053 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:09:03.359151 extend-filesystems[1443]: Resized filesystem in /dev/vda9
Aug 13 07:09:03.359151 extend-filesystems[1443]: Found vdb
Aug 13 07:09:03.380378 bash[1501]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:09:03.335957 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:09:03.353161 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 07:09:03.370535 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 07:09:03.394435 systemd[1]: Starting sshkeys.service...
Aug 13 07:09:03.451895 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 07:09:03.469208 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 07:09:03.497129 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 07:09:03.553381 coreos-metadata[1521]: Aug 13 07:09:03.551 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:09:03.579278 coreos-metadata[1521]: Aug 13 07:09:03.577 INFO Fetch successful
Aug 13 07:09:03.616133 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 07:09:03.623881 unknown[1521]: wrote ssh authorized keys file for user: core
Aug 13 07:09:03.666635 systemd-networkd[1375]: eth0: Gained IPv6LL
Aug 13 07:09:03.667374 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Aug 13 07:09:03.736307 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:09:03.739280 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 07:09:03.747584 systemd[1]: Finished sshkeys.service.
Aug 13 07:09:03.775748 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 07:09:03.800704 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 07:09:03.863722 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 07:09:03.864069 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 07:09:03.883966 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 07:09:03.917420 containerd[1471]: time="2025-08-13T07:09:03.916237795Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 07:09:03.963818 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 07:09:03.983507 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 07:09:03.998420 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 07:09:04.001984 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 07:09:04.041059 containerd[1471]: time="2025-08-13T07:09:04.040975720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:04.044479 containerd[1471]: time="2025-08-13T07:09:04.044415918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:09:04.044687 containerd[1471]: time="2025-08-13T07:09:04.044661196Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 07:09:04.044761 containerd[1471]: time="2025-08-13T07:09:04.044746028Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 07:09:04.045068 containerd[1471]: time="2025-08-13T07:09:04.045036695Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 07:09:04.045430 containerd[1471]: time="2025-08-13T07:09:04.045158087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:04.045430 containerd[1471]: time="2025-08-13T07:09:04.045307302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:09:04.045430 containerd[1471]: time="2025-08-13T07:09:04.045331909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:04.047569 containerd[1471]: time="2025-08-13T07:09:04.046901044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:09:04.047569 containerd[1471]: time="2025-08-13T07:09:04.046952035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:04.047569 containerd[1471]: time="2025-08-13T07:09:04.046977631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:09:04.047569 containerd[1471]: time="2025-08-13T07:09:04.046993833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:04.047569 containerd[1471]: time="2025-08-13T07:09:04.047154440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:04.047907 containerd[1471]: time="2025-08-13T07:09:04.047878462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:09:04.048195 containerd[1471]: time="2025-08-13T07:09:04.048151616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:09:04.048275 containerd[1471]: time="2025-08-13T07:09:04.048258464Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 07:09:04.048503 containerd[1471]: time="2025-08-13T07:09:04.048481343Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 07:09:04.048629 containerd[1471]: time="2025-08-13T07:09:04.048613416Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 07:09:04.059113 containerd[1471]: time="2025-08-13T07:09:04.058934910Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 07:09:04.059113 containerd[1471]: time="2025-08-13T07:09:04.059054773Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 07:09:04.059719 containerd[1471]: time="2025-08-13T07:09:04.059320193Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 07:09:04.059719 containerd[1471]: time="2025-08-13T07:09:04.059358123Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 07:09:04.059719 containerd[1471]: time="2025-08-13T07:09:04.059395539Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 07:09:04.059719 containerd[1471]: time="2025-08-13T07:09:04.059625002Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061205792Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061487696Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061528853Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061554101Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061593322Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061617013Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061636692Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061701051Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061728574Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061743504Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061757666Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061771881Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061807001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062160 containerd[1471]: time="2025-08-13T07:09:04.061844452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.061858827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.061874211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.061887300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.061915645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.061932775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.061947184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.061960699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.061983812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.061997225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.062008944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.062024633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.062040790Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.062064188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.062076977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 07:09:04.062536 containerd[1471]: time="2025-08-13T07:09:04.062088026Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 07:09:04.062954 containerd[1471]: time="2025-08-13T07:09:04.062811444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 07:09:04.062985 containerd[1471]: time="2025-08-13T07:09:04.062942962Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:09:04.062985 containerd[1471]: time="2025-08-13T07:09:04.062968541Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:09:04.063023 containerd[1471]: time="2025-08-13T07:09:04.062991879Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:09:04.063023 containerd[1471]: time="2025-08-13T07:09:04.063011750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:09:04.063066 containerd[1471]: time="2025-08-13T07:09:04.063035991Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:09:04.063088 containerd[1471]: time="2025-08-13T07:09:04.063070969Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:09:04.063108 containerd[1471]: time="2025-08-13T07:09:04.063091997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:09:04.065551 containerd[1471]: time="2025-08-13T07:09:04.063648188Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:09:04.065551 containerd[1471]: time="2025-08-13T07:09:04.063894803Z" level=info msg="Connect containerd service" Aug 13 07:09:04.065551 containerd[1471]: time="2025-08-13T07:09:04.063999280Z" level=info msg="using legacy CRI server" Aug 13 07:09:04.065551 containerd[1471]: time="2025-08-13T07:09:04.064015815Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:09:04.065551 containerd[1471]: time="2025-08-13T07:09:04.064238417Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:09:04.065992 containerd[1471]: time="2025-08-13T07:09:04.065608384Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:09:04.069402 containerd[1471]: time="2025-08-13T07:09:04.066331797Z" level=info msg="Start subscribing containerd event" Aug 13 07:09:04.069402 containerd[1471]: time="2025-08-13T07:09:04.066472508Z" level=info msg="Start recovering state" Aug 13 07:09:04.069402 containerd[1471]: time="2025-08-13T07:09:04.066576898Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Aug 13 07:09:04.069402 containerd[1471]: time="2025-08-13T07:09:04.066580219Z" level=info msg="Start event monitor" Aug 13 07:09:04.069402 containerd[1471]: time="2025-08-13T07:09:04.066639988Z" level=info msg="Start snapshots syncer" Aug 13 07:09:04.069402 containerd[1471]: time="2025-08-13T07:09:04.066656482Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:09:04.069402 containerd[1471]: time="2025-08-13T07:09:04.066669744Z" level=info msg="Start streaming server" Aug 13 07:09:04.069402 containerd[1471]: time="2025-08-13T07:09:04.066682146Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:09:04.069402 containerd[1471]: time="2025-08-13T07:09:04.066801910Z" level=info msg="containerd successfully booted in 0.152803s" Aug 13 07:09:04.066965 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:09:04.650533 tar[1459]: linux-amd64/README.md Aug 13 07:09:04.679563 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:09:05.210393 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:09:05.219672 systemd[1]: Started sshd@0-24.199.106.199:22-139.178.89.65:36550.service - OpenSSH per-connection server daemon (139.178.89.65:36550). Aug 13 07:09:05.297941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:05.304997 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:09:05.308786 systemd[1]: Startup finished in 1.417s (kernel) + 6.592s (initrd) + 7.978s (userspace) = 15.988s. 
Aug 13 07:09:05.310164 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:09:05.375843 sshd[1556]: Accepted publickey for core from 139.178.89.65 port 36550 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:05.377219 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:05.400521 systemd-logind[1453]: New session 1 of user core. Aug 13 07:09:05.401752 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:09:05.410191 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:09:05.454100 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:09:05.464758 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:09:05.472163 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:09:05.632758 systemd[1570]: Queued start job for default target default.target. Aug 13 07:09:05.638141 systemd[1570]: Created slice app.slice - User Application Slice. Aug 13 07:09:05.638222 systemd[1570]: Reached target paths.target - Paths. Aug 13 07:09:05.638242 systemd[1570]: Reached target timers.target - Timers. Aug 13 07:09:05.640584 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:09:05.681568 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:09:05.683962 systemd[1570]: Reached target sockets.target - Sockets. Aug 13 07:09:05.684011 systemd[1570]: Reached target basic.target - Basic System. Aug 13 07:09:05.684107 systemd[1570]: Reached target default.target - Main User Target. Aug 13 07:09:05.684160 systemd[1570]: Startup finished in 201ms. Aug 13 07:09:05.684423 systemd[1]: Started user@500.service - User Manager for UID 500. 
Aug 13 07:09:05.689714 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:09:05.772899 systemd[1]: Started sshd@1-24.199.106.199:22-139.178.89.65:36558.service - OpenSSH per-connection server daemon (139.178.89.65:36558). Aug 13 07:09:05.866723 sshd[1585]: Accepted publickey for core from 139.178.89.65 port 36558 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:05.870649 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:05.879280 systemd-logind[1453]: New session 2 of user core. Aug 13 07:09:05.884538 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:09:05.957725 sshd[1585]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:05.967490 systemd[1]: sshd@1-24.199.106.199:22-139.178.89.65:36558.service: Deactivated successfully. Aug 13 07:09:05.972246 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:09:05.976141 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:09:05.984070 systemd[1]: Started sshd@2-24.199.106.199:22-139.178.89.65:36564.service - OpenSSH per-connection server daemon (139.178.89.65:36564). Aug 13 07:09:05.987663 systemd-logind[1453]: Removed session 2. Aug 13 07:09:06.033589 sshd[1592]: Accepted publickey for core from 139.178.89.65 port 36564 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:06.036941 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:06.047532 systemd-logind[1453]: New session 3 of user core. Aug 13 07:09:06.050553 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:09:06.114160 sshd[1592]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:06.126590 systemd[1]: sshd@2-24.199.106.199:22-139.178.89.65:36564.service: Deactivated successfully. Aug 13 07:09:06.130562 systemd[1]: session-3.scope: Deactivated successfully. 
Aug 13 07:09:06.132082 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:09:06.144243 systemd[1]: Started sshd@3-24.199.106.199:22-139.178.89.65:36572.service - OpenSSH per-connection server daemon (139.178.89.65:36572). Aug 13 07:09:06.147834 systemd-logind[1453]: Removed session 3. Aug 13 07:09:06.202970 sshd[1599]: Accepted publickey for core from 139.178.89.65 port 36572 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:06.204453 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:06.215359 systemd-logind[1453]: New session 4 of user core. Aug 13 07:09:06.226909 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:09:06.263600 kubelet[1562]: E0813 07:09:06.263443 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:09:06.271495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:09:06.273098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:09:06.274354 systemd[1]: kubelet.service: Consumed 1.716s CPU time. Aug 13 07:09:06.301998 sshd[1599]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:06.311471 systemd[1]: sshd@3-24.199.106.199:22-139.178.89.65:36572.service: Deactivated successfully. Aug 13 07:09:06.313815 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:09:06.317794 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:09:06.326258 systemd[1]: Started sshd@4-24.199.106.199:22-139.178.89.65:36574.service - OpenSSH per-connection server daemon (139.178.89.65:36574). Aug 13 07:09:06.329059 systemd-logind[1453]: Removed session 4. 
Aug 13 07:09:06.388272 sshd[1608]: Accepted publickey for core from 139.178.89.65 port 36574 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:06.391069 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:06.403553 systemd-logind[1453]: New session 5 of user core. Aug 13 07:09:06.418368 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:09:06.502658 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:09:06.503045 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:06.518817 sudo[1611]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:06.526079 sshd[1608]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:06.537565 systemd[1]: sshd@4-24.199.106.199:22-139.178.89.65:36574.service: Deactivated successfully. Aug 13 07:09:06.539744 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:09:06.542059 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:09:06.548936 systemd[1]: Started sshd@5-24.199.106.199:22-139.178.89.65:36578.service - OpenSSH per-connection server daemon (139.178.89.65:36578). Aug 13 07:09:06.550889 systemd-logind[1453]: Removed session 5. Aug 13 07:09:06.615286 sshd[1616]: Accepted publickey for core from 139.178.89.65 port 36578 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:06.617663 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:06.625005 systemd-logind[1453]: New session 6 of user core. Aug 13 07:09:06.631605 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 13 07:09:06.696457 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:09:06.696892 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:06.703275 sudo[1620]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:06.711927 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:09:06.712654 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:06.734390 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:09:06.736884 auditctl[1623]: No rules Aug 13 07:09:06.737333 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:09:06.737769 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:09:06.744831 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:09:06.794138 augenrules[1641]: No rules Aug 13 07:09:06.796101 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:09:06.797938 sudo[1619]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:06.802277 sshd[1616]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:06.812203 systemd[1]: sshd@5-24.199.106.199:22-139.178.89.65:36578.service: Deactivated successfully. Aug 13 07:09:06.814734 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:09:06.817238 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:09:06.840812 systemd[1]: Started sshd@6-24.199.106.199:22-139.178.89.65:36594.service - OpenSSH per-connection server daemon (139.178.89.65:36594). Aug 13 07:09:06.843601 systemd-logind[1453]: Removed session 6. 
Aug 13 07:09:06.890992 sshd[1649]: Accepted publickey for core from 139.178.89.65 port 36594 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:06.893663 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:06.908809 systemd-logind[1453]: New session 7 of user core. Aug 13 07:09:06.925642 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:09:06.986630 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:09:06.986989 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:09:07.591274 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:09:07.603906 (dockerd)[1668]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:09:08.209744 dockerd[1668]: time="2025-08-13T07:09:08.209135286Z" level=info msg="Starting up" Aug 13 07:09:08.399116 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3207444778-merged.mount: Deactivated successfully. Aug 13 07:09:08.549867 dockerd[1668]: time="2025-08-13T07:09:08.549743489Z" level=info msg="Loading containers: start." Aug 13 07:09:08.721207 kernel: Initializing XFRM netlink socket Aug 13 07:09:08.759719 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Aug 13 07:09:08.760379 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Aug 13 07:09:08.772855 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Aug 13 07:09:08.845360 systemd-networkd[1375]: docker0: Link UP Aug 13 07:09:08.845860 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection. Aug 13 07:09:08.871991 dockerd[1668]: time="2025-08-13T07:09:08.871912909Z" level=info msg="Loading containers: done." 
Aug 13 07:09:08.897838 dockerd[1668]: time="2025-08-13T07:09:08.897759076Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:09:08.898081 dockerd[1668]: time="2025-08-13T07:09:08.897938970Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:09:08.898136 dockerd[1668]: time="2025-08-13T07:09:08.898115824Z" level=info msg="Daemon has completed initialization" Aug 13 07:09:08.951858 dockerd[1668]: time="2025-08-13T07:09:08.951702870Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:09:08.952358 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:09:09.787116 containerd[1471]: time="2025-08-13T07:09:09.786719566Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 07:09:10.420322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365604254.mount: Deactivated successfully. 
Aug 13 07:09:12.103304 containerd[1471]: time="2025-08-13T07:09:12.103224360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:12.106221 containerd[1471]: time="2025-08-13T07:09:12.104910374Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078237" Aug 13 07:09:12.108203 containerd[1471]: time="2025-08-13T07:09:12.108113639Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:12.113948 containerd[1471]: time="2025-08-13T07:09:12.113885744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:12.115745 containerd[1471]: time="2025-08-13T07:09:12.115676433Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 2.328888849s" Aug 13 07:09:12.115745 containerd[1471]: time="2025-08-13T07:09:12.115741535Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 07:09:12.117114 containerd[1471]: time="2025-08-13T07:09:12.116905970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 07:09:13.774281 containerd[1471]: time="2025-08-13T07:09:13.774078260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:13.776023 containerd[1471]: time="2025-08-13T07:09:13.775807913Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019361" Aug 13 07:09:13.777535 containerd[1471]: time="2025-08-13T07:09:13.777451191Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:13.781491 containerd[1471]: time="2025-08-13T07:09:13.781376578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:13.783135 containerd[1471]: time="2025-08-13T07:09:13.782936112Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 1.665982123s" Aug 13 07:09:13.783135 containerd[1471]: time="2025-08-13T07:09:13.783001768Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 07:09:13.784539 containerd[1471]: time="2025-08-13T07:09:13.784386802Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 07:09:15.380632 containerd[1471]: time="2025-08-13T07:09:15.380551019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:15.382444 containerd[1471]: time="2025-08-13T07:09:15.382366770Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155013" Aug 13 07:09:15.385612 containerd[1471]: time="2025-08-13T07:09:15.385364984Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:15.393771 containerd[1471]: time="2025-08-13T07:09:15.393604347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:15.394885 containerd[1471]: time="2025-08-13T07:09:15.394685124Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 1.609960521s" Aug 13 07:09:15.394885 containerd[1471]: time="2025-08-13T07:09:15.394750040Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 07:09:15.396161 containerd[1471]: time="2025-08-13T07:09:15.395778875Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 07:09:16.522237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:09:16.529893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:16.763579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 07:09:16.775959 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:09:16.886835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748407576.mount: Deactivated successfully. Aug 13 07:09:16.889153 kubelet[1888]: E0813 07:09:16.889085 1888 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:09:16.898952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:09:16.899385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:09:17.686453 containerd[1471]: time="2025-08-13T07:09:17.685333660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:17.687531 containerd[1471]: time="2025-08-13T07:09:17.687438477Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666" Aug 13 07:09:17.688531 containerd[1471]: time="2025-08-13T07:09:17.688448623Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:17.691235 containerd[1471]: time="2025-08-13T07:09:17.691146659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:17.692652 containerd[1471]: time="2025-08-13T07:09:17.692451358Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id 
\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 2.296623692s" Aug 13 07:09:17.692652 containerd[1471]: time="2025-08-13T07:09:17.692511711Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 07:09:17.693337 containerd[1471]: time="2025-08-13T07:09:17.693273150Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 07:09:17.695862 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Aug 13 07:09:18.272782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618250189.mount: Deactivated successfully. Aug 13 07:09:19.630377 containerd[1471]: time="2025-08-13T07:09:19.630295953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:19.632582 containerd[1471]: time="2025-08-13T07:09:19.632493259Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 07:09:19.633698 containerd[1471]: time="2025-08-13T07:09:19.633141535Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:19.640309 containerd[1471]: time="2025-08-13T07:09:19.640164358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:19.642398 containerd[1471]: time="2025-08-13T07:09:19.642136505Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.948390882s" Aug 13 07:09:19.642398 containerd[1471]: time="2025-08-13T07:09:19.642243923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 07:09:19.643633 containerd[1471]: time="2025-08-13T07:09:19.643092842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:09:20.223154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3002049835.mount: Deactivated successfully. Aug 13 07:09:20.230424 containerd[1471]: time="2025-08-13T07:09:20.230313372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:20.232208 containerd[1471]: time="2025-08-13T07:09:20.232127578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:09:20.233921 containerd[1471]: time="2025-08-13T07:09:20.233831804Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:20.239309 containerd[1471]: time="2025-08-13T07:09:20.239227615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:20.240771 containerd[1471]: time="2025-08-13T07:09:20.240073186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 596.93613ms" Aug 13 07:09:20.240771 containerd[1471]: time="2025-08-13T07:09:20.240118063Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:09:20.241726 containerd[1471]: time="2025-08-13T07:09:20.241655359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 07:09:20.752659 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Aug 13 07:09:20.766928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936586718.mount: Deactivated successfully. Aug 13 07:09:24.076407 containerd[1471]: time="2025-08-13T07:09:24.076263777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:24.085873 containerd[1471]: time="2025-08-13T07:09:24.085743968Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Aug 13 07:09:24.088108 containerd[1471]: time="2025-08-13T07:09:24.087970798Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:24.095948 containerd[1471]: time="2025-08-13T07:09:24.095840218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:24.098863 containerd[1471]: time="2025-08-13T07:09:24.097267205Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id 
\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.855549277s" Aug 13 07:09:24.098863 containerd[1471]: time="2025-08-13T07:09:24.097453265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 07:09:27.012593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 07:09:27.024462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:27.304605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:27.313794 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:09:27.376670 kubelet[2039]: E0813 07:09:27.376615 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:09:27.379083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:09:27.379443 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:09:29.046835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:29.054622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:29.105627 systemd[1]: Reloading requested from client PID 2053 ('systemctl') (unit session-7.scope)... Aug 13 07:09:29.105651 systemd[1]: Reloading... Aug 13 07:09:29.252210 zram_generator::config[2092]: No configuration found. 
Aug 13 07:09:29.453676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:09:29.564144 systemd[1]: Reloading finished in 457 ms. Aug 13 07:09:29.632003 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:09:29.632084 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:09:29.632412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:29.643643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:29.859564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:29.863149 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:09:29.930723 kubelet[2147]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:09:29.930723 kubelet[2147]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:09:29.930723 kubelet[2147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:09:29.932415 kubelet[2147]: I0813 07:09:29.932312 2147 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:09:30.273000 kubelet[2147]: I0813 07:09:30.272908 2147 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:09:30.273000 kubelet[2147]: I0813 07:09:30.272956 2147 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:09:30.273476 kubelet[2147]: I0813 07:09:30.273347 2147 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:09:30.308223 kubelet[2147]: I0813 07:09:30.308028 2147 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:09:30.309662 kubelet[2147]: E0813 07:09:30.309577 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://24.199.106.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.199.106.199:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 07:09:30.322896 kubelet[2147]: E0813 07:09:30.322411 2147 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:09:30.322896 kubelet[2147]: I0813 07:09:30.322457 2147 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:09:30.331134 kubelet[2147]: I0813 07:09:30.331070 2147 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:09:30.333017 kubelet[2147]: I0813 07:09:30.332920 2147 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:09:30.337297 kubelet[2147]: I0813 07:09:30.333016 2147 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-0-ae45d59eaf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:09:30.337583 kubelet[2147]: I0813 07:09:30.337340 2147 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 
07:09:30.337583 kubelet[2147]: I0813 07:09:30.337369 2147 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:09:30.337671 kubelet[2147]: I0813 07:09:30.337594 2147 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:30.340783 kubelet[2147]: I0813 07:09:30.340557 2147 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:09:30.340783 kubelet[2147]: I0813 07:09:30.340610 2147 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:09:30.340783 kubelet[2147]: I0813 07:09:30.340651 2147 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:09:30.340783 kubelet[2147]: I0813 07:09:30.340669 2147 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:09:30.355115 kubelet[2147]: E0813 07:09:30.354639 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://24.199.106.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-0-ae45d59eaf&limit=500&resourceVersion=0\": dial tcp 24.199.106.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:09:30.358983 kubelet[2147]: E0813 07:09:30.357782 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://24.199.106.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.106.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 07:09:30.358983 kubelet[2147]: I0813 07:09:30.358008 2147 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:09:30.358983 kubelet[2147]: I0813 07:09:30.358840 2147 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 
07:09:30.360129 kubelet[2147]: W0813 07:09:30.360105 2147 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:09:30.365158 kubelet[2147]: I0813 07:09:30.365125 2147 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:09:30.366070 kubelet[2147]: I0813 07:09:30.366047 2147 server.go:1289] "Started kubelet" Aug 13 07:09:30.377567 kubelet[2147]: E0813 07:09:30.372074 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.106.199:6443/api/v1/namespaces/default/events\": dial tcp 24.199.106.199:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-0-ae45d59eaf.185b41ee06f2c7c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-0-ae45d59eaf,UID:ci-4081.3.5-0-ae45d59eaf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-0-ae45d59eaf,},FirstTimestamp:2025-08-13 07:09:30.365536192 +0000 UTC m=+0.496159515,LastTimestamp:2025-08-13 07:09:30.365536192 +0000 UTC m=+0.496159515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-0-ae45d59eaf,}" Aug 13 07:09:30.377567 kubelet[2147]: I0813 07:09:30.375487 2147 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:09:30.377567 kubelet[2147]: I0813 07:09:30.376932 2147 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:09:30.378382 kubelet[2147]: I0813 07:09:30.378077 2147 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:09:30.384903 kubelet[2147]: I0813 07:09:30.384797 2147 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:09:30.385442 kubelet[2147]: I0813 07:09:30.385383 2147 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:09:30.386042 kubelet[2147]: I0813 07:09:30.386010 2147 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:09:30.389834 kubelet[2147]: I0813 07:09:30.389756 2147 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:09:30.390152 kubelet[2147]: E0813 07:09:30.390112 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" Aug 13 07:09:30.392634 kubelet[2147]: I0813 07:09:30.392590 2147 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:09:30.392804 kubelet[2147]: I0813 07:09:30.392709 2147 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:09:30.393986 kubelet[2147]: E0813 07:09:30.393767 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.106.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-0-ae45d59eaf?timeout=10s\": dial tcp 24.199.106.199:6443: connect: connection refused" interval="200ms" Aug 13 07:09:30.395555 kubelet[2147]: I0813 07:09:30.394135 2147 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:09:30.395555 kubelet[2147]: I0813 07:09:30.394345 2147 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:09:30.396095 kubelet[2147]: E0813 07:09:30.395982 2147 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:09:30.398117 kubelet[2147]: E0813 07:09:30.397105 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://24.199.106.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.106.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:09:30.399498 kubelet[2147]: I0813 07:09:30.399337 2147 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:09:30.433246 kubelet[2147]: I0813 07:09:30.432717 2147 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:09:30.433246 kubelet[2147]: I0813 07:09:30.432750 2147 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:09:30.433246 kubelet[2147]: I0813 07:09:30.432777 2147 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:30.439892 kubelet[2147]: I0813 07:09:30.439803 2147 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:09:30.442210 kubelet[2147]: I0813 07:09:30.441860 2147 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:09:30.442210 kubelet[2147]: I0813 07:09:30.441901 2147 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:09:30.442210 kubelet[2147]: I0813 07:09:30.441937 2147 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 07:09:30.442210 kubelet[2147]: I0813 07:09:30.441952 2147 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:09:30.442210 kubelet[2147]: E0813 07:09:30.442019 2147 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:09:30.443812 kubelet[2147]: I0813 07:09:30.443291 2147 policy_none.go:49] "None policy: Start" Aug 13 07:09:30.443812 kubelet[2147]: I0813 07:09:30.443354 2147 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:09:30.443812 kubelet[2147]: I0813 07:09:30.443380 2147 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:09:30.452347 kubelet[2147]: E0813 07:09:30.451944 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://24.199.106.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.106.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:09:30.458704 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:09:30.475350 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:09:30.480613 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 07:09:30.488793 kubelet[2147]: E0813 07:09:30.488650 2147 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:09:30.489710 kubelet[2147]: I0813 07:09:30.489586 2147 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:09:30.489861 kubelet[2147]: I0813 07:09:30.489620 2147 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:09:30.490865 kubelet[2147]: I0813 07:09:30.490108 2147 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:09:30.492369 kubelet[2147]: E0813 07:09:30.492335 2147 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:09:30.492510 kubelet[2147]: E0813 07:09:30.492404 2147 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-0-ae45d59eaf\" not found" Aug 13 07:09:30.559720 systemd[1]: Created slice kubepods-burstable-pod8c9f8ed727c374fb640f6fa9f519d9f8.slice - libcontainer container kubepods-burstable-pod8c9f8ed727c374fb640f6fa9f519d9f8.slice. Aug 13 07:09:30.574761 kubelet[2147]: E0813 07:09:30.574699 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.581111 systemd[1]: Created slice kubepods-burstable-pod045543ba3e6b8d88bd8f6b20b618889e.slice - libcontainer container kubepods-burstable-pod045543ba3e6b8d88bd8f6b20b618889e.slice. 
Aug 13 07:09:30.591959 kubelet[2147]: I0813 07:09:30.591162 2147 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.594327 kubelet[2147]: E0813 07:09:30.594103 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.106.199:6443/api/v1/nodes\": dial tcp 24.199.106.199:6443: connect: connection refused" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.594327 kubelet[2147]: E0813 07:09:30.594255 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.106.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-0-ae45d59eaf?timeout=10s\": dial tcp 24.199.106.199:6443: connect: connection refused" interval="400ms" Aug 13 07:09:30.595164 kubelet[2147]: E0813 07:09:30.594512 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.599559 systemd[1]: Created slice kubepods-burstable-podd499b9d98d44f24a1ebc61f401214b9c.slice - libcontainer container kubepods-burstable-podd499b9d98d44f24a1ebc61f401214b9c.slice. 
Aug 13 07:09:30.602403 kubelet[2147]: E0813 07:09:30.602345 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.694889 kubelet[2147]: I0813 07:09:30.694816 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c9f8ed727c374fb640f6fa9f519d9f8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" (UID: \"8c9f8ed727c374fb640f6fa9f519d9f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.694889 kubelet[2147]: I0813 07:09:30.694882 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c9f8ed727c374fb640f6fa9f519d9f8-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" (UID: \"8c9f8ed727c374fb640f6fa9f519d9f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.695120 kubelet[2147]: I0813 07:09:30.694930 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c9f8ed727c374fb640f6fa9f519d9f8-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" (UID: \"8c9f8ed727c374fb640f6fa9f519d9f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.695120 kubelet[2147]: I0813 07:09:30.694959 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/045543ba3e6b8d88bd8f6b20b618889e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-0-ae45d59eaf\" (UID: \"045543ba3e6b8d88bd8f6b20b618889e\") " pod="kube-system/kube-scheduler-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.695120 kubelet[2147]: 
I0813 07:09:30.694989 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d499b9d98d44f24a1ebc61f401214b9c-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-0-ae45d59eaf\" (UID: \"d499b9d98d44f24a1ebc61f401214b9c\") " pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.695120 kubelet[2147]: I0813 07:09:30.695014 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d499b9d98d44f24a1ebc61f401214b9c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-0-ae45d59eaf\" (UID: \"d499b9d98d44f24a1ebc61f401214b9c\") " pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.695120 kubelet[2147]: I0813 07:09:30.695043 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d499b9d98d44f24a1ebc61f401214b9c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-0-ae45d59eaf\" (UID: \"d499b9d98d44f24a1ebc61f401214b9c\") " pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.695330 kubelet[2147]: I0813 07:09:30.695071 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c9f8ed727c374fb640f6fa9f519d9f8-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" (UID: \"8c9f8ed727c374fb640f6fa9f519d9f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.695330 kubelet[2147]: I0813 07:09:30.695093 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c9f8ed727c374fb640f6fa9f519d9f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" 
(UID: \"8c9f8ed727c374fb640f6fa9f519d9f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.796627 kubelet[2147]: I0813 07:09:30.796579 2147 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.797128 kubelet[2147]: E0813 07:09:30.797092 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.106.199:6443/api/v1/nodes\": dial tcp 24.199.106.199:6443: connect: connection refused" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:30.876533 kubelet[2147]: E0813 07:09:30.876319 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:30.880576 containerd[1471]: time="2025-08-13T07:09:30.880227974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-0-ae45d59eaf,Uid:8c9f8ed727c374fb640f6fa9f519d9f8,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:30.886044 systemd-resolved[1330]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Aug 13 07:09:30.896665 kubelet[2147]: E0813 07:09:30.896245 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:30.903976 containerd[1471]: time="2025-08-13T07:09:30.903739145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-0-ae45d59eaf,Uid:045543ba3e6b8d88bd8f6b20b618889e,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:30.905005 kubelet[2147]: E0813 07:09:30.904956 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:30.906021 containerd[1471]: time="2025-08-13T07:09:30.905806414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-0-ae45d59eaf,Uid:d499b9d98d44f24a1ebc61f401214b9c,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:30.995226 kubelet[2147]: E0813 07:09:30.995064 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.106.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-0-ae45d59eaf?timeout=10s\": dial tcp 24.199.106.199:6443: connect: connection refused" interval="800ms" Aug 13 07:09:31.199469 kubelet[2147]: I0813 07:09:31.199301 2147 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:31.200049 kubelet[2147]: E0813 07:09:31.199942 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.106.199:6443/api/v1/nodes\": dial tcp 24.199.106.199:6443: connect: connection refused" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:31.420167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967053180.mount: Deactivated successfully. 
Aug 13 07:09:31.431657 containerd[1471]: time="2025-08-13T07:09:31.431583167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:31.434382 containerd[1471]: time="2025-08-13T07:09:31.434294587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:09:31.435482 containerd[1471]: time="2025-08-13T07:09:31.435337261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:31.436697 containerd[1471]: time="2025-08-13T07:09:31.436637952Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:31.438264 containerd[1471]: time="2025-08-13T07:09:31.438163702Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:31.439770 containerd[1471]: time="2025-08-13T07:09:31.439472685Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:09:31.439770 containerd[1471]: time="2025-08-13T07:09:31.439722811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:09:31.446217 containerd[1471]: time="2025-08-13T07:09:31.446118162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:09:31.450927 
containerd[1471]: time="2025-08-13T07:09:31.450755969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.892306ms" Aug 13 07:09:31.457563 containerd[1471]: time="2025-08-13T07:09:31.457486004Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 577.132219ms" Aug 13 07:09:31.457817 containerd[1471]: time="2025-08-13T07:09:31.457780727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.856892ms" Aug 13 07:09:31.577717 kubelet[2147]: E0813 07:09:31.576568 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://24.199.106.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.106.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:09:31.681849 containerd[1471]: time="2025-08-13T07:09:31.681535077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:31.681849 containerd[1471]: time="2025-08-13T07:09:31.681649600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:31.681849 containerd[1471]: time="2025-08-13T07:09:31.681676095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:31.683359 containerd[1471]: time="2025-08-13T07:09:31.681817621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:31.693441 containerd[1471]: time="2025-08-13T07:09:31.692777360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:31.694057 containerd[1471]: time="2025-08-13T07:09:31.693775339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:31.696800 containerd[1471]: time="2025-08-13T07:09:31.694926204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:31.696800 containerd[1471]: time="2025-08-13T07:09:31.695105285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:31.708043 containerd[1471]: time="2025-08-13T07:09:31.706850108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:31.708043 containerd[1471]: time="2025-08-13T07:09:31.706955417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:31.708043 containerd[1471]: time="2025-08-13T07:09:31.706983790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:31.708043 containerd[1471]: time="2025-08-13T07:09:31.707114548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:31.736624 systemd[1]: Started cri-containerd-bcc727c433095ffc81b96bb962b9e0df06aeb75579e8a4bb835e151b7730abca.scope - libcontainer container bcc727c433095ffc81b96bb962b9e0df06aeb75579e8a4bb835e151b7730abca. Aug 13 07:09:31.744503 systemd[1]: Started cri-containerd-ac138c085b4ffa353890078dc4b32dbcc4bec79a08283c5df7988d3453240944.scope - libcontainer container ac138c085b4ffa353890078dc4b32dbcc4bec79a08283c5df7988d3453240944. Aug 13 07:09:31.778622 systemd[1]: Started cri-containerd-d4625b65f385723b4af5c9ad859bcf2fa2c373927c463c4539c5576a77e4c129.scope - libcontainer container d4625b65f385723b4af5c9ad859bcf2fa2c373927c463c4539c5576a77e4c129. Aug 13 07:09:31.796131 kubelet[2147]: E0813 07:09:31.796070 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.106.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-0-ae45d59eaf?timeout=10s\": dial tcp 24.199.106.199:6443: connect: connection refused" interval="1.6s" Aug 13 07:09:31.847386 containerd[1471]: time="2025-08-13T07:09:31.847103244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-0-ae45d59eaf,Uid:045543ba3e6b8d88bd8f6b20b618889e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcc727c433095ffc81b96bb962b9e0df06aeb75579e8a4bb835e151b7730abca\"" Aug 13 07:09:31.855338 kubelet[2147]: E0813 07:09:31.855258 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:31.873566 containerd[1471]: time="2025-08-13T07:09:31.873511958Z" level=info msg="CreateContainer within sandbox 
\"bcc727c433095ffc81b96bb962b9e0df06aeb75579e8a4bb835e151b7730abca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:09:31.889120 kubelet[2147]: E0813 07:09:31.888895 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://24.199.106.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.106.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 07:09:31.894846 containerd[1471]: time="2025-08-13T07:09:31.894353774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-0-ae45d59eaf,Uid:8c9f8ed727c374fb640f6fa9f519d9f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac138c085b4ffa353890078dc4b32dbcc4bec79a08283c5df7988d3453240944\"" Aug 13 07:09:31.896474 kubelet[2147]: E0813 07:09:31.895960 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:31.901421 containerd[1471]: time="2025-08-13T07:09:31.901238257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-0-ae45d59eaf,Uid:d499b9d98d44f24a1ebc61f401214b9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4625b65f385723b4af5c9ad859bcf2fa2c373927c463c4539c5576a77e4c129\"" Aug 13 07:09:31.904346 kubelet[2147]: E0813 07:09:31.904043 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:31.905513 containerd[1471]: time="2025-08-13T07:09:31.905398591Z" level=info msg="CreateContainer within sandbox \"ac138c085b4ffa353890078dc4b32dbcc4bec79a08283c5df7988d3453240944\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" 
Aug 13 07:09:31.910880 containerd[1471]: time="2025-08-13T07:09:31.910641782Z" level=info msg="CreateContainer within sandbox \"bcc727c433095ffc81b96bb962b9e0df06aeb75579e8a4bb835e151b7730abca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ee838506f6a03d8d05d42add20842083bcb3a5fe15599f312aacea2f12e7475\"" Aug 13 07:09:31.912303 containerd[1471]: time="2025-08-13T07:09:31.912058551Z" level=info msg="CreateContainer within sandbox \"d4625b65f385723b4af5c9ad859bcf2fa2c373927c463c4539c5576a77e4c129\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:09:31.913213 containerd[1471]: time="2025-08-13T07:09:31.912530371Z" level=info msg="StartContainer for \"9ee838506f6a03d8d05d42add20842083bcb3a5fe15599f312aacea2f12e7475\"" Aug 13 07:09:31.925524 containerd[1471]: time="2025-08-13T07:09:31.925464185Z" level=info msg="CreateContainer within sandbox \"ac138c085b4ffa353890078dc4b32dbcc4bec79a08283c5df7988d3453240944\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9765910e1f8a29b83acb3d972ad22cf5b666979f1274027fc5b82d74f69ad9f\"" Aug 13 07:09:31.926653 containerd[1471]: time="2025-08-13T07:09:31.926614928Z" level=info msg="StartContainer for \"f9765910e1f8a29b83acb3d972ad22cf5b666979f1274027fc5b82d74f69ad9f\"" Aug 13 07:09:31.931999 kubelet[2147]: E0813 07:09:31.931804 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://24.199.106.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-0-ae45d59eaf&limit=500&resourceVersion=0\": dial tcp 24.199.106.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:09:31.941128 containerd[1471]: time="2025-08-13T07:09:31.940964076Z" level=info msg="CreateContainer within sandbox \"d4625b65f385723b4af5c9ad859bcf2fa2c373927c463c4539c5576a77e4c129\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"21aebca0a8857436b43d9cc3737cac4dd360b8f8dc2b9aab697b8d27b4006bc3\"" Aug 13 07:09:31.942053 containerd[1471]: time="2025-08-13T07:09:31.941850140Z" level=info msg="StartContainer for \"21aebca0a8857436b43d9cc3737cac4dd360b8f8dc2b9aab697b8d27b4006bc3\"" Aug 13 07:09:31.964543 kubelet[2147]: E0813 07:09:31.963097 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://24.199.106.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.106.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:09:31.974076 systemd[1]: Started cri-containerd-9ee838506f6a03d8d05d42add20842083bcb3a5fe15599f312aacea2f12e7475.scope - libcontainer container 9ee838506f6a03d8d05d42add20842083bcb3a5fe15599f312aacea2f12e7475. Aug 13 07:09:31.998858 systemd[1]: Started cri-containerd-f9765910e1f8a29b83acb3d972ad22cf5b666979f1274027fc5b82d74f69ad9f.scope - libcontainer container f9765910e1f8a29b83acb3d972ad22cf5b666979f1274027fc5b82d74f69ad9f. Aug 13 07:09:32.002760 kubelet[2147]: I0813 07:09:32.002682 2147 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:32.004107 kubelet[2147]: E0813 07:09:32.004025 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.106.199:6443/api/v1/nodes\": dial tcp 24.199.106.199:6443: connect: connection refused" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:32.034006 systemd[1]: Started cri-containerd-21aebca0a8857436b43d9cc3737cac4dd360b8f8dc2b9aab697b8d27b4006bc3.scope - libcontainer container 21aebca0a8857436b43d9cc3737cac4dd360b8f8dc2b9aab697b8d27b4006bc3. 
Aug 13 07:09:32.089481 containerd[1471]: time="2025-08-13T07:09:32.088750105Z" level=info msg="StartContainer for \"9ee838506f6a03d8d05d42add20842083bcb3a5fe15599f312aacea2f12e7475\" returns successfully" Aug 13 07:09:32.119262 containerd[1471]: time="2025-08-13T07:09:32.119053407Z" level=info msg="StartContainer for \"f9765910e1f8a29b83acb3d972ad22cf5b666979f1274027fc5b82d74f69ad9f\" returns successfully" Aug 13 07:09:32.147127 containerd[1471]: time="2025-08-13T07:09:32.147064300Z" level=info msg="StartContainer for \"21aebca0a8857436b43d9cc3737cac4dd360b8f8dc2b9aab697b8d27b4006bc3\" returns successfully" Aug 13 07:09:32.432831 kubelet[2147]: E0813 07:09:32.432755 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://24.199.106.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.199.106.199:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 07:09:32.478400 kubelet[2147]: E0813 07:09:32.477897 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:32.478400 kubelet[2147]: E0813 07:09:32.478116 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:32.482540 kubelet[2147]: E0813 07:09:32.482502 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:32.483036 kubelet[2147]: E0813 07:09:32.482945 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:32.486849 kubelet[2147]: E0813 07:09:32.486812 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:32.487378 kubelet[2147]: E0813 07:09:32.487253 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:33.494236 kubelet[2147]: E0813 07:09:33.493949 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:33.495368 kubelet[2147]: E0813 07:09:33.495066 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:33.495964 kubelet[2147]: E0813 07:09:33.495935 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:33.496223 kubelet[2147]: E0813 07:09:33.496157 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:33.606225 kubelet[2147]: I0813 07:09:33.606127 2147 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:34.501250 kubelet[2147]: E0813 07:09:34.497429 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:34.501250 kubelet[2147]: E0813 07:09:34.497732 2147 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:34.957810 kubelet[2147]: E0813 07:09:34.957687 2147 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-0-ae45d59eaf\" not found" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:35.074474 kubelet[2147]: I0813 07:09:35.074383 2147 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:35.074474 kubelet[2147]: E0813 07:09:35.074439 2147 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-0-ae45d59eaf\": node \"ci-4081.3.5-0-ae45d59eaf\" not found" Aug 13 07:09:35.092440 kubelet[2147]: I0813 07:09:35.092328 2147 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:35.102624 kubelet[2147]: E0813 07:09:35.102376 2147 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.5-0-ae45d59eaf.185b41ee06f2c7c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-0-ae45d59eaf,UID:ci-4081.3.5-0-ae45d59eaf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-0-ae45d59eaf,},FirstTimestamp:2025-08-13 07:09:30.365536192 +0000 UTC m=+0.496159515,LastTimestamp:2025-08-13 07:09:30.365536192 +0000 UTC m=+0.496159515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-0-ae45d59eaf,}" Aug 13 07:09:35.107827 kubelet[2147]: E0813 07:09:35.107744 2147 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:35.107827 kubelet[2147]: I0813 07:09:35.107797 2147 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:35.112221 kubelet[2147]: E0813 07:09:35.111251 2147 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-0-ae45d59eaf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:35.112221 kubelet[2147]: I0813 07:09:35.111304 2147 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:35.114636 kubelet[2147]: E0813 07:09:35.114577 2147 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-0-ae45d59eaf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:35.359920 kubelet[2147]: I0813 07:09:35.359875 2147 apiserver.go:52] "Watching apiserver" Aug 13 07:09:35.393520 kubelet[2147]: I0813 07:09:35.393424 2147 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:09:35.962805 kubelet[2147]: I0813 07:09:35.962453 2147 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:35.971849 kubelet[2147]: I0813 07:09:35.971794 2147 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:35.972439 kubelet[2147]: E0813 07:09:35.972096 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:36.502134 kubelet[2147]: E0813 07:09:36.500847 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:37.351019 systemd[1]: Reloading requested from client PID 2432 ('systemctl') (unit session-7.scope)... Aug 13 07:09:37.351798 systemd[1]: Reloading... Aug 13 07:09:37.367194 kubelet[2147]: I0813 07:09:37.366970 2147 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:37.384545 kubelet[2147]: I0813 07:09:37.384003 2147 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:37.386759 kubelet[2147]: E0813 07:09:37.386613 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:37.502173 kubelet[2147]: E0813 07:09:37.502122 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:37.573746 zram_generator::config[2472]: No configuration found. Aug 13 07:09:37.807305 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:09:37.953290 systemd[1]: Reloading finished in 600 ms. Aug 13 07:09:38.013436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:38.030389 systemd[1]: kubelet.service: Deactivated successfully. 
Aug 13 07:09:38.030993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:38.031074 systemd[1]: kubelet.service: Consumed 1.128s CPU time, 128.5M memory peak, 0B memory swap peak. Aug 13 07:09:38.043836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:09:38.269665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:09:38.279746 (kubelet)[2522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:09:38.368389 kubelet[2522]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:09:38.368389 kubelet[2522]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:09:38.368389 kubelet[2522]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:09:38.368389 kubelet[2522]: I0813 07:09:38.367696 2522 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:09:38.381038 kubelet[2522]: I0813 07:09:38.380937 2522 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:09:38.381316 kubelet[2522]: I0813 07:09:38.381296 2522 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:09:38.382246 kubelet[2522]: I0813 07:09:38.381879 2522 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:09:38.386690 kubelet[2522]: I0813 07:09:38.386630 2522 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 07:09:38.399413 kubelet[2522]: I0813 07:09:38.399287 2522 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:09:38.409698 kubelet[2522]: E0813 07:09:38.409619 2522 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:09:38.409698 kubelet[2522]: I0813 07:09:38.409687 2522 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:09:38.422228 kubelet[2522]: I0813 07:09:38.421562 2522 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:09:38.424548 kubelet[2522]: I0813 07:09:38.424482 2522 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:09:38.426264 kubelet[2522]: I0813 07:09:38.424535 2522 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-0-ae45d59eaf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:09:38.426264 kubelet[2522]: I0813 07:09:38.424793 2522 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 
07:09:38.426264 kubelet[2522]: I0813 07:09:38.424808 2522 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:09:38.426264 kubelet[2522]: I0813 07:09:38.424876 2522 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:38.426264 kubelet[2522]: I0813 07:09:38.425094 2522 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:09:38.426658 kubelet[2522]: I0813 07:09:38.425116 2522 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:09:38.426658 kubelet[2522]: I0813 07:09:38.425147 2522 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:09:38.426658 kubelet[2522]: I0813 07:09:38.425165 2522 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:09:38.432984 kubelet[2522]: I0813 07:09:38.432906 2522 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:09:38.435778 kubelet[2522]: I0813 07:09:38.435538 2522 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 07:09:38.440620 kubelet[2522]: I0813 07:09:38.440587 2522 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:09:38.440810 kubelet[2522]: I0813 07:09:38.440798 2522 server.go:1289] "Started kubelet" Aug 13 07:09:38.445884 kubelet[2522]: I0813 07:09:38.445840 2522 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:09:38.464546 kubelet[2522]: I0813 07:09:38.462302 2522 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:09:38.472498 kubelet[2522]: I0813 07:09:38.472460 2522 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:09:38.487555 sudo[2538]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 07:09:38.488855 sudo[2538]: pam_unix(sudo:session): session opened for user root(uid=0) by 
core(uid=0) Aug 13 07:09:38.491113 kubelet[2522]: I0813 07:09:38.490129 2522 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:09:38.491511 kubelet[2522]: E0813 07:09:38.491483 2522 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-0-ae45d59eaf\" not found" Aug 13 07:09:38.493675 kubelet[2522]: I0813 07:09:38.493634 2522 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:09:38.493675 kubelet[2522]: I0813 07:09:38.485034 2522 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:09:38.494836 kubelet[2522]: I0813 07:09:38.493892 2522 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:09:38.497814 kubelet[2522]: I0813 07:09:38.474434 2522 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:09:38.511921 kubelet[2522]: I0813 07:09:38.511874 2522 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:09:38.513765 kubelet[2522]: I0813 07:09:38.513716 2522 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:09:38.517310 kubelet[2522]: E0813 07:09:38.516793 2522 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:09:38.519150 kubelet[2522]: I0813 07:09:38.519111 2522 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:09:38.519589 kubelet[2522]: I0813 07:09:38.519567 2522 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:09:38.544894 kubelet[2522]: I0813 07:09:38.544723 2522 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:09:38.568832 kubelet[2522]: I0813 07:09:38.567373 2522 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:09:38.568832 kubelet[2522]: I0813 07:09:38.567416 2522 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:09:38.568832 kubelet[2522]: I0813 07:09:38.567445 2522 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 07:09:38.568832 kubelet[2522]: I0813 07:09:38.567456 2522 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:09:38.568832 kubelet[2522]: E0813 07:09:38.567527 2522 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:09:38.656094 kubelet[2522]: I0813 07:09:38.656061 2522 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:09:38.656561 kubelet[2522]: I0813 07:09:38.656542 2522 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:09:38.656830 kubelet[2522]: I0813 07:09:38.656715 2522 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:09:38.657250 kubelet[2522]: I0813 07:09:38.657138 2522 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:09:38.657453 kubelet[2522]: I0813 07:09:38.657172 2522 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:09:38.657453 kubelet[2522]: I0813 07:09:38.657386 2522 
policy_none.go:49] "None policy: Start" Aug 13 07:09:38.657453 kubelet[2522]: I0813 07:09:38.657400 2522 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:09:38.657709 kubelet[2522]: I0813 07:09:38.657553 2522 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:09:38.657846 kubelet[2522]: I0813 07:09:38.657776 2522 state_mem.go:75] "Updated machine memory state" Aug 13 07:09:38.664869 kubelet[2522]: E0813 07:09:38.664788 2522 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:09:38.666301 kubelet[2522]: I0813 07:09:38.665615 2522 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:09:38.666301 kubelet[2522]: I0813 07:09:38.665729 2522 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:09:38.669954 kubelet[2522]: I0813 07:09:38.668838 2522 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:09:38.673450 kubelet[2522]: E0813 07:09:38.672164 2522 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:09:38.674510 kubelet[2522]: I0813 07:09:38.674480 2522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.680954 kubelet[2522]: I0813 07:09:38.680915 2522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.683292 kubelet[2522]: I0813 07:09:38.681757 2522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.712558 kubelet[2522]: I0813 07:09:38.712503 2522 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:38.714659 kubelet[2522]: I0813 07:09:38.714622 2522 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:38.715134 kubelet[2522]: E0813 07:09:38.714926 2522 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.721003 kubelet[2522]: I0813 07:09:38.720799 2522 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:38.721003 kubelet[2522]: E0813 07:09:38.720881 2522 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-0-ae45d59eaf\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.790612 kubelet[2522]: I0813 07:09:38.790568 2522 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.797620 kubelet[2522]: 
I0813 07:09:38.797459 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c9f8ed727c374fb640f6fa9f519d9f8-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" (UID: \"8c9f8ed727c374fb640f6fa9f519d9f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.797620 kubelet[2522]: I0813 07:09:38.797519 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c9f8ed727c374fb640f6fa9f519d9f8-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" (UID: \"8c9f8ed727c374fb640f6fa9f519d9f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.797620 kubelet[2522]: I0813 07:09:38.797573 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d499b9d98d44f24a1ebc61f401214b9c-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-0-ae45d59eaf\" (UID: \"d499b9d98d44f24a1ebc61f401214b9c\") " pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.797620 kubelet[2522]: I0813 07:09:38.797617 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d499b9d98d44f24a1ebc61f401214b9c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-0-ae45d59eaf\" (UID: \"d499b9d98d44f24a1ebc61f401214b9c\") " pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.797897 kubelet[2522]: I0813 07:09:38.797668 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d499b9d98d44f24a1ebc61f401214b9c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-0-ae45d59eaf\" (UID: 
\"d499b9d98d44f24a1ebc61f401214b9c\") " pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.797897 kubelet[2522]: I0813 07:09:38.797707 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c9f8ed727c374fb640f6fa9f519d9f8-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" (UID: \"8c9f8ed727c374fb640f6fa9f519d9f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.797897 kubelet[2522]: I0813 07:09:38.797752 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c9f8ed727c374fb640f6fa9f519d9f8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" (UID: \"8c9f8ed727c374fb640f6fa9f519d9f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.797897 kubelet[2522]: I0813 07:09:38.797790 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c9f8ed727c374fb640f6fa9f519d9f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-0-ae45d59eaf\" (UID: \"8c9f8ed727c374fb640f6fa9f519d9f8\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.797897 kubelet[2522]: I0813 07:09:38.797850 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/045543ba3e6b8d88bd8f6b20b618889e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-0-ae45d59eaf\" (UID: \"045543ba3e6b8d88bd8f6b20b618889e\") " pod="kube-system/kube-scheduler-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:38.813263 kubelet[2522]: I0813 07:09:38.813209 2522 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-0-ae45d59eaf" Aug 
13 07:09:38.813475 kubelet[2522]: I0813 07:09:38.813315 2522 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:39.016761 kubelet[2522]: E0813 07:09:39.016695 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:39.019397 kubelet[2522]: E0813 07:09:39.019043 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:39.022004 kubelet[2522]: E0813 07:09:39.021773 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:39.618385 systemd-timesyncd[1354]: Contacted time server 102.129.185.135:123 (2.flatcar.pool.ntp.org). Aug 13 07:09:39.618482 systemd-timesyncd[1354]: Initial clock synchronization to Wed 2025-08-13 07:09:39.617952 UTC. Aug 13 07:09:39.619304 systemd-resolved[1330]: Clock change detected. Flushing caches. 
Aug 13 07:09:39.977333 sudo[2538]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:40.006693 kubelet[2522]: I0813 07:09:40.006623 2522 apiserver.go:52] "Watching apiserver" Aug 13 07:09:40.046016 kubelet[2522]: I0813 07:09:40.045938 2522 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:09:40.169942 kubelet[2522]: E0813 07:09:40.169436 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:40.169942 kubelet[2522]: I0813 07:09:40.169607 2522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:40.170977 kubelet[2522]: I0813 07:09:40.170645 2522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:40.184973 kubelet[2522]: I0813 07:09:40.184120 2522 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:40.184973 kubelet[2522]: E0813 07:09:40.184209 2522 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-0-ae45d59eaf\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:40.184973 kubelet[2522]: E0813 07:09:40.184472 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:40.190186 kubelet[2522]: I0813 07:09:40.188841 2522 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 07:09:40.190559 kubelet[2522]: E0813 07:09:40.190522 2522 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-0-ae45d59eaf\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-0-ae45d59eaf" Aug 13 07:09:40.191005 kubelet[2522]: E0813 07:09:40.190959 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:40.247801 kubelet[2522]: I0813 07:09:40.247415 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-0-ae45d59eaf" podStartSLOduration=5.247395269 podStartE2EDuration="5.247395269s" podCreationTimestamp="2025-08-13 07:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:40.232046257 +0000 UTC m=+1.391235852" watchObservedRunningTime="2025-08-13 07:09:40.247395269 +0000 UTC m=+1.406584853" Aug 13 07:09:40.247801 kubelet[2522]: I0813 07:09:40.247569 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-0-ae45d59eaf" podStartSLOduration=3.247562975 podStartE2EDuration="3.247562975s" podCreationTimestamp="2025-08-13 07:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:40.247336834 +0000 UTC m=+1.406526444" watchObservedRunningTime="2025-08-13 07:09:40.247562975 +0000 UTC m=+1.406752570" Aug 13 07:09:40.283751 kubelet[2522]: I0813 07:09:40.283559 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-0-ae45d59eaf" podStartSLOduration=2.283540689 podStartE2EDuration="2.283540689s" podCreationTimestamp="2025-08-13 07:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-08-13 07:09:40.266470074 +0000 UTC m=+1.425659672" watchObservedRunningTime="2025-08-13 07:09:40.283540689 +0000 UTC m=+1.442730287" Aug 13 07:09:41.174763 kubelet[2522]: E0813 07:09:41.171556 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:41.174763 kubelet[2522]: E0813 07:09:41.172375 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:41.174763 kubelet[2522]: E0813 07:09:41.172768 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:41.997055 sudo[1652]: pam_unix(sudo:session): session closed for user root Aug 13 07:09:42.002108 sshd[1649]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:42.008537 systemd[1]: sshd@6-24.199.106.199:22-139.178.89.65:36594.service: Deactivated successfully. Aug 13 07:09:42.013134 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:09:42.013583 systemd[1]: session-7.scope: Consumed 7.900s CPU time, 144.9M memory peak, 0B memory swap peak. Aug 13 07:09:42.015798 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:09:42.017652 systemd-logind[1453]: Removed session 7. 
Aug 13 07:09:42.600900 kubelet[2522]: E0813 07:09:42.599581 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:42.721700 kubelet[2522]: I0813 07:09:42.721539 2522 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:09:42.724096 containerd[1471]: time="2025-08-13T07:09:42.722743164Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:09:42.724617 kubelet[2522]: I0813 07:09:42.723318 2522 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:09:43.175666 kubelet[2522]: E0813 07:09:43.175631 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:43.654033 systemd[1]: Created slice kubepods-besteffort-podb66e254e_2559_441a_94fa_0aeef2eee753.slice - libcontainer container kubepods-besteffort-podb66e254e_2559_441a_94fa_0aeef2eee753.slice. 
Aug 13 07:09:43.681796 kubelet[2522]: I0813 07:09:43.681367 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b66e254e-2559-441a-94fa-0aeef2eee753-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xvf4m\" (UID: \"b66e254e-2559-441a-94fa-0aeef2eee753\") " pod="kube-system/cilium-operator-6c4d7847fc-xvf4m" Aug 13 07:09:43.681796 kubelet[2522]: I0813 07:09:43.681424 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgst\" (UniqueName: \"kubernetes.io/projected/b66e254e-2559-441a-94fa-0aeef2eee753-kube-api-access-mvgst\") pod \"cilium-operator-6c4d7847fc-xvf4m\" (UID: \"b66e254e-2559-441a-94fa-0aeef2eee753\") " pod="kube-system/cilium-operator-6c4d7847fc-xvf4m" Aug 13 07:09:43.740017 systemd[1]: Created slice kubepods-besteffort-pode8fc3884_5ba6_4868_9e93_8ae3e60f9017.slice - libcontainer container kubepods-besteffort-pode8fc3884_5ba6_4868_9e93_8ae3e60f9017.slice. Aug 13 07:09:43.758923 systemd[1]: Created slice kubepods-burstable-podb15c92a8_2ffd_4846_b24c_50aafaaf1856.slice - libcontainer container kubepods-burstable-podb15c92a8_2ffd_4846_b24c_50aafaaf1856.slice. 
Aug 13 07:09:43.782552 kubelet[2522]: I0813 07:09:43.782489 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8fc3884-5ba6-4868-9e93-8ae3e60f9017-kube-proxy\") pod \"kube-proxy-4t8cr\" (UID: \"e8fc3884-5ba6-4868-9e93-8ae3e60f9017\") " pod="kube-system/kube-proxy-4t8cr" Aug 13 07:09:43.782829 kubelet[2522]: I0813 07:09:43.782810 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-run\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.782953 kubelet[2522]: I0813 07:09:43.782931 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-lib-modules\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.783126 kubelet[2522]: I0813 07:09:43.783109 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-host-proc-sys-kernel\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.783252 kubelet[2522]: I0813 07:09:43.783232 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8fc3884-5ba6-4868-9e93-8ae3e60f9017-lib-modules\") pod \"kube-proxy-4t8cr\" (UID: \"e8fc3884-5ba6-4868-9e93-8ae3e60f9017\") " pod="kube-system/kube-proxy-4t8cr" Aug 13 07:09:43.783397 kubelet[2522]: I0813 07:09:43.783352 2522 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq2jw\" (UniqueName: \"kubernetes.io/projected/e8fc3884-5ba6-4868-9e93-8ae3e60f9017-kube-api-access-xq2jw\") pod \"kube-proxy-4t8cr\" (UID: \"e8fc3884-5ba6-4868-9e93-8ae3e60f9017\") " pod="kube-system/kube-proxy-4t8cr" Aug 13 07:09:43.783536 kubelet[2522]: I0813 07:09:43.783467 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-hostproc\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.783536 kubelet[2522]: I0813 07:09:43.783508 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-etc-cni-netd\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.783950 kubelet[2522]: I0813 07:09:43.783794 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8fc3884-5ba6-4868-9e93-8ae3e60f9017-xtables-lock\") pod \"kube-proxy-4t8cr\" (UID: \"e8fc3884-5ba6-4868-9e93-8ae3e60f9017\") " pod="kube-system/kube-proxy-4t8cr" Aug 13 07:09:43.783950 kubelet[2522]: I0813 07:09:43.783846 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-xtables-lock\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.783950 kubelet[2522]: I0813 07:09:43.783900 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jv8n\" (UniqueName: 
\"kubernetes.io/projected/b15c92a8-2ffd-4846-b24c-50aafaaf1856-kube-api-access-7jv8n\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.783950 kubelet[2522]: I0813 07:09:43.783948 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b15c92a8-2ffd-4846-b24c-50aafaaf1856-clustermesh-secrets\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.784139 kubelet[2522]: I0813 07:09:43.783973 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b15c92a8-2ffd-4846-b24c-50aafaaf1856-hubble-tls\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.784139 kubelet[2522]: I0813 07:09:43.784006 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-bpf-maps\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.784139 kubelet[2522]: I0813 07:09:43.784033 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-cgroup\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.784139 kubelet[2522]: I0813 07:09:43.784075 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-config-path\") pod \"cilium-x72xb\" (UID: 
\"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.784139 kubelet[2522]: I0813 07:09:43.784103 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-host-proc-sys-net\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.784263 kubelet[2522]: I0813 07:09:43.784168 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cni-path\") pod \"cilium-x72xb\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") " pod="kube-system/cilium-x72xb" Aug 13 07:09:43.866898 kubelet[2522]: E0813 07:09:43.865491 2522 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 07:09:43.866898 kubelet[2522]: E0813 07:09:43.865544 2522 projected.go:194] Error preparing data for projected volume kube-api-access-mvgst for pod kube-system/cilium-operator-6c4d7847fc-xvf4m: configmap "kube-root-ca.crt" not found Aug 13 07:09:43.866898 kubelet[2522]: E0813 07:09:43.865658 2522 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b66e254e-2559-441a-94fa-0aeef2eee753-kube-api-access-mvgst podName:b66e254e-2559-441a-94fa-0aeef2eee753 nodeName:}" failed. No retries permitted until 2025-08-13 07:09:44.365628067 +0000 UTC m=+5.524817665 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mvgst" (UniqueName: "kubernetes.io/projected/b66e254e-2559-441a-94fa-0aeef2eee753-kube-api-access-mvgst") pod "cilium-operator-6c4d7847fc-xvf4m" (UID: "b66e254e-2559-441a-94fa-0aeef2eee753") : configmap "kube-root-ca.crt" not found Aug 13 07:09:43.921506 kubelet[2522]: E0813 07:09:43.918349 2522 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 07:09:43.921506 kubelet[2522]: E0813 07:09:43.918396 2522 projected.go:194] Error preparing data for projected volume kube-api-access-xq2jw for pod kube-system/kube-proxy-4t8cr: configmap "kube-root-ca.crt" not found Aug 13 07:09:43.921506 kubelet[2522]: E0813 07:09:43.918467 2522 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8fc3884-5ba6-4868-9e93-8ae3e60f9017-kube-api-access-xq2jw podName:e8fc3884-5ba6-4868-9e93-8ae3e60f9017 nodeName:}" failed. No retries permitted until 2025-08-13 07:09:44.418439085 +0000 UTC m=+5.577628671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xq2jw" (UniqueName: "kubernetes.io/projected/e8fc3884-5ba6-4868-9e93-8ae3e60f9017-kube-api-access-xq2jw") pod "kube-proxy-4t8cr" (UID: "e8fc3884-5ba6-4868-9e93-8ae3e60f9017") : configmap "kube-root-ca.crt" not found Aug 13 07:09:44.067289 kubelet[2522]: E0813 07:09:44.067228 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:44.068338 containerd[1471]: time="2025-08-13T07:09:44.068250367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x72xb,Uid:b15c92a8-2ffd-4846-b24c-50aafaaf1856,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:44.118240 containerd[1471]: time="2025-08-13T07:09:44.118061615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:44.118240 containerd[1471]: time="2025-08-13T07:09:44.118151852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:44.118240 containerd[1471]: time="2025-08-13T07:09:44.118169714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.131107 containerd[1471]: time="2025-08-13T07:09:44.118306420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.151283 systemd[1]: Started cri-containerd-52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d.scope - libcontainer container 52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d. Aug 13 07:09:44.195409 containerd[1471]: time="2025-08-13T07:09:44.194505147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x72xb,Uid:b15c92a8-2ffd-4846-b24c-50aafaaf1856,Namespace:kube-system,Attempt:0,} returns sandbox id \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\"" Aug 13 07:09:44.196736 kubelet[2522]: E0813 07:09:44.196239 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:44.200746 containerd[1471]: time="2025-08-13T07:09:44.200672984Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 07:09:44.566457 kubelet[2522]: E0813 07:09:44.565937 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:44.567570 containerd[1471]: 
time="2025-08-13T07:09:44.567236999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xvf4m,Uid:b66e254e-2559-441a-94fa-0aeef2eee753,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:44.615902 containerd[1471]: time="2025-08-13T07:09:44.615068517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:44.615902 containerd[1471]: time="2025-08-13T07:09:44.615810162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:44.616372 containerd[1471]: time="2025-08-13T07:09:44.615833789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.616372 containerd[1471]: time="2025-08-13T07:09:44.616076271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.651535 kubelet[2522]: E0813 07:09:44.648577 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:44.651833 containerd[1471]: time="2025-08-13T07:09:44.649929411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4t8cr,Uid:e8fc3884-5ba6-4868-9e93-8ae3e60f9017,Namespace:kube-system,Attempt:0,}" Aug 13 07:09:44.663580 systemd[1]: Started cri-containerd-05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612.scope - libcontainer container 05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612. Aug 13 07:09:44.722897 containerd[1471]: time="2025-08-13T07:09:44.720427201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:44.722897 containerd[1471]: time="2025-08-13T07:09:44.720528423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:44.722897 containerd[1471]: time="2025-08-13T07:09:44.720549337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.722897 containerd[1471]: time="2025-08-13T07:09:44.720680635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:44.755300 systemd[1]: Started cri-containerd-bf2a3316a2ce89489f38d15ed78d2cd732022190d95deef0ca093e2895ffeb1e.scope - libcontainer container bf2a3316a2ce89489f38d15ed78d2cd732022190d95deef0ca093e2895ffeb1e. Aug 13 07:09:44.765072 containerd[1471]: time="2025-08-13T07:09:44.764987685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xvf4m,Uid:b66e254e-2559-441a-94fa-0aeef2eee753,Namespace:kube-system,Attempt:0,} returns sandbox id \"05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612\"" Aug 13 07:09:44.766919 kubelet[2522]: E0813 07:09:44.766604 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:44.801352 containerd[1471]: time="2025-08-13T07:09:44.801288789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4t8cr,Uid:e8fc3884-5ba6-4868-9e93-8ae3e60f9017,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf2a3316a2ce89489f38d15ed78d2cd732022190d95deef0ca093e2895ffeb1e\"" Aug 13 07:09:44.802726 kubelet[2522]: E0813 07:09:44.802694 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:44.815722 containerd[1471]: time="2025-08-13T07:09:44.815654790Z" level=info msg="CreateContainer within sandbox \"bf2a3316a2ce89489f38d15ed78d2cd732022190d95deef0ca093e2895ffeb1e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:09:44.841674 containerd[1471]: time="2025-08-13T07:09:44.841417418Z" level=info msg="CreateContainer within sandbox \"bf2a3316a2ce89489f38d15ed78d2cd732022190d95deef0ca093e2895ffeb1e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f750b7a06c286bfd41b9ccbe208ae321c82b62cb1be89c0f1ff532b65e6e4de\"" Aug 13 07:09:44.845164 containerd[1471]: time="2025-08-13T07:09:44.845102809Z" level=info msg="StartContainer for \"4f750b7a06c286bfd41b9ccbe208ae321c82b62cb1be89c0f1ff532b65e6e4de\"" Aug 13 07:09:44.887179 systemd[1]: Started cri-containerd-4f750b7a06c286bfd41b9ccbe208ae321c82b62cb1be89c0f1ff532b65e6e4de.scope - libcontainer container 4f750b7a06c286bfd41b9ccbe208ae321c82b62cb1be89c0f1ff532b65e6e4de. 
Aug 13 07:09:44.941970 containerd[1471]: time="2025-08-13T07:09:44.941288551Z" level=info msg="StartContainer for \"4f750b7a06c286bfd41b9ccbe208ae321c82b62cb1be89c0f1ff532b65e6e4de\" returns successfully" Aug 13 07:09:45.186622 kubelet[2522]: E0813 07:09:45.185514 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:47.141535 kubelet[2522]: E0813 07:09:47.141490 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:47.179498 kubelet[2522]: I0813 07:09:47.179092 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4t8cr" podStartSLOduration=4.179069052 podStartE2EDuration="4.179069052s" podCreationTimestamp="2025-08-13 07:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:45.214481998 +0000 UTC m=+6.373671601" watchObservedRunningTime="2025-08-13 07:09:47.179069052 +0000 UTC m=+8.338258642" Aug 13 07:09:47.220619 kubelet[2522]: E0813 07:09:47.220111 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:48.223246 kubelet[2522]: E0813 07:09:48.222693 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:48.876047 update_engine[1455]: I20250813 07:09:48.875941 1455 update_attempter.cc:509] Updating boot flags... 
Aug 13 07:09:48.975332 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2900)
Aug 13 07:09:49.056541 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2901)
Aug 13 07:09:50.175915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835537966.mount: Deactivated successfully.
Aug 13 07:09:51.171832 kubelet[2522]: E0813 07:09:51.171358 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:09:53.475320 containerd[1471]: time="2025-08-13T07:09:53.475197367Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:53.477732 containerd[1471]: time="2025-08-13T07:09:53.477644588Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Aug 13 07:09:53.478797 containerd[1471]: time="2025-08-13T07:09:53.478735494Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:53.481542 containerd[1471]: time="2025-08-13T07:09:53.481300331Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.280557353s"
Aug 13 07:09:53.481542 containerd[1471]: time="2025-08-13T07:09:53.481367623Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 07:09:53.485063 containerd[1471]: time="2025-08-13T07:09:53.484004778Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 07:09:53.489941 containerd[1471]: time="2025-08-13T07:09:53.489807619Z" level=info msg="CreateContainer within sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 07:09:53.594958 containerd[1471]: time="2025-08-13T07:09:53.594833612Z" level=info msg="CreateContainer within sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\""
Aug 13 07:09:53.597356 containerd[1471]: time="2025-08-13T07:09:53.597304119Z" level=info msg="StartContainer for \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\""
Aug 13 07:09:53.857214 systemd[1]: Started cri-containerd-f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50.scope - libcontainer container f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50.
Aug 13 07:09:53.904995 containerd[1471]: time="2025-08-13T07:09:53.904524913Z" level=info msg="StartContainer for \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\" returns successfully"
Aug 13 07:09:53.926768 systemd[1]: cri-containerd-f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50.scope: Deactivated successfully.
Aug 13 07:09:54.164155 containerd[1471]: time="2025-08-13T07:09:54.149631248Z" level=info msg="shim disconnected" id=f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50 namespace=k8s.io
Aug 13 07:09:54.164155 containerd[1471]: time="2025-08-13T07:09:54.163805163Z" level=warning msg="cleaning up after shim disconnected" id=f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50 namespace=k8s.io
Aug 13 07:09:54.164155 containerd[1471]: time="2025-08-13T07:09:54.163838052Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:09:54.252933 kubelet[2522]: E0813 07:09:54.252019 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:09:54.262702 containerd[1471]: time="2025-08-13T07:09:54.262623272Z" level=info msg="CreateContainer within sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 07:09:54.290402 containerd[1471]: time="2025-08-13T07:09:54.290163808Z" level=info msg="CreateContainer within sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\""
Aug 13 07:09:54.294886 containerd[1471]: time="2025-08-13T07:09:54.294521201Z" level=info msg="StartContainer for \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\""
Aug 13 07:09:54.335284 systemd[1]: Started cri-containerd-811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7.scope - libcontainer container 811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7.
Aug 13 07:09:54.386841 containerd[1471]: time="2025-08-13T07:09:54.386593335Z" level=info msg="StartContainer for \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\" returns successfully"
Aug 13 07:09:54.403309 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:09:54.403697 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:09:54.403823 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:09:54.414468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:09:54.416619 systemd[1]: cri-containerd-811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7.scope: Deactivated successfully.
Aug 13 07:09:54.460616 containerd[1471]: time="2025-08-13T07:09:54.460289267Z" level=info msg="shim disconnected" id=811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7 namespace=k8s.io
Aug 13 07:09:54.460616 containerd[1471]: time="2025-08-13T07:09:54.460370643Z" level=warning msg="cleaning up after shim disconnected" id=811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7 namespace=k8s.io
Aug 13 07:09:54.460616 containerd[1471]: time="2025-08-13T07:09:54.460383345Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:09:54.480573 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:09:54.571402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50-rootfs.mount: Deactivated successfully.
Aug 13 07:09:55.259638 kubelet[2522]: E0813 07:09:55.259572 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:09:55.319124 containerd[1471]: time="2025-08-13T07:09:55.319069945Z" level=info msg="CreateContainer within sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 07:09:55.467337 containerd[1471]: time="2025-08-13T07:09:55.467166817Z" level=info msg="CreateContainer within sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\""
Aug 13 07:09:55.470570 containerd[1471]: time="2025-08-13T07:09:55.470500430Z" level=info msg="StartContainer for \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\""
Aug 13 07:09:55.472286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975890537.mount: Deactivated successfully.
Aug 13 07:09:55.531200 systemd[1]: Started cri-containerd-f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348.scope - libcontainer container f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348.
Aug 13 07:09:55.634503 containerd[1471]: time="2025-08-13T07:09:55.633735442Z" level=info msg="StartContainer for \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\" returns successfully"
Aug 13 07:09:55.637176 systemd[1]: cri-containerd-f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348.scope: Deactivated successfully.
Aug 13 07:09:55.707966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348-rootfs.mount: Deactivated successfully.
Aug 13 07:09:55.718944 containerd[1471]: time="2025-08-13T07:09:55.718623211Z" level=info msg="shim disconnected" id=f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348 namespace=k8s.io
Aug 13 07:09:55.718944 containerd[1471]: time="2025-08-13T07:09:55.718683706Z" level=warning msg="cleaning up after shim disconnected" id=f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348 namespace=k8s.io
Aug 13 07:09:55.718944 containerd[1471]: time="2025-08-13T07:09:55.718713038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:09:56.268660 kubelet[2522]: E0813 07:09:56.268610 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:09:56.286986 containerd[1471]: time="2025-08-13T07:09:56.286707759Z" level=info msg="CreateContainer within sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 07:09:56.316367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639794019.mount: Deactivated successfully.
Aug 13 07:09:56.325478 containerd[1471]: time="2025-08-13T07:09:56.325323487Z" level=info msg="CreateContainer within sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\""
Aug 13 07:09:56.327544 containerd[1471]: time="2025-08-13T07:09:56.327495407Z" level=info msg="StartContainer for \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\""
Aug 13 07:09:56.380994 containerd[1471]: time="2025-08-13T07:09:56.380936073Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:56.383784 containerd[1471]: time="2025-08-13T07:09:56.383276571Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Aug 13 07:09:56.385963 containerd[1471]: time="2025-08-13T07:09:56.384824067Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:09:56.386481 systemd[1]: Started cri-containerd-a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6.scope - libcontainer container a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6.
Aug 13 07:09:56.394915 containerd[1471]: time="2025-08-13T07:09:56.393828853Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.909767805s"
Aug 13 07:09:56.394915 containerd[1471]: time="2025-08-13T07:09:56.394017146Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 07:09:56.459334 containerd[1471]: time="2025-08-13T07:09:56.459275596Z" level=info msg="CreateContainer within sandbox \"05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 07:09:56.471230 systemd[1]: cri-containerd-a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6.scope: Deactivated successfully.
Aug 13 07:09:56.477393 containerd[1471]: time="2025-08-13T07:09:56.477133115Z" level=info msg="StartContainer for \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\" returns successfully"
Aug 13 07:09:56.490337 containerd[1471]: time="2025-08-13T07:09:56.490140882Z" level=info msg="CreateContainer within sandbox \"05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\""
Aug 13 07:09:56.495679 containerd[1471]: time="2025-08-13T07:09:56.492673517Z" level=info msg="StartContainer for \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\""
Aug 13 07:09:56.543186 containerd[1471]: time="2025-08-13T07:09:56.541794039Z" level=info msg="shim disconnected" id=a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6 namespace=k8s.io
Aug 13 07:09:56.543186 containerd[1471]: time="2025-08-13T07:09:56.542996661Z" level=warning msg="cleaning up after shim disconnected" id=a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6 namespace=k8s.io
Aug 13 07:09:56.543186 containerd[1471]: time="2025-08-13T07:09:56.543018312Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:09:56.554203 systemd[1]: Started cri-containerd-1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68.scope - libcontainer container 1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68.
Aug 13 07:09:56.573740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6-rootfs.mount: Deactivated successfully.
Aug 13 07:09:56.601623 containerd[1471]: time="2025-08-13T07:09:56.601551338Z" level=info msg="StartContainer for \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\" returns successfully"
Aug 13 07:09:57.278892 kubelet[2522]: E0813 07:09:57.278830 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:09:57.282678 kubelet[2522]: E0813 07:09:57.282078 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:09:57.287837 containerd[1471]: time="2025-08-13T07:09:57.287559899Z" level=info msg="CreateContainer within sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 07:09:57.316738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469425531.mount: Deactivated successfully.
Aug 13 07:09:57.321351 containerd[1471]: time="2025-08-13T07:09:57.319065966Z" level=info msg="CreateContainer within sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\""
Aug 13 07:09:57.322016 containerd[1471]: time="2025-08-13T07:09:57.321719440Z" level=info msg="StartContainer for \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\""
Aug 13 07:09:57.397168 systemd[1]: Started cri-containerd-a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626.scope - libcontainer container a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626.
Aug 13 07:09:57.510634 containerd[1471]: time="2025-08-13T07:09:57.510576874Z" level=info msg="StartContainer for \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\" returns successfully"
Aug 13 07:09:57.973027 kubelet[2522]: I0813 07:09:57.971355 2522 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 07:09:58.043240 kubelet[2522]: I0813 07:09:58.040427 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xvf4m" podStartSLOduration=3.406837754 podStartE2EDuration="15.039412412s" podCreationTimestamp="2025-08-13 07:09:43 +0000 UTC" firstStartedPulling="2025-08-13 07:09:44.769152506 +0000 UTC m=+5.928342076" lastFinishedPulling="2025-08-13 07:09:56.401727146 +0000 UTC m=+17.560916734" observedRunningTime="2025-08-13 07:09:57.629601976 +0000 UTC m=+18.788791578" watchObservedRunningTime="2025-08-13 07:09:58.039412412 +0000 UTC m=+19.198602007"
Aug 13 07:09:58.065945 systemd[1]: Created slice kubepods-burstable-pod0ea57cc4_1400_4432_9fa1_127479408849.slice - libcontainer container kubepods-burstable-pod0ea57cc4_1400_4432_9fa1_127479408849.slice.
Aug 13 07:09:58.084659 systemd[1]: Created slice kubepods-burstable-podfe5db471_f2c2_404f_a65a_5a1914f5136e.slice - libcontainer container kubepods-burstable-podfe5db471_f2c2_404f_a65a_5a1914f5136e.slice.
Aug 13 07:09:58.101380 kubelet[2522]: I0813 07:09:58.101118 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ea57cc4-1400-4432-9fa1-127479408849-config-volume\") pod \"coredns-674b8bbfcf-k6pwx\" (UID: \"0ea57cc4-1400-4432-9fa1-127479408849\") " pod="kube-system/coredns-674b8bbfcf-k6pwx"
Aug 13 07:09:58.101380 kubelet[2522]: I0813 07:09:58.101196 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66md6\" (UniqueName: \"kubernetes.io/projected/fe5db471-f2c2-404f-a65a-5a1914f5136e-kube-api-access-66md6\") pod \"coredns-674b8bbfcf-96mqs\" (UID: \"fe5db471-f2c2-404f-a65a-5a1914f5136e\") " pod="kube-system/coredns-674b8bbfcf-96mqs"
Aug 13 07:09:58.101380 kubelet[2522]: I0813 07:09:58.101237 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe5db471-f2c2-404f-a65a-5a1914f5136e-config-volume\") pod \"coredns-674b8bbfcf-96mqs\" (UID: \"fe5db471-f2c2-404f-a65a-5a1914f5136e\") " pod="kube-system/coredns-674b8bbfcf-96mqs"
Aug 13 07:09:58.101380 kubelet[2522]: I0813 07:09:58.101278 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f6fx\" (UniqueName: \"kubernetes.io/projected/0ea57cc4-1400-4432-9fa1-127479408849-kube-api-access-6f6fx\") pod \"coredns-674b8bbfcf-k6pwx\" (UID: \"0ea57cc4-1400-4432-9fa1-127479408849\") " pod="kube-system/coredns-674b8bbfcf-k6pwx"
Aug 13 07:09:58.293688 kubelet[2522]: E0813 07:09:58.292826 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:09:58.293688 kubelet[2522]: E0813 07:09:58.292930 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:09:58.325443 kubelet[2522]: I0813 07:09:58.325359 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x72xb" podStartSLOduration=6.041633975 podStartE2EDuration="15.325333456s" podCreationTimestamp="2025-08-13 07:09:43 +0000 UTC" firstStartedPulling="2025-08-13 07:09:44.19939095 +0000 UTC m=+5.358580544" lastFinishedPulling="2025-08-13 07:09:53.483090447 +0000 UTC m=+14.642280025" observedRunningTime="2025-08-13 07:09:58.324224569 +0000 UTC m=+19.483414162" watchObservedRunningTime="2025-08-13 07:09:58.325333456 +0000 UTC m=+19.484523053"
Aug 13 07:09:58.377775 kubelet[2522]: E0813 07:09:58.377700 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:09:58.380105 containerd[1471]: time="2025-08-13T07:09:58.379234003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k6pwx,Uid:0ea57cc4-1400-4432-9fa1-127479408849,Namespace:kube-system,Attempt:0,}"
Aug 13 07:09:58.395073 kubelet[2522]: E0813 07:09:58.394580 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:09:58.402577 containerd[1471]: time="2025-08-13T07:09:58.399270377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-96mqs,Uid:fe5db471-f2c2-404f-a65a-5a1914f5136e,Namespace:kube-system,Attempt:0,}"
Aug 13 07:09:59.298120 kubelet[2522]: E0813 07:09:59.298087 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:00.301047 kubelet[2522]: E0813 07:10:00.300266 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:00.714611 systemd-networkd[1375]: cilium_host: Link UP
Aug 13 07:10:00.717159 systemd-networkd[1375]: cilium_net: Link UP
Aug 13 07:10:00.719172 systemd-networkd[1375]: cilium_net: Gained carrier
Aug 13 07:10:00.719550 systemd-networkd[1375]: cilium_host: Gained carrier
Aug 13 07:10:00.936066 systemd-networkd[1375]: cilium_vxlan: Link UP
Aug 13 07:10:00.937076 systemd-networkd[1375]: cilium_vxlan: Gained carrier
Aug 13 07:10:01.307022 systemd-networkd[1375]: cilium_host: Gained IPv6LL
Aug 13 07:10:01.370726 systemd-networkd[1375]: cilium_net: Gained IPv6LL
Aug 13 07:10:01.580386 kernel: NET: Registered PF_ALG protocol family
Aug 13 07:10:02.329410 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL
Aug 13 07:10:03.597970 systemd-networkd[1375]: lxc_health: Link UP
Aug 13 07:10:03.609786 systemd-networkd[1375]: lxc_health: Gained carrier
Aug 13 07:10:04.046840 systemd-networkd[1375]: lxc3300acfa5ff0: Link UP
Aug 13 07:10:04.060963 kernel: eth0: renamed from tmpbb299
Aug 13 07:10:04.069044 systemd-networkd[1375]: lxc3300acfa5ff0: Gained carrier
Aug 13 07:10:04.081093 kubelet[2522]: E0813 07:10:04.081030 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:04.104454 systemd-networkd[1375]: lxc27759c5bdbf7: Link UP
Aug 13 07:10:04.115065 kernel: eth0: renamed from tmp96ba8
Aug 13 07:10:04.124220 systemd-networkd[1375]: lxc27759c5bdbf7: Gained carrier
Aug 13 07:10:04.319957 kubelet[2522]: E0813 07:10:04.319442 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:05.020117 systemd-networkd[1375]: lxc_health: Gained IPv6LL
Aug 13 07:10:05.208122 systemd-networkd[1375]: lxc27759c5bdbf7: Gained IPv6LL
Aug 13 07:10:05.326057 kubelet[2522]: E0813 07:10:05.325527 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:05.464117 systemd-networkd[1375]: lxc3300acfa5ff0: Gained IPv6LL
Aug 13 07:10:10.853689 containerd[1471]: time="2025-08-13T07:10:10.851285410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:10:10.853689 containerd[1471]: time="2025-08-13T07:10:10.853624154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:10:10.853689 containerd[1471]: time="2025-08-13T07:10:10.853648238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:10:10.854553 containerd[1471]: time="2025-08-13T07:10:10.853810363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:10:10.914606 systemd[1]: Started cri-containerd-96ba8922d940ff196839edc90502b93d15e63532af3894c562ff3c857034861e.scope - libcontainer container 96ba8922d940ff196839edc90502b93d15e63532af3894c562ff3c857034861e.
Aug 13 07:10:10.938242 containerd[1471]: time="2025-08-13T07:10:10.936378491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:10:10.938242 containerd[1471]: time="2025-08-13T07:10:10.937958211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:10:10.938242 containerd[1471]: time="2025-08-13T07:10:10.937999864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:10:10.944572 containerd[1471]: time="2025-08-13T07:10:10.941731612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:10:11.003192 systemd[1]: Started cri-containerd-bb29906d55c93b9abd83e29d35f196911946dbe9b43de8497bf3aa5d682cc2cd.scope - libcontainer container bb29906d55c93b9abd83e29d35f196911946dbe9b43de8497bf3aa5d682cc2cd.
Aug 13 07:10:11.099949 containerd[1471]: time="2025-08-13T07:10:11.098622305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k6pwx,Uid:0ea57cc4-1400-4432-9fa1-127479408849,Namespace:kube-system,Attempt:0,} returns sandbox id \"96ba8922d940ff196839edc90502b93d15e63532af3894c562ff3c857034861e\""
Aug 13 07:10:11.100173 kubelet[2522]: E0813 07:10:11.100074 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:11.111792 containerd[1471]: time="2025-08-13T07:10:11.110801125Z" level=info msg="CreateContainer within sandbox \"96ba8922d940ff196839edc90502b93d15e63532af3894c562ff3c857034861e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 07:10:11.153986 containerd[1471]: time="2025-08-13T07:10:11.153627062Z" level=info msg="CreateContainer within sandbox \"96ba8922d940ff196839edc90502b93d15e63532af3894c562ff3c857034861e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"936e4800f52926bf429f84be1ce1c38b46444d21798614274650d5767feae1c6\""
Aug 13 07:10:11.159973 containerd[1471]: time="2025-08-13T07:10:11.159722601Z" level=info msg="StartContainer for \"936e4800f52926bf429f84be1ce1c38b46444d21798614274650d5767feae1c6\""
Aug 13 07:10:11.177224 containerd[1471]: time="2025-08-13T07:10:11.176606836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-96mqs,Uid:fe5db471-f2c2-404f-a65a-5a1914f5136e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb29906d55c93b9abd83e29d35f196911946dbe9b43de8497bf3aa5d682cc2cd\""
Aug 13 07:10:11.179781 kubelet[2522]: E0813 07:10:11.179467 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:11.194348 containerd[1471]: time="2025-08-13T07:10:11.194249965Z" level=info msg="CreateContainer within sandbox \"bb29906d55c93b9abd83e29d35f196911946dbe9b43de8497bf3aa5d682cc2cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 07:10:11.220053 containerd[1471]: time="2025-08-13T07:10:11.219127550Z" level=info msg="CreateContainer within sandbox \"bb29906d55c93b9abd83e29d35f196911946dbe9b43de8497bf3aa5d682cc2cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ca1b250a8e141954350aaf505d26896267f09e50fb4469b021067bfe25529af\""
Aug 13 07:10:11.225694 containerd[1471]: time="2025-08-13T07:10:11.225638770Z" level=info msg="StartContainer for \"9ca1b250a8e141954350aaf505d26896267f09e50fb4469b021067bfe25529af\""
Aug 13 07:10:11.237161 systemd[1]: Started cri-containerd-936e4800f52926bf429f84be1ce1c38b46444d21798614274650d5767feae1c6.scope - libcontainer container 936e4800f52926bf429f84be1ce1c38b46444d21798614274650d5767feae1c6.
Aug 13 07:10:11.305178 systemd[1]: Started cri-containerd-9ca1b250a8e141954350aaf505d26896267f09e50fb4469b021067bfe25529af.scope - libcontainer container 9ca1b250a8e141954350aaf505d26896267f09e50fb4469b021067bfe25529af.
Aug 13 07:10:11.317231 containerd[1471]: time="2025-08-13T07:10:11.317167308Z" level=info msg="StartContainer for \"936e4800f52926bf429f84be1ce1c38b46444d21798614274650d5767feae1c6\" returns successfully"
Aug 13 07:10:11.353224 kubelet[2522]: E0813 07:10:11.353167 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:11.385768 kubelet[2522]: I0813 07:10:11.384978 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k6pwx" podStartSLOduration=28.38492239 podStartE2EDuration="28.38492239s" podCreationTimestamp="2025-08-13 07:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:11.381831872 +0000 UTC m=+32.541021473" watchObservedRunningTime="2025-08-13 07:10:11.38492239 +0000 UTC m=+32.544111995"
Aug 13 07:10:11.396363 containerd[1471]: time="2025-08-13T07:10:11.396270946Z" level=info msg="StartContainer for \"9ca1b250a8e141954350aaf505d26896267f09e50fb4469b021067bfe25529af\" returns successfully"
Aug 13 07:10:11.866162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230162003.mount: Deactivated successfully.
Aug 13 07:10:12.372362 kubelet[2522]: E0813 07:10:12.372280 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:12.375112 kubelet[2522]: E0813 07:10:12.374379 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:12.398169 kubelet[2522]: I0813 07:10:12.397732 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-96mqs" podStartSLOduration=29.397703457 podStartE2EDuration="29.397703457s" podCreationTimestamp="2025-08-13 07:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:10:12.395811975 +0000 UTC m=+33.555001580" watchObservedRunningTime="2025-08-13 07:10:12.397703457 +0000 UTC m=+33.556893058"
Aug 13 07:10:13.374920 kubelet[2522]: E0813 07:10:13.374507 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:13.376417 kubelet[2522]: E0813 07:10:13.376290 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:14.376942 kubelet[2522]: E0813 07:10:14.376702 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:21.010477 systemd[1]: Started sshd@7-24.199.106.199:22-139.178.89.65:35990.service - OpenSSH per-connection server daemon (139.178.89.65:35990).
Aug 13 07:10:21.113494 sshd[3928]: Accepted publickey for core from 139.178.89.65 port 35990 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:21.117468 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:21.130199 systemd-logind[1453]: New session 8 of user core.
Aug 13 07:10:21.138141 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 07:10:21.906300 sshd[3928]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:21.911750 systemd[1]: sshd@7-24.199.106.199:22-139.178.89.65:35990.service: Deactivated successfully.
Aug 13 07:10:21.917614 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 07:10:21.920350 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit.
Aug 13 07:10:21.927098 systemd-logind[1453]: Removed session 8.
Aug 13 07:10:26.925460 systemd[1]: Started sshd@8-24.199.106.199:22-139.178.89.65:36002.service - OpenSSH per-connection server daemon (139.178.89.65:36002).
Aug 13 07:10:26.986526 sshd[3942]: Accepted publickey for core from 139.178.89.65 port 36002 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:26.989018 sshd[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:26.996766 systemd-logind[1453]: New session 9 of user core.
Aug 13 07:10:27.004255 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 07:10:27.210274 sshd[3942]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:27.217694 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit.
Aug 13 07:10:27.221009 systemd[1]: sshd@8-24.199.106.199:22-139.178.89.65:36002.service: Deactivated successfully.
Aug 13 07:10:27.228663 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 07:10:27.231131 systemd-logind[1453]: Removed session 9.
Aug 13 07:10:32.238337 systemd[1]: Started sshd@9-24.199.106.199:22-139.178.89.65:35508.service - OpenSSH per-connection server daemon (139.178.89.65:35508).
Aug 13 07:10:32.301506 sshd[3956]: Accepted publickey for core from 139.178.89.65 port 35508 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:32.305542 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:32.314441 systemd-logind[1453]: New session 10 of user core.
Aug 13 07:10:32.325486 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 07:10:32.501580 sshd[3956]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:32.508264 systemd[1]: sshd@9-24.199.106.199:22-139.178.89.65:35508.service: Deactivated successfully.
Aug 13 07:10:32.513373 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 07:10:32.515228 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit.
Aug 13 07:10:32.516721 systemd-logind[1453]: Removed session 10.
Aug 13 07:10:37.523440 systemd[1]: Started sshd@10-24.199.106.199:22-139.178.89.65:35520.service - OpenSSH per-connection server daemon (139.178.89.65:35520).
Aug 13 07:10:37.574779 sshd[3970]: Accepted publickey for core from 139.178.89.65 port 35520 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:37.575951 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:37.590391 systemd-logind[1453]: New session 11 of user core.
Aug 13 07:10:37.602194 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 07:10:37.790401 sshd[3970]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:37.803135 systemd[1]: sshd@10-24.199.106.199:22-139.178.89.65:35520.service: Deactivated successfully.
Aug 13 07:10:37.807793 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 07:10:37.812256 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit.
Aug 13 07:10:37.822376 systemd[1]: Started sshd@11-24.199.106.199:22-139.178.89.65:35534.service - OpenSSH per-connection server daemon (139.178.89.65:35534).
Aug 13 07:10:37.824015 systemd-logind[1453]: Removed session 11.
Aug 13 07:10:37.880974 sshd[3984]: Accepted publickey for core from 139.178.89.65 port 35534 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:37.884014 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:37.892087 systemd-logind[1453]: New session 12 of user core.
Aug 13 07:10:37.904292 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 07:10:38.168216 sshd[3984]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:38.182117 systemd[1]: sshd@11-24.199.106.199:22-139.178.89.65:35534.service: Deactivated successfully.
Aug 13 07:10:38.189455 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 07:10:38.192762 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit.
Aug 13 07:10:38.209350 systemd[1]: Started sshd@12-24.199.106.199:22-139.178.89.65:35538.service - OpenSSH per-connection server daemon (139.178.89.65:35538).
Aug 13 07:10:38.212630 systemd-logind[1453]: Removed session 12.
Aug 13 07:10:38.275948 sshd[3995]: Accepted publickey for core from 139.178.89.65 port 35538 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:38.277836 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:38.285749 systemd-logind[1453]: New session 13 of user core.
Aug 13 07:10:38.289414 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 07:10:38.469742 sshd[3995]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:38.474634 systemd[1]: sshd@12-24.199.106.199:22-139.178.89.65:35538.service: Deactivated successfully.
Aug 13 07:10:38.478529 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 07:10:38.482305 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit.
Aug 13 07:10:38.484479 systemd-logind[1453]: Removed session 13.
Aug 13 07:10:43.499435 systemd[1]: Started sshd@13-24.199.106.199:22-139.178.89.65:36148.service - OpenSSH per-connection server daemon (139.178.89.65:36148).
Aug 13 07:10:43.552935 sshd[4011]: Accepted publickey for core from 139.178.89.65 port 36148 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:43.557822 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:43.566905 systemd-logind[1453]: New session 14 of user core.
Aug 13 07:10:43.577478 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 07:10:43.745298 sshd[4011]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:43.749503 systemd[1]: sshd@13-24.199.106.199:22-139.178.89.65:36148.service: Deactivated successfully.
Aug 13 07:10:43.753580 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 07:10:43.758987 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit.
Aug 13 07:10:43.761157 systemd-logind[1453]: Removed session 14.
Aug 13 07:10:48.769612 systemd[1]: Started sshd@14-24.199.106.199:22-139.178.89.65:36154.service - OpenSSH per-connection server daemon (139.178.89.65:36154).
Aug 13 07:10:48.839922 sshd[4026]: Accepted publickey for core from 139.178.89.65 port 36154 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:48.843144 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:48.850964 systemd-logind[1453]: New session 15 of user core.
Aug 13 07:10:48.858336 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 07:10:49.034187 sshd[4026]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:49.039735 systemd[1]: sshd@14-24.199.106.199:22-139.178.89.65:36154.service: Deactivated successfully.
Aug 13 07:10:49.044467 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 07:10:49.046757 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit.
Aug 13 07:10:49.048332 systemd-logind[1453]: Removed session 15.
Aug 13 07:10:54.056413 systemd[1]: Started sshd@15-24.199.106.199:22-139.178.89.65:33146.service - OpenSSH per-connection server daemon (139.178.89.65:33146).
Aug 13 07:10:54.110909 sshd[4039]: Accepted publickey for core from 139.178.89.65 port 33146 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:54.112936 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:54.120723 systemd-logind[1453]: New session 16 of user core.
Aug 13 07:10:54.131281 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 07:10:54.324374 sshd[4039]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:54.341988 systemd[1]: sshd@15-24.199.106.199:22-139.178.89.65:33146.service: Deactivated successfully.
Aug 13 07:10:54.347620 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 07:10:54.348975 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit.
Aug 13 07:10:54.357469 systemd[1]: Started sshd@16-24.199.106.199:22-139.178.89.65:33160.service - OpenSSH per-connection server daemon (139.178.89.65:33160).
Aug 13 07:10:54.359538 systemd-logind[1453]: Removed session 16.
Aug 13 07:10:54.419927 sshd[4051]: Accepted publickey for core from 139.178.89.65 port 33160 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:54.423623 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:54.432403 systemd-logind[1453]: New session 17 of user core.
Aug 13 07:10:54.441426 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:10:54.867958 sshd[4051]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:54.883750 systemd[1]: sshd@16-24.199.106.199:22-139.178.89.65:33160.service: Deactivated successfully.
Aug 13 07:10:54.888015 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:10:54.891532 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:10:54.902019 systemd[1]: Started sshd@17-24.199.106.199:22-139.178.89.65:33164.service - OpenSSH per-connection server daemon (139.178.89.65:33164).
Aug 13 07:10:54.904972 systemd-logind[1453]: Removed session 17.
Aug 13 07:10:54.973787 sshd[4062]: Accepted publickey for core from 139.178.89.65 port 33164 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:54.976352 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:54.987198 systemd-logind[1453]: New session 18 of user core.
Aug 13 07:10:54.993432 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:10:55.879317 sshd[4062]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:55.894399 systemd[1]: sshd@17-24.199.106.199:22-139.178.89.65:33164.service: Deactivated successfully.
Aug 13 07:10:55.903481 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:10:55.908608 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:10:55.920739 systemd[1]: Started sshd@18-24.199.106.199:22-139.178.89.65:33176.service - OpenSSH per-connection server daemon (139.178.89.65:33176).
Aug 13 07:10:55.925990 systemd-logind[1453]: Removed session 18.
Aug 13 07:10:56.019992 sshd[4079]: Accepted publickey for core from 139.178.89.65 port 33176 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:56.020732 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:56.031573 systemd-logind[1453]: New session 19 of user core.
Aug 13 07:10:56.036217 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 07:10:56.121910 kubelet[2522]: E0813 07:10:56.120958 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:10:56.483569 sshd[4079]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:56.498510 systemd[1]: sshd@18-24.199.106.199:22-139.178.89.65:33176.service: Deactivated successfully.
Aug 13 07:10:56.505819 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:10:56.508392 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:10:56.522674 systemd[1]: Started sshd@19-24.199.106.199:22-139.178.89.65:33188.service - OpenSSH per-connection server daemon (139.178.89.65:33188).
Aug 13 07:10:56.529955 systemd-logind[1453]: Removed session 19.
Aug 13 07:10:56.598736 sshd[4090]: Accepted publickey for core from 139.178.89.65 port 33188 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:10:56.600106 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:10:56.610149 systemd-logind[1453]: New session 20 of user core.
Aug 13 07:10:56.614305 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 07:10:56.803954 sshd[4090]: pam_unix(sshd:session): session closed for user core
Aug 13 07:10:56.813729 systemd[1]: sshd@19-24.199.106.199:22-139.178.89.65:33188.service: Deactivated successfully.
Aug 13 07:10:56.819210 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 07:10:56.822406 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit.
Aug 13 07:10:56.825073 systemd-logind[1453]: Removed session 20.
Aug 13 07:10:58.120623 kubelet[2522]: E0813 07:10:58.120569 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:01.826592 systemd[1]: Started sshd@20-24.199.106.199:22-139.178.89.65:40644.service - OpenSSH per-connection server daemon (139.178.89.65:40644).
Aug 13 07:11:01.907975 sshd[4105]: Accepted publickey for core from 139.178.89.65 port 40644 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:11:01.910060 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:01.922728 systemd-logind[1453]: New session 21 of user core.
Aug 13 07:11:01.934462 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 07:11:02.112564 sshd[4105]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:02.120747 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit.
Aug 13 07:11:02.121505 systemd[1]: sshd@20-24.199.106.199:22-139.178.89.65:40644.service: Deactivated successfully.
Aug 13 07:11:02.125743 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 07:11:02.127648 systemd-logind[1453]: Removed session 21.
Aug 13 07:11:07.120749 kubelet[2522]: E0813 07:11:07.120003 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:07.140017 systemd[1]: Started sshd@21-24.199.106.199:22-139.178.89.65:40660.service - OpenSSH per-connection server daemon (139.178.89.65:40660).
Aug 13 07:11:07.201933 sshd[4118]: Accepted publickey for core from 139.178.89.65 port 40660 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:11:07.203851 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:07.215432 systemd-logind[1453]: New session 22 of user core.
Aug 13 07:11:07.222226 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 07:11:07.396194 sshd[4118]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:07.402726 systemd[1]: sshd@21-24.199.106.199:22-139.178.89.65:40660.service: Deactivated successfully.
Aug 13 07:11:07.405808 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 07:11:07.409291 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit.
Aug 13 07:11:07.411229 systemd-logind[1453]: Removed session 22.
Aug 13 07:11:12.428488 systemd[1]: Started sshd@22-24.199.106.199:22-139.178.89.65:47516.service - OpenSSH per-connection server daemon (139.178.89.65:47516).
Aug 13 07:11:12.488778 sshd[4131]: Accepted publickey for core from 139.178.89.65 port 47516 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:11:12.491158 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:12.497297 systemd-logind[1453]: New session 23 of user core.
Aug 13 07:11:12.509642 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 07:11:12.702168 sshd[4131]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:12.717104 systemd[1]: sshd@22-24.199.106.199:22-139.178.89.65:47516.service: Deactivated successfully.
Aug 13 07:11:12.720302 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 07:11:12.722979 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit.
Aug 13 07:11:12.730568 systemd[1]: Started sshd@23-24.199.106.199:22-139.178.89.65:47530.service - OpenSSH per-connection server daemon (139.178.89.65:47530).
Aug 13 07:11:12.733253 systemd-logind[1453]: Removed session 23.
Aug 13 07:11:12.821304 sshd[4144]: Accepted publickey for core from 139.178.89.65 port 47530 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:11:12.823513 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:12.831453 systemd-logind[1453]: New session 24 of user core.
Aug 13 07:11:12.848230 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 07:11:14.764248 systemd[1]: run-containerd-runc-k8s.io-a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626-runc.Q2mZYn.mount: Deactivated successfully.
Aug 13 07:11:14.779469 containerd[1471]: time="2025-08-13T07:11:14.779034503Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 07:11:14.803455 containerd[1471]: time="2025-08-13T07:11:14.803320033Z" level=info msg="StopContainer for \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\" with timeout 2 (s)"
Aug 13 07:11:14.803455 containerd[1471]: time="2025-08-13T07:11:14.803395514Z" level=info msg="StopContainer for \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\" with timeout 30 (s)"
Aug 13 07:11:14.805977 containerd[1471]: time="2025-08-13T07:11:14.805691428Z" level=info msg="Stop container \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\" with signal terminated"
Aug 13 07:11:14.805977 containerd[1471]: time="2025-08-13T07:11:14.805886992Z" level=info msg="Stop container \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\" with signal terminated"
Aug 13 07:11:14.827769 systemd-networkd[1375]: lxc_health: Link DOWN
Aug 13 07:11:14.827781 systemd-networkd[1375]: lxc_health: Lost carrier
Aug 13 07:11:14.850191 systemd[1]: cri-containerd-1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68.scope: Deactivated successfully.
Aug 13 07:11:14.862244 systemd[1]: cri-containerd-a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626.scope: Deactivated successfully.
Aug 13 07:11:14.862572 systemd[1]: cri-containerd-a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626.scope: Consumed 11.437s CPU time.
Aug 13 07:11:14.900469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68-rootfs.mount: Deactivated successfully.
Aug 13 07:11:14.911182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626-rootfs.mount: Deactivated successfully.
Aug 13 07:11:14.916668 containerd[1471]: time="2025-08-13T07:11:14.916464464Z" level=info msg="shim disconnected" id=1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68 namespace=k8s.io
Aug 13 07:11:14.917346 containerd[1471]: time="2025-08-13T07:11:14.916838387Z" level=warning msg="cleaning up after shim disconnected" id=1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68 namespace=k8s.io
Aug 13 07:11:14.917346 containerd[1471]: time="2025-08-13T07:11:14.916816177Z" level=info msg="shim disconnected" id=a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626 namespace=k8s.io
Aug 13 07:11:14.917346 containerd[1471]: time="2025-08-13T07:11:14.917232047Z" level=warning msg="cleaning up after shim disconnected" id=a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626 namespace=k8s.io
Aug 13 07:11:14.917346 containerd[1471]: time="2025-08-13T07:11:14.917240093Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:11:14.917648 containerd[1471]: time="2025-08-13T07:11:14.917498032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:11:14.967234 containerd[1471]: time="2025-08-13T07:11:14.967186214Z" level=info msg="StopContainer for \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\" returns successfully"
Aug 13 07:11:14.968776 containerd[1471]: time="2025-08-13T07:11:14.968404989Z" level=info msg="StopContainer for \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\" returns successfully"
Aug 13 07:11:14.969593 containerd[1471]: time="2025-08-13T07:11:14.969531672Z" level=info msg="StopPodSandbox for \"05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612\""
Aug 13 07:11:14.969743 containerd[1471]: time="2025-08-13T07:11:14.969615475Z" level=info msg="Container to stop \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:11:14.969828 containerd[1471]: time="2025-08-13T07:11:14.969799098Z" level=info msg="StopPodSandbox for \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\""
Aug 13 07:11:14.969881 containerd[1471]: time="2025-08-13T07:11:14.969838945Z" level=info msg="Container to stop \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:11:14.969911 containerd[1471]: time="2025-08-13T07:11:14.969882004Z" level=info msg="Container to stop \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:11:14.969943 containerd[1471]: time="2025-08-13T07:11:14.969901658Z" level=info msg="Container to stop \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:11:14.969943 containerd[1471]: time="2025-08-13T07:11:14.969921384Z" level=info msg="Container to stop \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:11:14.970016 containerd[1471]: time="2025-08-13T07:11:14.969941176Z" level=info msg="Container to stop \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:11:14.975595 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612-shm.mount: Deactivated successfully.
Aug 13 07:11:14.975785 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d-shm.mount: Deactivated successfully.
Aug 13 07:11:14.984778 systemd[1]: cri-containerd-52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d.scope: Deactivated successfully.
Aug 13 07:11:14.988329 systemd[1]: cri-containerd-05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612.scope: Deactivated successfully.
Aug 13 07:11:15.034216 containerd[1471]: time="2025-08-13T07:11:15.034004751Z" level=info msg="shim disconnected" id=52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d namespace=k8s.io
Aug 13 07:11:15.034216 containerd[1471]: time="2025-08-13T07:11:15.034061182Z" level=warning msg="cleaning up after shim disconnected" id=52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d namespace=k8s.io
Aug 13 07:11:15.034216 containerd[1471]: time="2025-08-13T07:11:15.034070277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:11:15.038513 containerd[1471]: time="2025-08-13T07:11:15.037226718Z" level=info msg="shim disconnected" id=05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612 namespace=k8s.io
Aug 13 07:11:15.038513 containerd[1471]: time="2025-08-13T07:11:15.037311661Z" level=warning msg="cleaning up after shim disconnected" id=05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612 namespace=k8s.io
Aug 13 07:11:15.038513 containerd[1471]: time="2025-08-13T07:11:15.037325519Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:11:15.072070 containerd[1471]: time="2025-08-13T07:11:15.071541475Z" level=info msg="TearDown network for sandbox \"05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612\" successfully"
Aug 13 07:11:15.072070 containerd[1471]: time="2025-08-13T07:11:15.071594646Z" level=info msg="StopPodSandbox for \"05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612\" returns successfully"
Aug 13 07:11:15.075895 containerd[1471]: time="2025-08-13T07:11:15.075772363Z" level=info msg="TearDown network for sandbox \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" successfully"
Aug 13 07:11:15.075895 containerd[1471]: time="2025-08-13T07:11:15.075866243Z" level=info msg="StopPodSandbox for \"52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d\" returns successfully"
Aug 13 07:11:15.131337 kubelet[2522]: E0813 07:11:15.130928 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:15.238780 kubelet[2522]: I0813 07:11:15.237609 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-xtables-lock\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.238780 kubelet[2522]: I0813 07:11:15.237670 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-host-proc-sys-net\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.238780 kubelet[2522]: I0813 07:11:15.237710 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b15c92a8-2ffd-4846-b24c-50aafaaf1856-hubble-tls\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.238780 kubelet[2522]: I0813 07:11:15.237744 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-config-path\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.238780 kubelet[2522]: I0813 07:11:15.237777 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-host-proc-sys-kernel\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.238780 kubelet[2522]: I0813 07:11:15.237802 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-hostproc\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.239225 kubelet[2522]: I0813 07:11:15.237829 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-run\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.239225 kubelet[2522]: I0813 07:11:15.237870 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cni-path\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.239225 kubelet[2522]: I0813 07:11:15.237900 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jv8n\" (UniqueName: \"kubernetes.io/projected/b15c92a8-2ffd-4846-b24c-50aafaaf1856-kube-api-access-7jv8n\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.239225 kubelet[2522]: I0813 07:11:15.237931 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvgst\" (UniqueName: \"kubernetes.io/projected/b66e254e-2559-441a-94fa-0aeef2eee753-kube-api-access-mvgst\") pod \"b66e254e-2559-441a-94fa-0aeef2eee753\" (UID: \"b66e254e-2559-441a-94fa-0aeef2eee753\") "
Aug 13 07:11:15.239225 kubelet[2522]: I0813 07:11:15.237959 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b66e254e-2559-441a-94fa-0aeef2eee753-cilium-config-path\") pod \"b66e254e-2559-441a-94fa-0aeef2eee753\" (UID: \"b66e254e-2559-441a-94fa-0aeef2eee753\") "
Aug 13 07:11:15.239225 kubelet[2522]: I0813 07:11:15.237987 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-bpf-maps\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.240425 kubelet[2522]: I0813 07:11:15.238016 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-cgroup\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.240425 kubelet[2522]: I0813 07:11:15.238043 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-lib-modules\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.240425 kubelet[2522]: I0813 07:11:15.238075 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b15c92a8-2ffd-4846-b24c-50aafaaf1856-clustermesh-secrets\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.240425 kubelet[2522]: I0813 07:11:15.238101 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-etc-cni-netd\") pod \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\" (UID: \"b15c92a8-2ffd-4846-b24c-50aafaaf1856\") "
Aug 13 07:11:15.258333 kubelet[2522]: I0813 07:11:15.258042 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15c92a8-2ffd-4846-b24c-50aafaaf1856-kube-api-access-7jv8n" (OuterVolumeSpecName: "kube-api-access-7jv8n") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "kube-api-access-7jv8n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 07:11:15.258333 kubelet[2522]: I0813 07:11:15.258159 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:11:15.259009 kubelet[2522]: I0813 07:11:15.255668 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:11:15.262525 kubelet[2522]: I0813 07:11:15.262463 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15c92a8-2ffd-4846-b24c-50aafaaf1856-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 07:11:15.263282 kubelet[2522]: I0813 07:11:15.263245 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b66e254e-2559-441a-94fa-0aeef2eee753-kube-api-access-mvgst" (OuterVolumeSpecName: "kube-api-access-mvgst") pod "b66e254e-2559-441a-94fa-0aeef2eee753" (UID: "b66e254e-2559-441a-94fa-0aeef2eee753"). InnerVolumeSpecName "kube-api-access-mvgst". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 07:11:15.266405 kubelet[2522]: I0813 07:11:15.266325 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 07:11:15.266568 kubelet[2522]: I0813 07:11:15.266455 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:11:15.266568 kubelet[2522]: I0813 07:11:15.266487 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-hostproc" (OuterVolumeSpecName: "hostproc") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 07:11:15.266568 kubelet[2522]: I0813 07:11:15.266511 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:11:15.266568 kubelet[2522]: I0813 07:11:15.266536 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cni-path" (OuterVolumeSpecName: "cni-path") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:11:15.266568 kubelet[2522]: I0813 07:11:15.266562 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:11:15.266771 kubelet[2522]: I0813 07:11:15.266586 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:11:15.266771 kubelet[2522]: I0813 07:11:15.266609 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:11:15.266771 kubelet[2522]: I0813 07:11:15.266632 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 07:11:15.268129 kubelet[2522]: I0813 07:11:15.268087 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b66e254e-2559-441a-94fa-0aeef2eee753-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b66e254e-2559-441a-94fa-0aeef2eee753" (UID: "b66e254e-2559-441a-94fa-0aeef2eee753"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:11:15.271594 kubelet[2522]: I0813 07:11:15.270916 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15c92a8-2ffd-4846-b24c-50aafaaf1856-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b15c92a8-2ffd-4846-b24c-50aafaaf1856" (UID: "b15c92a8-2ffd-4846-b24c-50aafaaf1856"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:11:15.339137 kubelet[2522]: I0813 07:11:15.338904 2522 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-cgroup\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339137 kubelet[2522]: I0813 07:11:15.338961 2522 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-lib-modules\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339137 kubelet[2522]: I0813 07:11:15.338971 2522 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b15c92a8-2ffd-4846-b24c-50aafaaf1856-clustermesh-secrets\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339137 kubelet[2522]: I0813 07:11:15.338989 2522 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-etc-cni-netd\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339137 kubelet[2522]: I0813 07:11:15.338999 2522 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-xtables-lock\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339137 kubelet[2522]: I0813 07:11:15.339008 2522 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-host-proc-sys-net\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339137 kubelet[2522]: I0813 07:11:15.339018 2522 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b15c92a8-2ffd-4846-b24c-50aafaaf1856-hubble-tls\") on node 
\"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339137 kubelet[2522]: I0813 07:11:15.339027 2522 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-config-path\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339527 kubelet[2522]: I0813 07:11:15.339038 2522 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-host-proc-sys-kernel\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339527 kubelet[2522]: I0813 07:11:15.339049 2522 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-hostproc\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339527 kubelet[2522]: I0813 07:11:15.339059 2522 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cilium-run\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339527 kubelet[2522]: I0813 07:11:15.339069 2522 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-cni-path\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339527 kubelet[2522]: I0813 07:11:15.339078 2522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jv8n\" (UniqueName: \"kubernetes.io/projected/b15c92a8-2ffd-4846-b24c-50aafaaf1856-kube-api-access-7jv8n\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339527 kubelet[2522]: I0813 07:11:15.339088 2522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mvgst\" (UniqueName: 
\"kubernetes.io/projected/b66e254e-2559-441a-94fa-0aeef2eee753-kube-api-access-mvgst\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339527 kubelet[2522]: I0813 07:11:15.339098 2522 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b66e254e-2559-441a-94fa-0aeef2eee753-cilium-config-path\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.339527 kubelet[2522]: I0813 07:11:15.339107 2522 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b15c92a8-2ffd-4846-b24c-50aafaaf1856-bpf-maps\") on node \"ci-4081.3.5-0-ae45d59eaf\" DevicePath \"\"" Aug 13 07:11:15.558056 systemd[1]: Removed slice kubepods-besteffort-podb66e254e_2559_441a_94fa_0aeef2eee753.slice - libcontainer container kubepods-besteffort-podb66e254e_2559_441a_94fa_0aeef2eee753.slice. Aug 13 07:11:15.575561 kubelet[2522]: I0813 07:11:15.574910 2522 scope.go:117] "RemoveContainer" containerID="1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68" Aug 13 07:11:15.579312 containerd[1471]: time="2025-08-13T07:11:15.579011893Z" level=info msg="RemoveContainer for \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\"" Aug 13 07:11:15.588992 containerd[1471]: time="2025-08-13T07:11:15.588500301Z" level=info msg="RemoveContainer for \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\" returns successfully" Aug 13 07:11:15.591238 systemd[1]: Removed slice kubepods-burstable-podb15c92a8_2ffd_4846_b24c_50aafaaf1856.slice - libcontainer container kubepods-burstable-podb15c92a8_2ffd_4846_b24c_50aafaaf1856.slice. Aug 13 07:11:15.593011 systemd[1]: kubepods-burstable-podb15c92a8_2ffd_4846_b24c_50aafaaf1856.slice: Consumed 11.562s CPU time. 
Aug 13 07:11:15.595417 kubelet[2522]: I0813 07:11:15.595353 2522 scope.go:117] "RemoveContainer" containerID="1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68"
Aug 13 07:11:15.628219 containerd[1471]: time="2025-08-13T07:11:15.601111724Z" level=error msg="ContainerStatus for \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\": not found"
Aug 13 07:11:15.642381 kubelet[2522]: E0813 07:11:15.642287 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\": not found" containerID="1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68"
Aug 13 07:11:15.652506 kubelet[2522]: I0813 07:11:15.643463 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68"} err="failed to get container status \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f1efe06130607f743d8e315b7ce49eb5fd43a28fecb7ce83163c1b6ecf3ac68\": not found"
Aug 13 07:11:15.652506 kubelet[2522]: I0813 07:11:15.652431 2522 scope.go:117] "RemoveContainer" containerID="a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626"
Aug 13 07:11:15.656613 containerd[1471]: time="2025-08-13T07:11:15.656533864Z" level=info msg="RemoveContainer for \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\""
Aug 13 07:11:15.662749 containerd[1471]: time="2025-08-13T07:11:15.662677561Z" level=info msg="RemoveContainer for \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\" returns successfully"
Aug 13 07:11:15.663605 kubelet[2522]: I0813 07:11:15.663463 2522 scope.go:117] "RemoveContainer" containerID="a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6"
Aug 13 07:11:15.665554 containerd[1471]: time="2025-08-13T07:11:15.665410509Z" level=info msg="RemoveContainer for \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\""
Aug 13 07:11:15.672836 containerd[1471]: time="2025-08-13T07:11:15.672512093Z" level=info msg="RemoveContainer for \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\" returns successfully"
Aug 13 07:11:15.674892 kubelet[2522]: I0813 07:11:15.674099 2522 scope.go:117] "RemoveContainer" containerID="f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348"
Aug 13 07:11:15.679286 containerd[1471]: time="2025-08-13T07:11:15.678695864Z" level=info msg="RemoveContainer for \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\""
Aug 13 07:11:15.688619 containerd[1471]: time="2025-08-13T07:11:15.688506068Z" level=info msg="RemoveContainer for \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\" returns successfully"
Aug 13 07:11:15.688993 kubelet[2522]: I0813 07:11:15.688955 2522 scope.go:117] "RemoveContainer" containerID="811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7"
Aug 13 07:11:15.691041 containerd[1471]: time="2025-08-13T07:11:15.690971797Z" level=info msg="RemoveContainer for \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\""
Aug 13 07:11:15.694878 containerd[1471]: time="2025-08-13T07:11:15.694776135Z" level=info msg="RemoveContainer for \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\" returns successfully"
Aug 13 07:11:15.695840 kubelet[2522]: I0813 07:11:15.695316 2522 scope.go:117] "RemoveContainer" containerID="f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50"
Aug 13 07:11:15.697230 containerd[1471]: time="2025-08-13T07:11:15.697182852Z" level=info msg="RemoveContainer for \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\""
Aug 13 07:11:15.701283 containerd[1471]: time="2025-08-13T07:11:15.701221134Z" level=info msg="RemoveContainer for \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\" returns successfully"
Aug 13 07:11:15.701913 kubelet[2522]: I0813 07:11:15.701832 2522 scope.go:117] "RemoveContainer" containerID="a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626"
Aug 13 07:11:15.702472 containerd[1471]: time="2025-08-13T07:11:15.702391832Z" level=error msg="ContainerStatus for \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\": not found"
Aug 13 07:11:15.702792 kubelet[2522]: E0813 07:11:15.702764 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\": not found" containerID="a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626"
Aug 13 07:11:15.702908 kubelet[2522]: I0813 07:11:15.702810 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626"} err="failed to get container status \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\": rpc error: code = NotFound desc = an error occurred when try to find container \"a59d0ecd90f3669cba122da2ff7de7f4d8cd6fab5094f02b71eeb2e58a248626\": not found"
Aug 13 07:11:15.702908 kubelet[2522]: I0813 07:11:15.702834 2522 scope.go:117] "RemoveContainer" containerID="a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6"
Aug 13 07:11:15.703758 containerd[1471]: time="2025-08-13T07:11:15.703553056Z" level=error msg="ContainerStatus for \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\": not found"
Aug 13 07:11:15.703997 kubelet[2522]: E0813 07:11:15.703961 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\": not found" containerID="a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6"
Aug 13 07:11:15.703997 kubelet[2522]: I0813 07:11:15.703997 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6"} err="failed to get container status \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4f35579795a6d779ae86c7c05ba796de5c63b7764e7bbddf0a3973920f3c0e6\": not found"
Aug 13 07:11:15.704127 kubelet[2522]: I0813 07:11:15.704014 2522 scope.go:117] "RemoveContainer" containerID="f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348"
Aug 13 07:11:15.704746 containerd[1471]: time="2025-08-13T07:11:15.704359704Z" level=error msg="ContainerStatus for \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\": not found"
Aug 13 07:11:15.704843 kubelet[2522]: E0813 07:11:15.704563 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\": not found" containerID="f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348"
Aug 13 07:11:15.704843 kubelet[2522]: I0813 07:11:15.704600 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348"} err="failed to get container status \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\": rpc error: code = NotFound desc = an error occurred when try to find container \"f18a31d33e425c0c8d3d31aaba1f8ab6d60ea7e356926ecf64638387ef30e348\": not found"
Aug 13 07:11:15.704843 kubelet[2522]: I0813 07:11:15.704628 2522 scope.go:117] "RemoveContainer" containerID="811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7"
Aug 13 07:11:15.705122 kubelet[2522]: E0813 07:11:15.704985 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\": not found" containerID="811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7"
Aug 13 07:11:15.705122 kubelet[2522]: I0813 07:11:15.705006 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7"} err="failed to get container status \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\": rpc error: code = NotFound desc = an error occurred when try to find container \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\": not found"
Aug 13 07:11:15.705122 kubelet[2522]: I0813 07:11:15.705035 2522 scope.go:117] "RemoveContainer" containerID="f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50"
Aug 13 07:11:15.705224 containerd[1471]: time="2025-08-13T07:11:15.704878119Z" level=error msg="ContainerStatus for \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"811e19bbe508774c8843206513c5afc4bbcffece10e7299c2870de82feb00da7\": not found"
Aug 13 07:11:15.706887 containerd[1471]: time="2025-08-13T07:11:15.705238826Z" level=error msg="ContainerStatus for \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\": not found"
Aug 13 07:11:15.706975 kubelet[2522]: E0813 07:11:15.705416 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\": not found" containerID="f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50"
Aug 13 07:11:15.706975 kubelet[2522]: I0813 07:11:15.705450 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50"} err="failed to get container status \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5b5b219b29826e89db5188b255b73e1ddcaf4ca75ea51638d5cbfb59a339c50\": not found"
Aug 13 07:11:15.746146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05873fd08a8a2906486ae939f7b36217cadaeb6ae0b2b47d1c22a2c2b5eb4612-rootfs.mount: Deactivated successfully.
Aug 13 07:11:15.746325 systemd[1]: var-lib-kubelet-pods-b66e254e\x2d2559\x2d441a\x2d94fa\x2d0aeef2eee753-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmvgst.mount: Deactivated successfully.
Aug 13 07:11:15.746424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52f3e777305437c61ecae31c7aa9dd6758c8e457f017cc750ae527bce238eb7d-rootfs.mount: Deactivated successfully.
Aug 13 07:11:15.746551 systemd[1]: var-lib-kubelet-pods-b15c92a8\x2d2ffd\x2d4846\x2db24c\x2d50aafaaf1856-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7jv8n.mount: Deactivated successfully.
Aug 13 07:11:15.746650 systemd[1]: var-lib-kubelet-pods-b15c92a8\x2d2ffd\x2d4846\x2db24c\x2d50aafaaf1856-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 07:11:15.746740 systemd[1]: var-lib-kubelet-pods-b15c92a8\x2d2ffd\x2d4846\x2db24c\x2d50aafaaf1856-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 07:11:16.653888 sshd[4144]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:16.668008 systemd[1]: sshd@23-24.199.106.199:22-139.178.89.65:47530.service: Deactivated successfully.
Aug 13 07:11:16.672116 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 07:11:16.672770 systemd[1]: session-24.scope: Consumed 1.129s CPU time.
Aug 13 07:11:16.675646 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit.
Aug 13 07:11:16.685491 systemd[1]: Started sshd@24-24.199.106.199:22-139.178.89.65:47540.service - OpenSSH per-connection server daemon (139.178.89.65:47540).
Aug 13 07:11:16.688348 systemd-logind[1453]: Removed session 24.
Aug 13 07:11:16.743900 sshd[4303]: Accepted publickey for core from 139.178.89.65 port 47540 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:11:16.746313 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:16.755008 systemd-logind[1453]: New session 25 of user core.
Aug 13 07:11:16.765291 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 07:11:17.130898 kubelet[2522]: I0813 07:11:17.130077 2522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15c92a8-2ffd-4846-b24c-50aafaaf1856" path="/var/lib/kubelet/pods/b15c92a8-2ffd-4846-b24c-50aafaaf1856/volumes"
Aug 13 07:11:17.132445 kubelet[2522]: I0813 07:11:17.132399 2522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b66e254e-2559-441a-94fa-0aeef2eee753" path="/var/lib/kubelet/pods/b66e254e-2559-441a-94fa-0aeef2eee753/volumes"
Aug 13 07:11:17.606950 sshd[4303]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:17.623879 systemd[1]: sshd@24-24.199.106.199:22-139.178.89.65:47540.service: Deactivated successfully.
Aug 13 07:11:17.631673 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 07:11:17.638802 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit.
Aug 13 07:11:17.651182 systemd[1]: Started sshd@25-24.199.106.199:22-139.178.89.65:47556.service - OpenSSH per-connection server daemon (139.178.89.65:47556).
Aug 13 07:11:17.659193 systemd-logind[1453]: Removed session 25.
Aug 13 07:11:17.720135 systemd[1]: Created slice kubepods-burstable-pod1367a405_5feb_4a96_ab05_4ec045f7df6a.slice - libcontainer container kubepods-burstable-pod1367a405_5feb_4a96_ab05_4ec045f7df6a.slice.
Aug 13 07:11:17.751911 sshd[4314]: Accepted publickey for core from 139.178.89.65 port 47556 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:11:17.752810 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:17.763970 kubelet[2522]: I0813 07:11:17.763291 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1367a405-5feb-4a96-ab05-4ec045f7df6a-cilium-run\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.763970 kubelet[2522]: I0813 07:11:17.763379 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1367a405-5feb-4a96-ab05-4ec045f7df6a-hostproc\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.763970 kubelet[2522]: I0813 07:11:17.763406 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1367a405-5feb-4a96-ab05-4ec045f7df6a-cni-path\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.763970 kubelet[2522]: I0813 07:11:17.763456 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1367a405-5feb-4a96-ab05-4ec045f7df6a-cilium-cgroup\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.763970 kubelet[2522]: I0813 07:11:17.763481 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1367a405-5feb-4a96-ab05-4ec045f7df6a-lib-modules\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.763970 kubelet[2522]: I0813 07:11:17.763834 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1367a405-5feb-4a96-ab05-4ec045f7df6a-hubble-tls\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.767274 kubelet[2522]: I0813 07:11:17.764961 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgt85\" (UniqueName: \"kubernetes.io/projected/1367a405-5feb-4a96-ab05-4ec045f7df6a-kube-api-access-hgt85\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.767274 kubelet[2522]: I0813 07:11:17.765335 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1367a405-5feb-4a96-ab05-4ec045f7df6a-bpf-maps\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.767274 kubelet[2522]: I0813 07:11:17.765401 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1367a405-5feb-4a96-ab05-4ec045f7df6a-cilium-ipsec-secrets\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.767274 kubelet[2522]: I0813 07:11:17.765436 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1367a405-5feb-4a96-ab05-4ec045f7df6a-clustermesh-secrets\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.768445 kubelet[2522]: I0813 07:11:17.767964 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1367a405-5feb-4a96-ab05-4ec045f7df6a-host-proc-sys-net\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.768445 kubelet[2522]: I0813 07:11:17.768046 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1367a405-5feb-4a96-ab05-4ec045f7df6a-host-proc-sys-kernel\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.768445 kubelet[2522]: I0813 07:11:17.768107 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1367a405-5feb-4a96-ab05-4ec045f7df6a-etc-cni-netd\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.768445 kubelet[2522]: I0813 07:11:17.768176 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1367a405-5feb-4a96-ab05-4ec045f7df6a-xtables-lock\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.768445 kubelet[2522]: I0813 07:11:17.768212 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1367a405-5feb-4a96-ab05-4ec045f7df6a-cilium-config-path\") pod \"cilium-b5qq2\" (UID: \"1367a405-5feb-4a96-ab05-4ec045f7df6a\") " pod="kube-system/cilium-b5qq2"
Aug 13 07:11:17.772688 systemd-logind[1453]: New session 26 of user core.
Aug 13 07:11:17.778444 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 07:11:17.852731 sshd[4314]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:17.865804 systemd[1]: sshd@25-24.199.106.199:22-139.178.89.65:47556.service: Deactivated successfully.
Aug 13 07:11:17.869791 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 07:11:17.876605 systemd-logind[1453]: Session 26 logged out. Waiting for processes to exit.
Aug 13 07:11:17.890388 systemd[1]: Started sshd@26-24.199.106.199:22-139.178.89.65:47564.service - OpenSSH per-connection server daemon (139.178.89.65:47564).
Aug 13 07:11:17.937959 systemd-logind[1453]: Removed session 26.
Aug 13 07:11:17.991951 sshd[4323]: Accepted publickey for core from 139.178.89.65 port 47564 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:11:17.996126 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:11:18.006984 systemd-logind[1453]: New session 27 of user core.
Aug 13 07:11:18.012208 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 07:11:18.031523 kubelet[2522]: E0813 07:11:18.031452 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:18.032563 containerd[1471]: time="2025-08-13T07:11:18.032126378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b5qq2,Uid:1367a405-5feb-4a96-ab05-4ec045f7df6a,Namespace:kube-system,Attempt:0,}"
Aug 13 07:11:18.068904 containerd[1471]: time="2025-08-13T07:11:18.064816503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:11:18.071015 containerd[1471]: time="2025-08-13T07:11:18.070511282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:11:18.071015 containerd[1471]: time="2025-08-13T07:11:18.070559677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:11:18.072467 containerd[1471]: time="2025-08-13T07:11:18.070925310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:11:18.108139 systemd[1]: Started cri-containerd-0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783.scope - libcontainer container 0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783.
Aug 13 07:11:18.181947 containerd[1471]: time="2025-08-13T07:11:18.181363431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b5qq2,Uid:1367a405-5feb-4a96-ab05-4ec045f7df6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\""
Aug 13 07:11:18.185164 kubelet[2522]: E0813 07:11:18.185000 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:18.211998 containerd[1471]: time="2025-08-13T07:11:18.210849887Z" level=info msg="CreateContainer within sandbox \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 07:11:18.235584 containerd[1471]: time="2025-08-13T07:11:18.235479409Z" level=info msg="CreateContainer within sandbox \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cc29464c2f5f0f981d0ae9bc069b9d0ae41de02625e18e99b42a6c610dc3710e\""
Aug 13 07:11:18.237638 containerd[1471]: time="2025-08-13T07:11:18.236739875Z" level=info msg="StartContainer for \"cc29464c2f5f0f981d0ae9bc069b9d0ae41de02625e18e99b42a6c610dc3710e\""
Aug 13 07:11:18.282610 systemd[1]: Started cri-containerd-cc29464c2f5f0f981d0ae9bc069b9d0ae41de02625e18e99b42a6c610dc3710e.scope - libcontainer container cc29464c2f5f0f981d0ae9bc069b9d0ae41de02625e18e99b42a6c610dc3710e.
Aug 13 07:11:18.349024 containerd[1471]: time="2025-08-13T07:11:18.348529104Z" level=info msg="StartContainer for \"cc29464c2f5f0f981d0ae9bc069b9d0ae41de02625e18e99b42a6c610dc3710e\" returns successfully"
Aug 13 07:11:18.364078 systemd[1]: cri-containerd-cc29464c2f5f0f981d0ae9bc069b9d0ae41de02625e18e99b42a6c610dc3710e.scope: Deactivated successfully.
Aug 13 07:11:18.415596 containerd[1471]: time="2025-08-13T07:11:18.415504174Z" level=info msg="shim disconnected" id=cc29464c2f5f0f981d0ae9bc069b9d0ae41de02625e18e99b42a6c610dc3710e namespace=k8s.io
Aug 13 07:11:18.416278 containerd[1471]: time="2025-08-13T07:11:18.416009167Z" level=warning msg="cleaning up after shim disconnected" id=cc29464c2f5f0f981d0ae9bc069b9d0ae41de02625e18e99b42a6c610dc3710e namespace=k8s.io
Aug 13 07:11:18.416278 containerd[1471]: time="2025-08-13T07:11:18.416078475Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:11:18.600019 kubelet[2522]: E0813 07:11:18.599696 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:18.609900 containerd[1471]: time="2025-08-13T07:11:18.608884388Z" level=info msg="CreateContainer within sandbox \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 07:11:18.626802 containerd[1471]: time="2025-08-13T07:11:18.626730960Z" level=info msg="CreateContainer within sandbox \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"26721fe825df9855792b04a77c34db9d455235b7262ba11ed04dd51fa0cc62b9\""
Aug 13 07:11:18.629221 containerd[1471]: time="2025-08-13T07:11:18.629155525Z" level=info msg="StartContainer for \"26721fe825df9855792b04a77c34db9d455235b7262ba11ed04dd51fa0cc62b9\""
Aug 13 07:11:18.675184 systemd[1]: Started cri-containerd-26721fe825df9855792b04a77c34db9d455235b7262ba11ed04dd51fa0cc62b9.scope - libcontainer container 26721fe825df9855792b04a77c34db9d455235b7262ba11ed04dd51fa0cc62b9.
Aug 13 07:11:18.721963 containerd[1471]: time="2025-08-13T07:11:18.721398338Z" level=info msg="StartContainer for \"26721fe825df9855792b04a77c34db9d455235b7262ba11ed04dd51fa0cc62b9\" returns successfully"
Aug 13 07:11:18.733837 systemd[1]: cri-containerd-26721fe825df9855792b04a77c34db9d455235b7262ba11ed04dd51fa0cc62b9.scope: Deactivated successfully.
Aug 13 07:11:18.768384 containerd[1471]: time="2025-08-13T07:11:18.768313607Z" level=info msg="shim disconnected" id=26721fe825df9855792b04a77c34db9d455235b7262ba11ed04dd51fa0cc62b9 namespace=k8s.io
Aug 13 07:11:18.768384 containerd[1471]: time="2025-08-13T07:11:18.768374178Z" level=warning msg="cleaning up after shim disconnected" id=26721fe825df9855792b04a77c34db9d455235b7262ba11ed04dd51fa0cc62b9 namespace=k8s.io
Aug 13 07:11:18.768384 containerd[1471]: time="2025-08-13T07:11:18.768384461Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:11:19.266594 kubelet[2522]: E0813 07:11:19.266466 2522 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 07:11:19.607000 kubelet[2522]: E0813 07:11:19.605530 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:19.619320 containerd[1471]: time="2025-08-13T07:11:19.618949200Z" level=info msg="CreateContainer within sandbox \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 07:11:19.649938 containerd[1471]: time="2025-08-13T07:11:19.649041947Z" level=info msg="CreateContainer within sandbox \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d5c8f4d3546ee4afd3aada762e1cd89f9bab4c8d8137e9dc44a8fea85d21c6a7\""
Aug 13 07:11:19.650428 containerd[1471]: time="2025-08-13T07:11:19.650388104Z" level=info msg="StartContainer for \"d5c8f4d3546ee4afd3aada762e1cd89f9bab4c8d8137e9dc44a8fea85d21c6a7\""
Aug 13 07:11:19.716193 systemd[1]: Started cri-containerd-d5c8f4d3546ee4afd3aada762e1cd89f9bab4c8d8137e9dc44a8fea85d21c6a7.scope - libcontainer container d5c8f4d3546ee4afd3aada762e1cd89f9bab4c8d8137e9dc44a8fea85d21c6a7.
Aug 13 07:11:19.762780 containerd[1471]: time="2025-08-13T07:11:19.762702559Z" level=info msg="StartContainer for \"d5c8f4d3546ee4afd3aada762e1cd89f9bab4c8d8137e9dc44a8fea85d21c6a7\" returns successfully"
Aug 13 07:11:19.771642 systemd[1]: cri-containerd-d5c8f4d3546ee4afd3aada762e1cd89f9bab4c8d8137e9dc44a8fea85d21c6a7.scope: Deactivated successfully.
Aug 13 07:11:19.850986 containerd[1471]: time="2025-08-13T07:11:19.850900858Z" level=info msg="shim disconnected" id=d5c8f4d3546ee4afd3aada762e1cd89f9bab4c8d8137e9dc44a8fea85d21c6a7 namespace=k8s.io
Aug 13 07:11:19.850986 containerd[1471]: time="2025-08-13T07:11:19.850976311Z" level=warning msg="cleaning up after shim disconnected" id=d5c8f4d3546ee4afd3aada762e1cd89f9bab4c8d8137e9dc44a8fea85d21c6a7 namespace=k8s.io
Aug 13 07:11:19.850986 containerd[1471]: time="2025-08-13T07:11:19.850990291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:11:19.894167 systemd[1]: run-containerd-runc-k8s.io-d5c8f4d3546ee4afd3aada762e1cd89f9bab4c8d8137e9dc44a8fea85d21c6a7-runc.kxYKtc.mount: Deactivated successfully.
Aug 13 07:11:19.894342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5c8f4d3546ee4afd3aada762e1cd89f9bab4c8d8137e9dc44a8fea85d21c6a7-rootfs.mount: Deactivated successfully.
Aug 13 07:11:20.616408 kubelet[2522]: E0813 07:11:20.615491 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:20.626335 containerd[1471]: time="2025-08-13T07:11:20.626122255Z" level=info msg="CreateContainer within sandbox \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 07:11:20.652683 containerd[1471]: time="2025-08-13T07:11:20.652630739Z" level=info msg="CreateContainer within sandbox \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c909cd198bf538a6e62c92e9dc88dfbcbc0f89bb4c95c6949038f4592f788673\""
Aug 13 07:11:20.654228 containerd[1471]: time="2025-08-13T07:11:20.654176078Z" level=info msg="StartContainer for \"c909cd198bf538a6e62c92e9dc88dfbcbc0f89bb4c95c6949038f4592f788673\""
Aug 13 07:11:20.725321 systemd[1]: Started cri-containerd-c909cd198bf538a6e62c92e9dc88dfbcbc0f89bb4c95c6949038f4592f788673.scope - libcontainer container c909cd198bf538a6e62c92e9dc88dfbcbc0f89bb4c95c6949038f4592f788673.
Aug 13 07:11:20.821318 containerd[1471]: time="2025-08-13T07:11:20.821237927Z" level=info msg="StartContainer for \"c909cd198bf538a6e62c92e9dc88dfbcbc0f89bb4c95c6949038f4592f788673\" returns successfully"
Aug 13 07:11:20.826648 systemd[1]: cri-containerd-c909cd198bf538a6e62c92e9dc88dfbcbc0f89bb4c95c6949038f4592f788673.scope: Deactivated successfully.
Aug 13 07:11:20.882905 containerd[1471]: time="2025-08-13T07:11:20.882285112Z" level=info msg="shim disconnected" id=c909cd198bf538a6e62c92e9dc88dfbcbc0f89bb4c95c6949038f4592f788673 namespace=k8s.io
Aug 13 07:11:20.882905 containerd[1471]: time="2025-08-13T07:11:20.882416667Z" level=warning msg="cleaning up after shim disconnected" id=c909cd198bf538a6e62c92e9dc88dfbcbc0f89bb4c95c6949038f4592f788673 namespace=k8s.io
Aug 13 07:11:20.882905 containerd[1471]: time="2025-08-13T07:11:20.882435624Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:11:20.896234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c909cd198bf538a6e62c92e9dc88dfbcbc0f89bb4c95c6949038f4592f788673-rootfs.mount: Deactivated successfully.
Aug 13 07:11:21.621638 kubelet[2522]: E0813 07:11:21.621568 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:21.632587 containerd[1471]: time="2025-08-13T07:11:21.632346545Z" level=info msg="CreateContainer within sandbox \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 07:11:21.658233 containerd[1471]: time="2025-08-13T07:11:21.658154799Z" level=info msg="CreateContainer within sandbox \"0804cd396c511e69d73b39dab5884df9d36080916e9e02cca44480e911c0e783\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"28ad8ecdae2d3baa405753da2cfebc23acb9a4140355fe44a57879bb03f1b185\""
Aug 13 07:11:21.660021 containerd[1471]: time="2025-08-13T07:11:21.659385693Z" level=info msg="StartContainer for \"28ad8ecdae2d3baa405753da2cfebc23acb9a4140355fe44a57879bb03f1b185\""
Aug 13 07:11:21.713351 systemd[1]: Started cri-containerd-28ad8ecdae2d3baa405753da2cfebc23acb9a4140355fe44a57879bb03f1b185.scope - libcontainer container 28ad8ecdae2d3baa405753da2cfebc23acb9a4140355fe44a57879bb03f1b185.
Aug 13 07:11:21.761510 containerd[1471]: time="2025-08-13T07:11:21.761420054Z" level=info msg="StartContainer for \"28ad8ecdae2d3baa405753da2cfebc23acb9a4140355fe44a57879bb03f1b185\" returns successfully"
Aug 13 07:11:22.200331 kubelet[2522]: I0813 07:11:22.200127 2522 setters.go:618] "Node became not ready" node="ci-4081.3.5-0-ae45d59eaf" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T07:11:22Z","lastTransitionTime":"2025-08-13T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 07:11:22.305025 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 07:11:22.629050 kubelet[2522]: E0813 07:11:22.628653 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:22.656221 kubelet[2522]: I0813 07:11:22.656143 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b5qq2" podStartSLOduration=5.656120634 podStartE2EDuration="5.656120634s" podCreationTimestamp="2025-08-13 07:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:11:22.653638324 +0000 UTC m=+103.812827969" watchObservedRunningTime="2025-08-13 07:11:22.656120634 +0000 UTC m=+103.815310227"
Aug 13 07:11:23.121529 kubelet[2522]: E0813 07:11:23.120175 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-96mqs" podUID="fe5db471-f2c2-404f-a65a-5a1914f5136e"
Aug 13 07:11:24.034893 kubelet[2522]: E0813 07:11:24.034734 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:25.121420 kubelet[2522]: E0813 07:11:25.120093 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:26.204527 systemd-networkd[1375]: lxc_health: Link UP
Aug 13 07:11:26.209365 systemd-networkd[1375]: lxc_health: Gained carrier
Aug 13 07:11:27.641938 systemd-networkd[1375]: lxc_health: Gained IPv6LL
Aug 13 07:11:28.035562 kubelet[2522]: E0813 07:11:28.035414 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:28.652891 kubelet[2522]: E0813 07:11:28.652836 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:29.657723 kubelet[2522]: E0813 07:11:29.657100 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:11:31.611240 systemd[1]: run-containerd-runc-k8s.io-28ad8ecdae2d3baa405753da2cfebc23acb9a4140355fe44a57879bb03f1b185-runc.I0DqEa.mount: Deactivated successfully.
Aug 13 07:11:31.719887 sshd[4323]: pam_unix(sshd:session): session closed for user core
Aug 13 07:11:31.728073 systemd[1]: sshd@26-24.199.106.199:22-139.178.89.65:47564.service: Deactivated successfully.
Aug 13 07:11:31.731256 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 07:11:31.732676 systemd-logind[1453]: Session 27 logged out. Waiting for processes to exit.
Aug 13 07:11:31.735340 systemd-logind[1453]: Removed session 27.
Aug 13 07:11:34.121030 kubelet[2522]: E0813 07:11:34.120480 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"