Aug 13 07:12:02.940012 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:12:02.940054 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:12:02.940074 kernel: BIOS-provided physical RAM map:
Aug 13 07:12:02.940085 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 07:12:02.940095 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 07:12:02.940105 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 07:12:02.940117 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Aug 13 07:12:02.940127 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Aug 13 07:12:02.940137 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 07:12:02.940151 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 07:12:02.940162 kernel: NX (Execute Disable) protection: active
Aug 13 07:12:02.940171 kernel: APIC: Static calls initialized
Aug 13 07:12:02.940189 kernel: SMBIOS 2.8 present.
Aug 13 07:12:02.940200 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Aug 13 07:12:02.940214 kernel: Hypervisor detected: KVM
Aug 13 07:12:02.940230 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:12:02.940246 kernel: kvm-clock: using sched offset of 3144918487 cycles
Aug 13 07:12:02.940260 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:12:02.940272 kernel: tsc: Detected 2494.140 MHz processor
Aug 13 07:12:02.940284 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:12:02.940297 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:12:02.940305 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Aug 13 07:12:02.940313 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 07:12:02.940321 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:12:02.940333 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:12:02.940341 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Aug 13 07:12:02.940349 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:12:02.940357 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:12:02.940365 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:12:02.940373 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 07:12:02.940380 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:12:02.940388 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:12:02.940396 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:12:02.940407 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:12:02.940415 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Aug 13 07:12:02.940423 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Aug 13 07:12:02.940864 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 07:12:02.940877 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Aug 13 07:12:02.940885 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Aug 13 07:12:02.940893 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Aug 13 07:12:02.940910 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Aug 13 07:12:02.940918 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:12:02.940936 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:12:02.940945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 07:12:02.940954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 07:12:02.940967 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Aug 13 07:12:02.940976 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Aug 13 07:12:02.940988 kernel: Zone ranges:
Aug 13 07:12:02.940996 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:12:02.941004 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Aug 13 07:12:02.941012 kernel: Normal empty
Aug 13 07:12:02.941021 kernel: Movable zone start for each node
Aug 13 07:12:02.941029 kernel: Early memory node ranges
Aug 13 07:12:02.941038 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 07:12:02.941046 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Aug 13 07:12:02.941054 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Aug 13 07:12:02.941065 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:12:02.941077 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 07:12:02.941093 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Aug 13 07:12:02.941105 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:12:02.941120 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:12:02.941131 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:12:02.941144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:12:02.941157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:12:02.941170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:12:02.941182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:12:02.941191 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:12:02.941199 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:12:02.941208 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:12:02.941216 kernel: TSC deadline timer available
Aug 13 07:12:02.941224 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:12:02.941233 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:12:02.941241 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Aug 13 07:12:02.941252 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:12:02.941261 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:12:02.941272 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:12:02.941281 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:12:02.941290 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:12:02.941298 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:12:02.941306 kernel: kvm-guest: PV spinlocks disabled, no host support
Aug 13 07:12:02.941316 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:12:02.941325 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:12:02.941333 kernel: random: crng init done
Aug 13 07:12:02.941343 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:12:02.941352 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:12:02.941360 kernel: Fallback order for Node 0: 0
Aug 13 07:12:02.941368 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Aug 13 07:12:02.941377 kernel: Policy zone: DMA32
Aug 13 07:12:02.941385 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:12:02.941394 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 125148K reserved, 0K cma-reserved)
Aug 13 07:12:02.941402 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:12:02.941414 kernel: Kernel/User page tables isolation: enabled
Aug 13 07:12:02.941422 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:12:02.941430 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:12:02.941439 kernel: Dynamic Preempt: voluntary
Aug 13 07:12:02.941447 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:12:02.941456 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:12:02.941465 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:12:02.941473 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:12:02.941482 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:12:02.941490 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:12:02.941502 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:12:02.941510 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:12:02.941524 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 07:12:02.941536 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:12:02.941551 kernel: Console: colour VGA+ 80x25
Aug 13 07:12:02.941563 kernel: printk: console [tty0] enabled
Aug 13 07:12:02.941576 kernel: printk: console [ttyS0] enabled
Aug 13 07:12:02.941588 kernel: ACPI: Core revision 20230628
Aug 13 07:12:02.941600 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 07:12:02.941616 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:12:02.941629 kernel: x2apic enabled
Aug 13 07:12:02.941642 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:12:02.941651 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:12:02.941660 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Aug 13 07:12:02.941668 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Aug 13 07:12:02.941677 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 13 07:12:02.941685 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 13 07:12:02.941705 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:12:02.941714 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:12:02.941723 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:12:02.941734 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 07:12:02.941743 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:12:02.941752 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:12:02.941761 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 07:12:02.941769 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:12:02.941778 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:12:02.941793 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:12:02.941802 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:12:02.941811 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:12:02.941819 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:12:02.941828 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 07:12:02.941837 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:12:02.941846 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:12:02.941854 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:12:02.941872 kernel: landlock: Up and running.
Aug 13 07:12:02.941885 kernel: SELinux: Initializing.
Aug 13 07:12:02.941898 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 07:12:02.941910 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 07:12:02.941923 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Aug 13 07:12:02.941949 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:12:02.941963 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:12:02.941977 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:12:02.941991 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Aug 13 07:12:02.942005 kernel: signal: max sigframe size: 1776
Aug 13 07:12:02.942014 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:12:02.942026 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:12:02.942046 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:12:02.942064 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:12:02.942085 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:12:02.942102 kernel: .... node #0, CPUs: #1
Aug 13 07:12:02.942110 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:12:02.942123 kernel: smpboot: Max logical packages: 1
Aug 13 07:12:02.942141 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Aug 13 07:12:02.942153 kernel: devtmpfs: initialized
Aug 13 07:12:02.942167 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:12:02.942180 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:12:02.942194 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:12:02.942207 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:12:02.942220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:12:02.942234 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:12:02.942248 kernel: audit: type=2000 audit(1755069121.711:1): state=initialized audit_enabled=0 res=1
Aug 13 07:12:02.942266 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:12:02.942281 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:12:02.942305 kernel: cpuidle: using governor menu
Aug 13 07:12:02.942319 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:12:02.942340 kernel: dca service started, version 1.12.1
Aug 13 07:12:02.942360 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:12:02.942375 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:12:02.942388 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:12:02.942402 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:12:02.942422 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:12:02.942435 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:12:02.942448 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:12:02.942461 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:12:02.942475 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:12:02.942488 kernel: ACPI: Interpreter enabled
Aug 13 07:12:02.942502 kernel: ACPI: PM: (supports S0 S5)
Aug 13 07:12:02.942517 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:12:02.942529 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:12:02.942548 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:12:02.942561 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 13 07:12:02.942575 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:12:02.942847 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:12:02.945572 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 07:12:02.945717 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 07:12:02.945731 kernel: acpiphp: Slot [3] registered
Aug 13 07:12:02.945748 kernel: acpiphp: Slot [4] registered
Aug 13 07:12:02.945761 kernel: acpiphp: Slot [5] registered
Aug 13 07:12:02.945774 kernel: acpiphp: Slot [6] registered
Aug 13 07:12:02.945783 kernel: acpiphp: Slot [7] registered
Aug 13 07:12:02.945792 kernel: acpiphp: Slot [8] registered
Aug 13 07:12:02.945801 kernel: acpiphp: Slot [9] registered
Aug 13 07:12:02.945810 kernel: acpiphp: Slot [10] registered
Aug 13 07:12:02.945819 kernel: acpiphp: Slot [11] registered
Aug 13 07:12:02.945828 kernel: acpiphp: Slot [12] registered
Aug 13 07:12:02.945840 kernel: acpiphp: Slot [13] registered
Aug 13 07:12:02.945849 kernel: acpiphp: Slot [14] registered
Aug 13 07:12:02.945857 kernel: acpiphp: Slot [15] registered
Aug 13 07:12:02.945866 kernel: acpiphp: Slot [16] registered
Aug 13 07:12:02.945875 kernel: acpiphp: Slot [17] registered
Aug 13 07:12:02.945884 kernel: acpiphp: Slot [18] registered
Aug 13 07:12:02.945893 kernel: acpiphp: Slot [19] registered
Aug 13 07:12:02.945904 kernel: acpiphp: Slot [20] registered
Aug 13 07:12:02.945916 kernel: acpiphp: Slot [21] registered
Aug 13 07:12:02.945925 kernel: acpiphp: Slot [22] registered
Aug 13 07:12:02.945954 kernel: acpiphp: Slot [23] registered
Aug 13 07:12:02.945963 kernel: acpiphp: Slot [24] registered
Aug 13 07:12:02.945976 kernel: acpiphp: Slot [25] registered
Aug 13 07:12:02.945989 kernel: acpiphp: Slot [26] registered
Aug 13 07:12:02.946001 kernel: acpiphp: Slot [27] registered
Aug 13 07:12:02.946013 kernel: acpiphp: Slot [28] registered
Aug 13 07:12:02.946025 kernel: acpiphp: Slot [29] registered
Aug 13 07:12:02.946039 kernel: acpiphp: Slot [30] registered
Aug 13 07:12:02.946051 kernel: acpiphp: Slot [31] registered
Aug 13 07:12:02.946064 kernel: PCI host bridge to bus 0000:00
Aug 13 07:12:02.946200 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:12:02.946347 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:12:02.946465 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:12:02.946555 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 13 07:12:02.946641 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug 13 07:12:02.946725 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:12:02.946883 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 07:12:02.950119 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 13 07:12:02.950265 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Aug 13 07:12:02.950419 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Aug 13 07:12:02.950520 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Aug 13 07:12:02.950616 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Aug 13 07:12:02.950711 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Aug 13 07:12:02.950813 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Aug 13 07:12:02.950923 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Aug 13 07:12:02.952139 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Aug 13 07:12:02.952275 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 13 07:12:02.952375 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Aug 13 07:12:02.952616 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Aug 13 07:12:02.952743 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Aug 13 07:12:02.952906 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Aug 13 07:12:02.954139 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Aug 13 07:12:02.954254 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Aug 13 07:12:02.954377 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Aug 13 07:12:02.954498 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:12:02.954628 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:12:02.954734 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Aug 13 07:12:02.954830 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Aug 13 07:12:02.955998 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Aug 13 07:12:02.956225 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:12:02.956437 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Aug 13 07:12:02.956561 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Aug 13 07:12:02.956668 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Aug 13 07:12:02.956835 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Aug 13 07:12:02.958047 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Aug 13 07:12:02.958162 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Aug 13 07:12:02.958285 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Aug 13 07:12:02.958454 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:12:02.958557 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Aug 13 07:12:02.958661 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Aug 13 07:12:02.958755 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Aug 13 07:12:02.958859 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:12:02.960045 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Aug 13 07:12:02.960167 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Aug 13 07:12:02.960289 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Aug 13 07:12:02.960437 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Aug 13 07:12:02.960552 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Aug 13 07:12:02.960649 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Aug 13 07:12:02.960661 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:12:02.960671 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:12:02.960680 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:12:02.960689 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:12:02.960698 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 07:12:02.960711 kernel: iommu: Default domain type: Translated
Aug 13 07:12:02.960720 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:12:02.960729 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:12:02.960738 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:12:02.960747 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 07:12:02.960756 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Aug 13 07:12:02.960858 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 13 07:12:02.962054 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 13 07:12:02.962198 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:12:02.962220 kernel: vgaarb: loaded
Aug 13 07:12:02.962229 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 07:12:02.962239 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 07:12:02.962248 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:12:02.962257 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:12:02.962267 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:12:02.962276 kernel: pnp: PnP ACPI init
Aug 13 07:12:02.962285 kernel: pnp: PnP ACPI: found 4 devices
Aug 13 07:12:02.962308 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:12:02.962320 kernel: NET: Registered PF_INET protocol family
Aug 13 07:12:02.962329 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:12:02.962339 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 07:12:02.962347 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:12:02.962356 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:12:02.962365 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 13 07:12:02.962374 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 07:12:02.962383 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:12:02.962392 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:12:02.962404 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:12:02.962413 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:12:02.962514 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:12:02.962602 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:12:02.962687 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:12:02.962775 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 13 07:12:02.962861 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug 13 07:12:02.964103 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 13 07:12:02.964227 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 07:12:02.964241 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 13 07:12:02.964342 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 29884 usecs
Aug 13 07:12:02.964355 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:12:02.964364 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:12:02.964374 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Aug 13 07:12:02.964383 kernel: Initialise system trusted keyrings
Aug 13 07:12:02.964392 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 13 07:12:02.964405 kernel: Key type asymmetric registered
Aug 13 07:12:02.964414 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:12:02.964422 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:12:02.964432 kernel: io scheduler mq-deadline registered
Aug 13 07:12:02.964441 kernel: io scheduler kyber registered
Aug 13 07:12:02.964449 kernel: io scheduler bfq registered
Aug 13 07:12:02.964458 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:12:02.964467 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Aug 13 07:12:02.964476 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 07:12:02.964485 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 07:12:02.964497 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:12:02.964506 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:12:02.964515 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:12:02.964524 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:12:02.964533 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:12:02.964645 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 07:12:02.964658 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:12:02.964752 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 07:12:02.964874 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T07:12:02 UTC (1755069122)
Aug 13 07:12:02.966043 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Aug 13 07:12:02.966062 kernel: intel_pstate: CPU model not supported
Aug 13 07:12:02.966072 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:12:02.966082 kernel: Segment Routing with IPv6
Aug 13 07:12:02.966091 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:12:02.966100 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:12:02.966109 kernel: Key type dns_resolver registered
Aug 13 07:12:02.966124 kernel: IPI shorthand broadcast: enabled
Aug 13 07:12:02.966133 kernel: sched_clock: Marking stable (1021005394, 111193387)->(1237226649, -105027868)
Aug 13 07:12:02.966142 kernel: registered taskstats version 1
Aug 13 07:12:02.966151 kernel: Loading compiled-in X.509 certificates
Aug 13 07:12:02.966160 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:12:02.966169 kernel: Key type .fscrypt registered
Aug 13 07:12:02.966178 kernel: Key type fscrypt-provisioning registered
Aug 13 07:12:02.966187 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:12:02.966196 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:12:02.966208 kernel: ima: No architecture policies found
Aug 13 07:12:02.966217 kernel: clk: Disabling unused clocks
Aug 13 07:12:02.966226 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:12:02.966235 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:12:02.966244 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:12:02.966273 kernel: Run /init as init process
Aug 13 07:12:02.966285 kernel: with arguments:
Aug 13 07:12:02.966311 kernel: /init
Aug 13 07:12:02.966325 kernel: with environment:
Aug 13 07:12:02.966340 kernel: HOME=/
Aug 13 07:12:02.966349 kernel: TERM=linux
Aug 13 07:12:02.966358 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:12:02.966370 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:12:02.966382 systemd[1]: Detected virtualization kvm.
Aug 13 07:12:02.966393 systemd[1]: Detected architecture x86-64.
Aug 13 07:12:02.966403 systemd[1]: Running in initrd.
Aug 13 07:12:02.966412 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:12:02.966424 systemd[1]: Hostname set to .
Aug 13 07:12:02.966434 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:12:02.966444 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:12:02.966454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:12:02.966464 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:12:02.966475 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:12:02.966485 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:12:02.966497 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:12:02.966507 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:12:02.966518 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:12:02.966528 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:12:02.966538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:12:02.966549 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:12:02.966558 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:12:02.966571 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:12:02.966581 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:12:02.966591 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:12:02.966603 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:12:02.966613 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:12:02.966623 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:12:02.966636 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:12:02.966646 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:12:02.972115 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:12:02.972154 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:12:02.972166 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:12:02.972182 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:12:02.972195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:12:02.972206 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:12:02.972224 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:12:02.972235 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:12:02.972245 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:12:02.972255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:12:02.972265 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:12:02.972274 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:12:02.972334 systemd-journald[183]: Collecting audit messages is disabled.
Aug 13 07:12:02.972375 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:12:02.972393 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:12:02.972410 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:12:02.972421 systemd-journald[183]: Journal started
Aug 13 07:12:02.972442 systemd-journald[183]: Runtime Journal (/run/log/journal/147d769d175f4616b653b81cc00d01e1) is 4.9M, max 39.3M, 34.4M free.
Aug 13 07:12:02.943022 systemd-modules-load[184]: Inserted module 'overlay'
Aug 13 07:12:03.018737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:12:03.018782 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:12:03.018800 kernel: Bridge firewalling registered
Aug 13 07:12:03.018813 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:12:02.999010 systemd-modules-load[184]: Inserted module 'br_netfilter'
Aug 13 07:12:03.019520 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:12:03.027242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:12:03.037225 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:12:03.038997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:12:03.041410 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:12:03.045976 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:12:03.060182 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:12:03.064077 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:12:03.069106 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:12:03.071141 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:12:03.081135 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:12:03.090659 dracut-cmdline[215]: dracut-dracut-053
Aug 13 07:12:03.094225 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:12:03.121225 systemd-resolved[220]: Positive Trust Anchors:
Aug 13 07:12:03.121241 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:12:03.121277 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:12:03.124770 systemd-resolved[220]: Defaulting to hostname 'linux'.
Aug 13 07:12:03.125923 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:12:03.127698 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:12:03.190998 kernel: SCSI subsystem initialized
Aug 13 07:12:03.202965 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:12:03.219015 kernel: iscsi: registered transport (tcp)
Aug 13 07:12:03.241975 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:12:03.242048 kernel: QLogic iSCSI HBA Driver
Aug 13 07:12:03.293165 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:12:03.299172 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:12:03.329006 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:12:03.329077 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:12:03.330143 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:12:03.376995 kernel: raid6: avx2x4 gen() 15086 MB/s
Aug 13 07:12:03.393998 kernel: raid6: avx2x2 gen() 15389 MB/s
Aug 13 07:12:03.411033 kernel: raid6: avx2x1 gen() 11847 MB/s
Aug 13 07:12:03.411130 kernel: raid6: using algorithm avx2x2 gen() 15389 MB/s
Aug 13 07:12:03.429459 kernel: raid6: .... xor() 13524 MB/s, rmw enabled
Aug 13 07:12:03.429556 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:12:03.452968 kernel: xor: automatically using best checksumming function avx
Aug 13 07:12:03.619145 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:12:03.632629 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:12:03.638166 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:12:03.655288 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Aug 13 07:12:03.660585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:12:03.669136 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:12:03.688013 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Aug 13 07:12:03.726503 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:12:03.733164 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:12:03.793797 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:12:03.803160 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:12:03.824085 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:12:03.827378 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:12:03.827916 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:12:03.829667 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:12:03.834392 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:12:03.862997 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:12:03.906954 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Aug 13 07:12:03.919008 kernel: scsi host0: Virtio SCSI HBA
Aug 13 07:12:03.926990 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:12:03.935275 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Aug 13 07:12:03.939258 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:12:03.939278 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:12:03.952961 kernel: libata version 3.00 loaded.
Aug 13 07:12:03.966323 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:12:03.966398 kernel: GPT:9289727 != 125829119
Aug 13 07:12:03.966412 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:12:03.966424 kernel: GPT:9289727 != 125829119
Aug 13 07:12:03.966435 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:12:03.966447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:12:03.970004 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 13 07:12:03.971956 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Aug 13 07:12:03.974052 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Aug 13 07:12:03.979846 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:12:03.984968 kernel: scsi host1: ata_piix
Aug 13 07:12:03.985189 kernel: scsi host2: ata_piix
Aug 13 07:12:03.985312 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Aug 13 07:12:03.985335 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Aug 13 07:12:03.980029 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:12:03.985424 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:12:03.990881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:12:03.991079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:12:03.991676 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:12:03.998291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:12:04.006977 kernel: ACPI: bus type USB registered
Aug 13 07:12:04.021951 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (455)
Aug 13 07:12:04.022010 kernel: usbcore: registered new interface driver usbfs
Aug 13 07:12:04.030925 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (447)
Aug 13 07:12:04.032010 kernel: usbcore: registered new interface driver hub
Aug 13 07:12:04.032031 kernel: usbcore: registered new device driver usb
Aug 13 07:12:04.046142 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:12:04.077908 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:12:04.086810 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:12:04.094469 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:12:04.094945 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:12:04.100661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:12:04.114429 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:12:04.117847 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:12:04.123563 disk-uuid[539]: Primary Header is updated.
Aug 13 07:12:04.123563 disk-uuid[539]: Secondary Entries is updated.
Aug 13 07:12:04.123563 disk-uuid[539]: Secondary Header is updated.
Aug 13 07:12:04.142984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:12:04.144697 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:12:04.157084 kernel: GPT:disk_guids don't match.
Aug 13 07:12:04.157155 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:12:04.157168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:12:04.170969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:12:04.174278 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Aug 13 07:12:04.174596 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Aug 13 07:12:04.175959 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Aug 13 07:12:04.178091 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Aug 13 07:12:04.182384 kernel: hub 1-0:1.0: USB hub found
Aug 13 07:12:04.182648 kernel: hub 1-0:1.0: 2 ports detected
Aug 13 07:12:05.164984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:12:05.165553 disk-uuid[540]: The operation has completed successfully.
Aug 13 07:12:05.221337 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:12:05.221573 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:12:05.232213 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:12:05.248394 sh[562]: Success
Aug 13 07:12:05.268009 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 07:12:05.345028 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:12:05.359201 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:12:05.363315 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:12:05.378481 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:12:05.378552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:12:05.378566 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:12:05.378579 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:12:05.378597 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:12:05.385962 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:12:05.387731 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:12:05.400360 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:12:05.403253 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:12:05.414993 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:12:05.415136 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:12:05.415162 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:12:05.424006 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:12:05.436641 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:12:05.438987 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:12:05.444417 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:12:05.452000 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:12:05.594759 ignition[635]: Ignition 2.19.0
Aug 13 07:12:05.594772 ignition[635]: Stage: fetch-offline
Aug 13 07:12:05.597171 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:12:05.594815 ignition[635]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:12:05.594826 ignition[635]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:12:05.594924 ignition[635]: parsed url from cmdline: ""
Aug 13 07:12:05.595228 ignition[635]: no config URL provided
Aug 13 07:12:05.595239 ignition[635]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:12:05.595251 ignition[635]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:12:05.595258 ignition[635]: failed to fetch config: resource requires networking
Aug 13 07:12:05.595502 ignition[635]: Ignition finished successfully
Aug 13 07:12:05.645875 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:12:05.654297 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:12:05.693669 systemd-networkd[750]: lo: Link UP
Aug 13 07:12:05.693682 systemd-networkd[750]: lo: Gained carrier
Aug 13 07:12:05.697475 systemd-networkd[750]: Enumeration completed
Aug 13 07:12:05.698007 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 13 07:12:05.698012 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Aug 13 07:12:05.698998 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:12:05.699002 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:12:05.699254 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:12:05.699790 systemd-networkd[750]: eth0: Link UP
Aug 13 07:12:05.699795 systemd-networkd[750]: eth0: Gained carrier
Aug 13 07:12:05.699805 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 13 07:12:05.700220 systemd[1]: Reached target network.target - Network.
Aug 13 07:12:05.705759 systemd-networkd[750]: eth1: Link UP
Aug 13 07:12:05.705767 systemd-networkd[750]: eth1: Gained carrier
Aug 13 07:12:05.705790 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:12:05.706193 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 07:12:05.720056 systemd-networkd[750]: eth0: DHCPv4 address 64.23.236.148/20, gateway 64.23.224.1 acquired from 169.254.169.253
Aug 13 07:12:05.726097 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.35/20 acquired from 169.254.169.253
Aug 13 07:12:05.733779 ignition[752]: Ignition 2.19.0
Aug 13 07:12:05.733794 ignition[752]: Stage: fetch
Aug 13 07:12:05.734120 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:12:05.734139 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:12:05.734316 ignition[752]: parsed url from cmdline: ""
Aug 13 07:12:05.734320 ignition[752]: no config URL provided
Aug 13 07:12:05.734326 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:12:05.734340 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:12:05.734368 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Aug 13 07:12:05.767882 ignition[752]: GET result: OK
Aug 13 07:12:05.768173 ignition[752]: parsing config with SHA512: c8fc239ce88da9b91510eefee8d018cb5fe049c560719642388ba9f2d7bf823e7ef489751da88584ed21c36c9f08b62fb6e0f7bb7ba7158af9e4bac5adc79036
Aug 13 07:12:05.772949 unknown[752]: fetched base config from "system"
Aug 13 07:12:05.772961 unknown[752]: fetched base config from "system"
Aug 13 07:12:05.773388 ignition[752]: fetch: fetch complete
Aug 13 07:12:05.772969 unknown[752]: fetched user config from "digitalocean"
Aug 13 07:12:05.773395 ignition[752]: fetch: fetch passed
Aug 13 07:12:05.775351 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 07:12:05.773454 ignition[752]: Ignition finished successfully
Aug 13 07:12:05.781375 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:12:05.810058 ignition[760]: Ignition 2.19.0
Aug 13 07:12:05.810073 ignition[760]: Stage: kargs
Aug 13 07:12:05.810395 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:12:05.810409 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:12:05.811537 ignition[760]: kargs: kargs passed
Aug 13 07:12:05.811603 ignition[760]: Ignition finished successfully
Aug 13 07:12:05.813281 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:12:05.834351 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:12:05.855067 ignition[766]: Ignition 2.19.0
Aug 13 07:12:05.855088 ignition[766]: Stage: disks
Aug 13 07:12:05.855372 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:12:05.855391 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:12:05.859297 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:12:05.857026 ignition[766]: disks: disks passed
Aug 13 07:12:05.857102 ignition[766]: Ignition finished successfully
Aug 13 07:12:05.860768 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:12:05.861706 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:12:05.862586 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:12:05.863362 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:12:05.864137 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:12:05.870184 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:12:05.886111 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:12:05.889851 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:12:05.896109 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:12:06.024219 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:12:06.024879 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:12:06.026056 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:12:06.033243 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:12:06.037196 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:12:06.040014 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Aug 13 07:12:06.046978 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (782)
Aug 13 07:12:06.051967 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:12:06.052054 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:12:06.053292 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:12:06.053046 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 07:12:06.054618 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:12:06.054656 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:12:06.059728 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:12:06.072599 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:12:06.069405 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:12:06.074143 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:12:06.159970 coreos-metadata[784]: Aug 13 07:12:06.158 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:12:06.171998 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:12:06.173053 coreos-metadata[784]: Aug 13 07:12:06.172 INFO Fetch successful
Aug 13 07:12:06.178054 coreos-metadata[785]: Aug 13 07:12:06.177 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:12:06.180233 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Aug 13 07:12:06.180409 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Aug 13 07:12:06.185911 coreos-metadata[785]: Aug 13 07:12:06.185 INFO Fetch successful
Aug 13 07:12:06.186625 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:12:06.194450 coreos-metadata[785]: Aug 13 07:12:06.193 INFO wrote hostname ci-4081.3.5-f-9f59ec6646 to /sysroot/etc/hostname
Aug 13 07:12:06.197551 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:12:06.199854 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:12:06.208964 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:12:06.339090 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:12:06.347161 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:12:06.350069 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:12:06.363973 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:12:06.375235 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:12:06.397696 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:12:06.411739 ignition[905]: INFO : Ignition 2.19.0
Aug 13 07:12:06.412897 ignition[905]: INFO : Stage: mount
Aug 13 07:12:06.412897 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:12:06.412897 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:12:06.414214 ignition[905]: INFO : mount: mount passed
Aug 13 07:12:06.414674 ignition[905]: INFO : Ignition finished successfully
Aug 13 07:12:06.416200 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:12:06.423166 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:12:06.443278 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:12:06.454086 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (918)
Aug 13 07:12:06.454173 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:12:06.455052 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:12:06.456313 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:12:06.459972 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:12:06.462172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:12:06.496287 ignition[935]: INFO : Ignition 2.19.0
Aug 13 07:12:06.496287 ignition[935]: INFO : Stage: files
Aug 13 07:12:06.497610 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:12:06.497610 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:12:06.498621 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:12:06.499116 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:12:06.499116 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:12:06.502591 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:12:06.503472 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:12:06.504714 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:12:06.504294 unknown[935]: wrote ssh authorized keys file for user: core
Aug 13 07:12:06.508443 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 07:12:06.508443 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 07:12:06.508443 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:12:06.508443 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:12:06.740638 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 07:12:06.895071 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:12:06.895071 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 07:12:06.895071 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 07:12:07.115094 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Aug 13 07:12:07.174972 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 07:12:07.176681 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:12:07.176681 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:12:07.176681 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:12:07.176681 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:12:07.176681 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:12:07.176681 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:12:07.176681 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:12:07.176681 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:12:07.185518 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:12:07.185518 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:12:07.185518 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:12:07.185518 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:12:07.185518 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:12:07.185518 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:12:07.331299 systemd-networkd[750]: eth0: Gained IPv6LL
Aug 13 07:12:07.393130 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Aug 13 07:12:07.651160 systemd-networkd[750]: eth1: Gained IPv6LL
Aug 13 07:12:07.716862 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:12:07.716862 ignition[935]: INFO : files: op(d): [started] processing unit "containerd.service"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: op(d): [finished] processing unit "containerd.service"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:12:07.721046 ignition[935]: INFO : files: files passed
Aug 13 07:12:07.721046 ignition[935]: INFO : Ignition finished successfully
Aug 13 07:12:07.722195 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:12:07.731320 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:12:07.739301 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:12:07.747757 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:12:07.747989 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:12:07.766508 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:12:07.766508 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:12:07.768516 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:12:07.769376 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:12:07.771349 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:12:07.778414 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:12:07.854680 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:12:07.854830 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:12:07.856164 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:12:07.856941 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:12:07.857363 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:12:07.874493 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:12:07.891794 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:12:07.896214 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:12:07.919857 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:12:07.920387 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:12:07.920891 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:12:07.923238 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:12:07.923410 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:12:07.924812 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:12:07.926267 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:12:07.927039 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:12:07.927637 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:12:07.928367 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:12:07.929100 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:12:07.929849 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:12:07.930576 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:12:07.931245 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:12:07.931879 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:12:07.932386 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:12:07.932546 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:12:07.933295 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:12:07.934106 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:12:07.934816 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:12:07.934976 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:12:07.935599 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:12:07.935748 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:12:07.936550 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:12:07.936660 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:12:07.937397 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:12:07.937491 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:12:07.938177 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 07:12:07.938331 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:12:07.946426 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:12:07.949230 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:12:07.949632 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:12:07.950114 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:12:07.956705 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:12:07.956873 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:12:07.963236 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:12:07.963374 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:12:07.978957 ignition[987]: INFO : Ignition 2.19.0
Aug 13 07:12:07.978957 ignition[987]: INFO : Stage: umount
Aug 13 07:12:07.978957 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:12:07.978957 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:12:07.978957 ignition[987]: INFO : umount: umount passed
Aug 13 07:12:07.983173 ignition[987]: INFO : Ignition finished successfully
Aug 13 07:12:07.985019 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:12:07.985142 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:12:07.986876 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:12:07.987377 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:12:07.987425 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:12:07.987813 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:12:07.987857 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:12:07.988222 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:12:07.988261 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:12:07.999131 systemd[1]: Stopped target network.target - Network.
Aug 13 07:12:07.999711 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:12:07.999782 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:12:08.000394 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:12:08.010094 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:12:08.011543 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:12:08.012646 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:12:08.012987 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:12:08.013310 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:12:08.013374 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:12:08.013714 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:12:08.013779 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:12:08.015088 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:12:08.015172 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:12:08.017352 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:12:08.017412 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:12:08.022545 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:12:08.022973 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:12:08.025130 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:12:08.025224 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:12:08.025716 systemd-networkd[750]: eth1: DHCPv6 lease lost
Aug 13 07:12:08.026734 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:12:08.026878 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:12:08.029051 systemd-networkd[750]: eth0: DHCPv6 lease lost
Aug 13 07:12:08.030542 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:12:08.030675 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:12:08.031910 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:12:08.032085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:12:08.036265 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:12:08.036330 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:12:08.041059 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:12:08.041901 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:12:08.042433 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:12:08.043405 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:12:08.043455 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:12:08.044647 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:12:08.044700 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:12:08.045058 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:12:08.045094 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:12:08.048118 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:12:08.068338 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:12:08.069118 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:12:08.069816 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:12:08.070974 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:12:08.072468 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:12:08.073016 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:12:08.073395 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:12:08.073428 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:12:08.073744 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:12:08.073795 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:12:08.074596 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:12:08.074649 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:12:08.075329 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:12:08.075377 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:12:08.082182 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:12:08.082669 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:12:08.082741 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:12:08.083225 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:12:08.083271 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:12:08.083636 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:12:08.083673 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:12:08.086175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:12:08.086255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:12:08.089719 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:12:08.090941 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:12:08.092160 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:12:08.097180 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:12:08.108976 systemd[1]: Switching root.
Aug 13 07:12:08.163054 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:12:08.163142 systemd-journald[183]: Journal stopped
Aug 13 07:12:09.282030 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:12:09.282122 kernel: SELinux: policy capability open_perms=1
Aug 13 07:12:09.282136 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:12:09.282149 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:12:09.282160 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:12:09.282172 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:12:09.282374 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:12:09.282404 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:12:09.282429 kernel: audit: type=1403 audit(1755069128.332:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:12:09.282447 systemd[1]: Successfully loaded SELinux policy in 44.423ms.
Aug 13 07:12:09.282475 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.619ms.
Aug 13 07:12:09.282490 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:12:09.282503 systemd[1]: Detected virtualization kvm.
Aug 13 07:12:09.282515 systemd[1]: Detected architecture x86-64.
Aug 13 07:12:09.282528 systemd[1]: Detected first boot.
Aug 13 07:12:09.282546 systemd[1]: Hostname set to .
Aug 13 07:12:09.282561 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:12:09.282574 zram_generator::config[1048]: No configuration found.
Aug 13 07:12:09.282592 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:12:09.282604 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:12:09.282617 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 07:12:09.282631 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:12:09.282644 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:12:09.282657 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:12:09.282673 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:12:09.282686 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:12:09.282699 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:12:09.282712 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:12:09.282725 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:12:09.282737 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:12:09.282751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:12:09.282764 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:12:09.282776 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:12:09.282792 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:12:09.282805 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:12:09.282817 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:12:09.282829 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:12:09.282842 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:12:09.282854 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:12:09.282867 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:12:09.282879 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:12:09.282895 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:12:09.282908 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:12:09.282920 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:12:09.294798 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:12:09.294829 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:12:09.294843 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:12:09.294856 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:12:09.294868 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:12:09.294890 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:12:09.294903 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:12:09.294922 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:12:09.294946 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:12:09.294960 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:12:09.294995 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:12:09.295008 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:12:09.295021 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:12:09.295035 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:12:09.295051 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:12:09.295064 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:12:09.295076 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:12:09.295088 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:12:09.295101 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:12:09.295127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:12:09.295141 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:12:09.295154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:12:09.295170 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:12:09.295183 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Aug 13 07:12:09.295197 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Aug 13 07:12:09.295209 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:12:09.295222 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:12:09.295234 kernel: loop: module loaded
Aug 13 07:12:09.295247 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:12:09.295260 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:12:09.295272 kernel: fuse: init (API version 7.39)
Aug 13 07:12:09.295286 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:12:09.295300 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:12:09.295312 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:12:09.295325 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:12:09.295338 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:12:09.295387 systemd-journald[1134]: Collecting audit messages is disabled.
Aug 13 07:12:09.295418 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:12:09.295431 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:12:09.295442 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:12:09.295455 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:12:09.295468 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:12:09.295480 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:12:09.295492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:12:09.295508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:12:09.295520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:12:09.295532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:12:09.295545 kernel: ACPI: bus type drm_connector registered
Aug 13 07:12:09.295556 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:12:09.295568 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:12:09.295583 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:12:09.295599 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:12:09.295612 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:12:09.295625 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:12:09.295637 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:12:09.295649 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:12:09.295665 systemd-journald[1134]: Journal started
Aug 13 07:12:09.295689 systemd-journald[1134]: Runtime Journal (/run/log/journal/147d769d175f4616b653b81cc00d01e1) is 4.9M, max 39.3M, 34.4M free.
Aug 13 07:12:09.299058 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:12:09.301729 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:12:09.317742 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:12:09.325068 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:12:09.333186 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:12:09.333781 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:12:09.348199 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:12:09.361142 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:12:09.361580 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:12:09.364519 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:12:09.367090 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:12:09.369819 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:12:09.380367 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:12:09.389052 systemd-journald[1134]: Time spent on flushing to /var/log/journal/147d769d175f4616b653b81cc00d01e1 is 49.132ms for 979 entries.
Aug 13 07:12:09.389052 systemd-journald[1134]: System Journal (/var/log/journal/147d769d175f4616b653b81cc00d01e1) is 8.0M, max 195.6M, 187.6M free.
Aug 13 07:12:09.454967 systemd-journald[1134]: Received client request to flush runtime journal.
Aug 13 07:12:09.390281 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:12:09.393643 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:12:09.394158 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:12:09.396507 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:12:09.406191 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:12:09.430456 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:12:09.442007 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:12:09.458855 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:12:09.486498 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:12:09.495327 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 07:12:09.497128 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Aug 13 07:12:09.497977 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Aug 13 07:12:09.508389 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:12:09.519216 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:12:09.550214 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:12:09.555346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:12:09.582994 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Aug 13 07:12:09.583015 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Aug 13 07:12:09.591477 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:12:10.255208 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:12:10.263244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:12:10.294781 systemd-udevd[1217]: Using default interface naming scheme 'v255'.
Aug 13 07:12:10.315061 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:12:10.323285 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:12:10.351385 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:12:10.425394 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Aug 13 07:12:10.426068 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:12:10.426349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:12:10.438179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:12:10.447101 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:12:10.466136 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:12:10.467017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1231)
Aug 13 07:12:10.468560 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:12:10.468640 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:12:10.468724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:12:10.493211 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:12:10.494115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:12:10.494406 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:12:10.528625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:12:10.528921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:12:10.531154 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:12:10.533116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:12:10.570283 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:12:10.570336 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:12:10.605988 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Aug 13 07:12:10.608955 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 07:12:10.625269 systemd-networkd[1220]: lo: Link UP
Aug 13 07:12:10.625284 systemd-networkd[1220]: lo: Gained carrier
Aug 13 07:12:10.627818 systemd-networkd[1220]: Enumeration completed
Aug 13 07:12:10.628079 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:12:10.628445 systemd-networkd[1220]: eth0: Configuring with /run/systemd/network/10-7e:cb:5f:96:5c:d7.network.
Aug 13 07:12:10.628974 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:12:10.629573 systemd-networkd[1220]: eth1: Configuring with /run/systemd/network/10-42:ce:e5:dc:6b:aa.network.
Aug 13 07:12:10.632013 systemd-networkd[1220]: eth0: Link UP
Aug 13 07:12:10.632022 systemd-networkd[1220]: eth0: Gained carrier
Aug 13 07:12:10.637428 systemd-networkd[1220]: eth1: Link UP
Aug 13 07:12:10.637439 systemd-networkd[1220]: eth1: Gained carrier
Aug 13 07:12:10.637736 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:12:10.658973 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 07:12:10.693192 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:12:10.729140 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:12:10.734981 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:12:10.735050 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Aug 13 07:12:10.736164 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Aug 13 07:12:10.739092 kernel: Console: switching to colour dummy device 80x25
Aug 13 07:12:10.740011 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Aug 13 07:12:10.740062 kernel: [drm] features: -context_init
Aug 13 07:12:10.741083 kernel: [drm] number of scanouts: 1
Aug 13 07:12:10.741130 kernel: [drm] number of cap sets: 0
Aug 13 07:12:10.743981 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Aug 13 07:12:10.753373 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Aug 13 07:12:10.753462 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 07:12:10.771951 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Aug 13 07:12:10.808623 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:12:10.808877 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:12:10.813351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:12:10.915006 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:12:10.934755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:12:10.940600 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:12:10.948242 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:12:10.964695 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:12:10.995258 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:12:10.995667 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:12:11.003166 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:12:11.010568 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:12:11.042494 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:12:11.042893 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:12:11.051118 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Aug 13 07:12:11.051272 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:12:11.051311 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:12:11.053446 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:12:11.069972 kernel: ISO 9660 Extensions: RRIP_1991A
Aug 13 07:12:11.072140 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Aug 13 07:12:11.073144 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:12:11.074753 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:12:11.081242 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:12:11.084304 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:12:11.085467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:12:11.092291 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:12:11.096698 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:12:11.113154 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:12:11.118641 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:12:11.133110 kernel: loop0: detected capacity change from 0 to 142488
Aug 13 07:12:11.141591 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:12:11.149372 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:12:11.187635 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:12:11.213643 kernel: loop1: detected capacity change from 0 to 221472
Aug 13 07:12:11.258384 kernel: loop2: detected capacity change from 0 to 140768
Aug 13 07:12:11.294845 kernel: loop3: detected capacity change from 0 to 8
Aug 13 07:12:11.318580 kernel: loop4: detected capacity change from 0 to 142488
Aug 13 07:12:11.349141 kernel: loop5: detected capacity change from 0 to 221472
Aug 13 07:12:11.362599 kernel: loop6: detected capacity change from 0 to 140768
Aug 13 07:12:11.393025 kernel: loop7: detected capacity change from 0 to 8
Aug 13 07:12:11.393685 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Aug 13 07:12:11.394308 (sd-merge)[1308]: Merged extensions into '/usr'.
Aug 13 07:12:11.408196 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:12:11.408216 systemd[1]: Reloading...
Aug 13 07:12:11.478204 zram_generator::config[1333]: No configuration found.
Aug 13 07:12:11.708972 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:12:11.713948 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:12:11.778657 systemd[1]: Reloading finished in 369 ms.
Aug 13 07:12:11.800138 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:12:11.803586 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:12:11.815185 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:12:11.830214 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:12:11.837679 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:12:11.837698 systemd[1]: Reloading...
Aug 13 07:12:11.882450 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:12:11.882795 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:12:11.884417 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:12:11.884817 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Aug 13 07:12:11.884883 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Aug 13 07:12:11.889136 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:12:11.889152 systemd-tmpfiles[1387]: Skipping /boot
Aug 13 07:12:11.904470 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:12:11.904640 systemd-tmpfiles[1387]: Skipping /boot
Aug 13 07:12:11.924992 zram_generator::config[1415]: No configuration found.
Aug 13 07:12:12.082485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:12:12.160550 systemd[1]: Reloading finished in 322 ms.
Aug 13 07:12:12.190668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:12:12.198187 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:12:12.215551 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:12:12.220178 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:12:12.232828 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:12:12.238581 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:12:12.257224 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:12:12.257442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:12:12.259149 systemd-networkd[1220]: eth0: Gained IPv6LL
Aug 13 07:12:12.276461 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:12:12.292302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:12:12.308286 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:12:12.308864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:12:12.309014 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:12:12.315109 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 07:12:12.322433 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:12:12.336174 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:12:12.337148 augenrules[1493]: No rules
Aug 13 07:12:12.344977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:12:12.345189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:12:12.352309 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:12:12.352522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:12:12.353834 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:12:12.355509 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:12:12.370453 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:12:12.370705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:12:12.386027 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:12:12.389578 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:12:12.402072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:12:12.403836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:12:12.405887 systemd-resolved[1471]: Positive Trust Anchors:
Aug 13 07:12:12.405903 systemd-resolved[1471]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:12:12.406397 systemd-resolved[1471]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:12:12.407195 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:12:12.407349 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:12:12.412156 systemd-resolved[1471]: Using system hostname 'ci-4081.3.5-f-9f59ec6646'.
Aug 13 07:12:12.413201 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:12:12.417053 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:12:12.418106 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:12:12.422264 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:12:12.422464 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:12:12.423596 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:12:12.423761 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:12:12.426358 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:12:12.426541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:12:12.438480 systemd[1]: Reached target network.target - Network.
Aug 13 07:12:12.439756 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 07:12:12.441993 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:12:12.442687 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:12:12.443265 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:12:12.449428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:12:12.453080 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:12:12.464867 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:12:12.474336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:12:12.475020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:12:12.480738 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:12:12.481265 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:12:12.481435 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:12:12.484820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:12:12.485213 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:12:12.487346 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:12:12.487618 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:12:12.494734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:12:12.495105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:12:12.498225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:12:12.498478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:12:12.508753 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:12:12.516405 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:12:12.521133 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:12:12.521265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:12:12.528199 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 07:12:12.602030 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 07:12:12.602882 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:12:12.604272 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:12:12.605487 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:12:12.605923 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:12:12.606456 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:12:12.606494 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:12:12.606859 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:12:12.609209 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:12:12.611038 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:12:12.612478 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:12:12.619218 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:12:12.622783 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:12:12.628422 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:12:12.630915 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:12:12.633169 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:12:12.633570 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:12:12.634707 systemd[1]: System is tainted: cgroupsv1
Aug 13 07:12:12.634764 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:12:12.634787 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:12:12.643122 systemd-networkd[1220]: eth1: Gained IPv6LL
Aug 13 07:12:12.645103 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:12:12.651215 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 07:12:12.657195 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:12:12.680230 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:12:12.688215 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:12:12.691710 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:12:12.692333 coreos-metadata[1546]: Aug 13 07:12:12.691 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:12:12.696245 jq[1550]: false
Aug 13 07:12:12.703692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:12:12.707807 coreos-metadata[1546]: Aug 13 07:12:12.707 INFO Fetch successful
Aug 13 07:12:12.712557 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:12:12.731330 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 07:12:12.743186 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found loop4
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found loop5
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found loop6
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found loop7
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found vda
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found vda1
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found vda2
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found vda3
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found usr
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found vda4
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found vda6
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found vda7
Aug 13 07:12:12.751962 extend-filesystems[1551]: Found vda9
Aug 13 07:12:12.751962 extend-filesystems[1551]: Checking size of /dev/vda9
Aug 13 07:12:12.757471 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:12:12.766627 dbus-daemon[1547]: [system] SELinux support is enabled
Aug 13 07:12:12.773664 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:12:12.795232 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:12:12.796966 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:12:12.805120 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:12:12.821918 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:12:12.824900 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:12:12.844036 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1230)
Aug 13 07:12:12.851144 systemd-timesyncd[1540]: Contacted time server 23.150.40.242:123 (0.flatcar.pool.ntp.org).
Aug 13 07:12:12.856983 systemd-timesyncd[1540]: Initial clock synchronization to Wed 2025-08-13 07:12:13.166244 UTC.
Aug 13 07:12:12.861007 extend-filesystems[1551]: Resized partition /dev/vda9
Aug 13 07:12:12.868900 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:12:12.870420 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:12:12.888881 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:12:12.892854 jq[1577]: true
Aug 13 07:12:12.895498 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:12:12.904910 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:12:12.918095 extend-filesystems[1586]: resize2fs 1.47.1 (20-May-2024)
Aug 13 07:12:12.905752 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:12:12.928621 update_engine[1574]: I20250813 07:12:12.920756 1574 main.cc:92] Flatcar Update Engine starting
Aug 13 07:12:12.944999 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Aug 13 07:12:12.945880 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:12:12.960548 update_engine[1574]: I20250813 07:12:12.958436 1574 update_check_scheduler.cc:74] Next update check in 7m34s
Aug 13 07:12:12.978041 jq[1592]: true
Aug 13 07:12:12.987902 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 07:12:13.013611 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 07:12:13.036585 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 07:12:13.042285 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 07:12:13.050189 tar[1588]: linux-amd64/helm
Aug 13 07:12:13.042712 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 07:12:13.042749 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 07:12:13.043279 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 07:12:13.043369 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Aug 13 07:12:13.043388 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 07:12:13.045200 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:12:13.053742 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 07:12:13.100072 sshd_keygen[1580]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 07:12:13.204002 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Aug 13 07:12:13.217584 systemd-logind[1571]: New seat seat0.
Aug 13 07:12:13.233592 extend-filesystems[1586]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 07:12:13.233592 extend-filesystems[1586]: old_desc_blocks = 1, new_desc_blocks = 8
Aug 13 07:12:13.233592 extend-filesystems[1586]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Aug 13 07:12:13.237216 extend-filesystems[1551]: Resized filesystem in /dev/vda9
Aug 13 07:12:13.237216 extend-filesystems[1551]: Found vdb
Aug 13 07:12:13.239939 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:12:13.268509 bash[1634]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:12:13.240250 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:12:13.241343 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 07:12:13.253138 systemd-logind[1571]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 07:12:13.253161 systemd-logind[1571]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:12:13.269244 systemd[1]: Starting sshkeys.service...
Aug 13 07:12:13.280997 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:12:13.281928 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 07:12:13.300539 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 07:12:13.342101 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 07:12:13.352410 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 07:12:13.369569 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 07:12:13.369994 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 07:12:13.387637 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 07:12:13.406941 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 07:12:13.424692 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 07:12:13.443535 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 07:12:13.448632 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 07:12:13.488282 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 07:12:13.500305 coreos-metadata[1655]: Aug 13 07:12:13.499 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:12:13.516002 containerd[1590]: time="2025-08-13T07:12:13.515204209Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 07:12:13.525443 coreos-metadata[1655]: Aug 13 07:12:13.525 INFO Fetch successful
Aug 13 07:12:13.535511 unknown[1655]: wrote ssh authorized keys file for user: core
Aug 13 07:12:13.576842 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:12:13.579577 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 07:12:13.588458 containerd[1590]: time="2025-08-13T07:12:13.588355394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:12:13.591062 containerd[1590]: time="2025-08-13T07:12:13.591001953Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:12:13.591233 containerd[1590]: time="2025-08-13T07:12:13.591212310Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 07:12:13.591318 containerd[1590]: time="2025-08-13T07:12:13.591301253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 07:12:13.591604 containerd[1590]: time="2025-08-13T07:12:13.591580959Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 07:12:13.591735 containerd[1590]: time="2025-08-13T07:12:13.591714794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 07:12:13.591903 containerd[1590]: time="2025-08-13T07:12:13.591879922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:12:13.592005 containerd[1590]: time="2025-08-13T07:12:13.591988778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:12:13.592413 containerd[1590]: time="2025-08-13T07:12:13.592376762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:12:13.592524 containerd[1590]: time="2025-08-13T07:12:13.592504452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 07:12:13.592645 containerd[1590]: time="2025-08-13T07:12:13.592623426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:12:13.592743 containerd[1590]: time="2025-08-13T07:12:13.592725007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 07:12:13.592950 containerd[1590]: time="2025-08-13T07:12:13.592925576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:12:13.593451 containerd[1590]: time="2025-08-13T07:12:13.593420054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 07:12:13.593800 containerd[1590]: time="2025-08-13T07:12:13.593776295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 07:12:13.593871 containerd[1590]: time="2025-08-13T07:12:13.593859252Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 07:12:13.594025 containerd[1590]: time="2025-08-13T07:12:13.594010485Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 07:12:13.594163 containerd[1590]: time="2025-08-13T07:12:13.594146847Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 07:12:13.596892 systemd[1]: Finished sshkeys.service.
Aug 13 07:12:13.606379 containerd[1590]: time="2025-08-13T07:12:13.604934981Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 07:12:13.606379 containerd[1590]: time="2025-08-13T07:12:13.605068377Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 07:12:13.606379 containerd[1590]: time="2025-08-13T07:12:13.605110913Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 07:12:13.606379 containerd[1590]: time="2025-08-13T07:12:13.605133284Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 07:12:13.606379 containerd[1590]: time="2025-08-13T07:12:13.605157310Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 07:12:13.606379 containerd[1590]: time="2025-08-13T07:12:13.605423172Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 07:12:13.606379 containerd[1590]: time="2025-08-13T07:12:13.606067850Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 07:12:13.607672 containerd[1590]: time="2025-08-13T07:12:13.607514769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 07:12:13.607672 containerd[1590]: time="2025-08-13T07:12:13.607550344Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 07:12:13.607672 containerd[1590]: time="2025-08-13T07:12:13.607581058Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 07:12:13.607672 containerd[1590]: time="2025-08-13T07:12:13.607595543Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 07:12:13.607672 containerd[1590]: time="2025-08-13T07:12:13.607609376Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 07:12:13.607672 containerd[1590]: time="2025-08-13T07:12:13.607623818Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 07:12:13.607672 containerd[1590]: time="2025-08-13T07:12:13.607651969Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 07:12:13.607672 containerd[1590]: time="2025-08-13T07:12:13.607674616Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607698166Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607729131Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607746573Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607776640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607810838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607827770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607847989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607889765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607908592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607921399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607963327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.607978955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..."
type=io.containerd.grpc.v1 Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.608003739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:12:13.608120 containerd[1590]: time="2025-08-13T07:12:13.608028060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:12:13.608648 containerd[1590]: time="2025-08-13T07:12:13.608041040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:12:13.608648 containerd[1590]: time="2025-08-13T07:12:13.608053878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:12:13.608648 containerd[1590]: time="2025-08-13T07:12:13.608069588Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:12:13.608763 containerd[1590]: time="2025-08-13T07:12:13.608656996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:12:13.608763 containerd[1590]: time="2025-08-13T07:12:13.608676855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:12:13.608763 containerd[1590]: time="2025-08-13T07:12:13.608690334Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:12:13.608890 containerd[1590]: time="2025-08-13T07:12:13.608789057Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:12:13.608890 containerd[1590]: time="2025-08-13T07:12:13.608827977Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:12:13.608890 containerd[1590]: time="2025-08-13T07:12:13.608840882Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:12:13.608890 containerd[1590]: time="2025-08-13T07:12:13.608854941Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:12:13.608890 containerd[1590]: time="2025-08-13T07:12:13.608865412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:12:13.609119 containerd[1590]: time="2025-08-13T07:12:13.609013756Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:12:13.609119 containerd[1590]: time="2025-08-13T07:12:13.609029903Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:12:13.609119 containerd[1590]: time="2025-08-13T07:12:13.609058444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:12:13.610572 containerd[1590]: time="2025-08-13T07:12:13.609498903Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:12:13.610572 containerd[1590]: time="2025-08-13T07:12:13.609579658Z" level=info msg="Connect containerd service" Aug 13 07:12:13.610572 containerd[1590]: time="2025-08-13T07:12:13.609632843Z" level=info msg="using legacy CRI server" Aug 13 07:12:13.610572 containerd[1590]: time="2025-08-13T07:12:13.609642176Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:12:13.610572 containerd[1590]: time="2025-08-13T07:12:13.609758010Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:12:13.612259 containerd[1590]: time="2025-08-13T07:12:13.611474710Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:12:13.614600 containerd[1590]: time="2025-08-13T07:12:13.614051679Z" level=info msg="Start subscribing containerd event" Aug 13 07:12:13.614600 containerd[1590]: time="2025-08-13T07:12:13.614211798Z" level=info msg="Start recovering state" Aug 13 07:12:13.616038 containerd[1590]: time="2025-08-13T07:12:13.616006373Z" level=info msg="Start event monitor" Aug 13 07:12:13.617754 containerd[1590]: time="2025-08-13T07:12:13.616355026Z" 
level=info msg="Start snapshots syncer" Aug 13 07:12:13.617754 containerd[1590]: time="2025-08-13T07:12:13.616389099Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:12:13.617754 containerd[1590]: time="2025-08-13T07:12:13.616402437Z" level=info msg="Start streaming server" Aug 13 07:12:13.617754 containerd[1590]: time="2025-08-13T07:12:13.616404351Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:12:13.617754 containerd[1590]: time="2025-08-13T07:12:13.616483975Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:12:13.616714 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:12:13.620145 containerd[1590]: time="2025-08-13T07:12:13.620059805Z" level=info msg="containerd successfully booted in 0.106528s" Aug 13 07:12:14.013656 tar[1588]: linux-amd64/LICENSE Aug 13 07:12:14.013656 tar[1588]: linux-amd64/README.md Aug 13 07:12:14.028526 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:12:14.675309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:12:14.678032 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:12:14.682647 systemd[1]: Startup finished in 6.790s (kernel) + 6.392s (userspace) = 13.182s. 
Aug 13 07:12:14.694423 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:12:15.453934 kubelet[1695]: E0813 07:12:15.453758 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:12:15.457573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:12:15.459346 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:12:16.444614 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 07:12:16.452771 systemd[1]: Started sshd@0-64.23.236.148:22-139.178.89.65:58354.service - OpenSSH per-connection server daemon (139.178.89.65:58354).
Aug 13 07:12:16.544576 sshd[1707]: Accepted publickey for core from 139.178.89.65 port 58354 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:12:16.549623 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:12:16.568154 systemd-logind[1571]: New session 1 of user core.
Aug 13 07:12:16.568940 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 07:12:16.581824 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 07:12:16.609762 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 07:12:16.622031 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 07:12:16.642430 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 07:12:16.760552 systemd[1713]: Queued start job for default target default.target.
Aug 13 07:12:16.761056 systemd[1713]: Created slice app.slice - User Application Slice.
Aug 13 07:12:16.761082 systemd[1713]: Reached target paths.target - Paths.
Aug 13 07:12:16.761096 systemd[1713]: Reached target timers.target - Timers.
Aug 13 07:12:16.766121 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 07:12:16.792714 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 07:12:16.792787 systemd[1713]: Reached target sockets.target - Sockets.
Aug 13 07:12:16.792802 systemd[1713]: Reached target basic.target - Basic System.
Aug 13 07:12:16.792864 systemd[1713]: Reached target default.target - Main User Target.
Aug 13 07:12:16.792896 systemd[1713]: Startup finished in 140ms.
Aug 13 07:12:16.793514 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 07:12:16.802462 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 07:12:16.868371 systemd[1]: Started sshd@1-64.23.236.148:22-139.178.89.65:58362.service - OpenSSH per-connection server daemon (139.178.89.65:58362).
Aug 13 07:12:16.934860 sshd[1725]: Accepted publickey for core from 139.178.89.65 port 58362 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:12:16.937079 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:12:16.943388 systemd-logind[1571]: New session 2 of user core.
Aug 13 07:12:16.949357 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 07:12:17.015262 sshd[1725]: pam_unix(sshd:session): session closed for user core
Aug 13 07:12:17.026515 systemd[1]: Started sshd@2-64.23.236.148:22-139.178.89.65:58378.service - OpenSSH per-connection server daemon (139.178.89.65:58378).
Aug 13 07:12:17.027384 systemd[1]: sshd@1-64.23.236.148:22-139.178.89.65:58362.service: Deactivated successfully.
Aug 13 07:12:17.031522 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 07:12:17.034090 systemd-logind[1571]: Session 2 logged out. Waiting for processes to exit.
Aug 13 07:12:17.035690 systemd-logind[1571]: Removed session 2.
Aug 13 07:12:17.076432 sshd[1730]: Accepted publickey for core from 139.178.89.65 port 58378 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:12:17.078855 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:12:17.085295 systemd-logind[1571]: New session 3 of user core.
Aug 13 07:12:17.091510 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 07:12:17.155426 sshd[1730]: pam_unix(sshd:session): session closed for user core
Aug 13 07:12:17.170434 systemd[1]: Started sshd@3-64.23.236.148:22-139.178.89.65:58382.service - OpenSSH per-connection server daemon (139.178.89.65:58382).
Aug 13 07:12:17.171319 systemd[1]: sshd@2-64.23.236.148:22-139.178.89.65:58378.service: Deactivated successfully.
Aug 13 07:12:17.177304 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 07:12:17.178579 systemd-logind[1571]: Session 3 logged out. Waiting for processes to exit.
Aug 13 07:12:17.182856 systemd-logind[1571]: Removed session 3.
Aug 13 07:12:17.226253 sshd[1738]: Accepted publickey for core from 139.178.89.65 port 58382 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:12:17.228619 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:12:17.235221 systemd-logind[1571]: New session 4 of user core.
Aug 13 07:12:17.253521 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 07:12:17.319272 sshd[1738]: pam_unix(sshd:session): session closed for user core
Aug 13 07:12:17.329501 systemd[1]: Started sshd@4-64.23.236.148:22-139.178.89.65:58396.service - OpenSSH per-connection server daemon (139.178.89.65:58396).
Aug 13 07:12:17.330279 systemd[1]: sshd@3-64.23.236.148:22-139.178.89.65:58382.service: Deactivated successfully.
Aug 13 07:12:17.341357 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 07:12:17.342036 systemd-logind[1571]: Session 4 logged out. Waiting for processes to exit.
Aug 13 07:12:17.345480 systemd-logind[1571]: Removed session 4.
Aug 13 07:12:17.368642 sshd[1746]: Accepted publickey for core from 139.178.89.65 port 58396 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:12:17.371057 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:12:17.378183 systemd-logind[1571]: New session 5 of user core.
Aug 13 07:12:17.388505 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 07:12:17.463590 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 07:12:17.464106 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:12:17.477277 sudo[1753]: pam_unix(sudo:session): session closed for user root
Aug 13 07:12:17.480791 sshd[1746]: pam_unix(sshd:session): session closed for user core
Aug 13 07:12:17.492502 systemd[1]: Started sshd@5-64.23.236.148:22-139.178.89.65:58400.service - OpenSSH per-connection server daemon (139.178.89.65:58400).
Aug 13 07:12:17.493321 systemd[1]: sshd@4-64.23.236.148:22-139.178.89.65:58396.service: Deactivated successfully.
Aug 13 07:12:17.498195 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 07:12:17.499108 systemd-logind[1571]: Session 5 logged out. Waiting for processes to exit.
Aug 13 07:12:17.501792 systemd-logind[1571]: Removed session 5.
Aug 13 07:12:17.542583 sshd[1755]: Accepted publickey for core from 139.178.89.65 port 58400 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:12:17.545726 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:12:17.552636 systemd-logind[1571]: New session 6 of user core.
Aug 13 07:12:17.561703 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 07:12:17.629277 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 07:12:17.629822 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:12:17.636561 sudo[1763]: pam_unix(sudo:session): session closed for user root
Aug 13 07:12:17.645676 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 07:12:17.646265 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:12:17.668461 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 07:12:17.672299 auditctl[1766]: No rules
Aug 13 07:12:17.672877 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 07:12:17.673235 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 07:12:17.682060 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:12:17.738813 augenrules[1785]: No rules
Aug 13 07:12:17.739912 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:12:17.742509 sudo[1762]: pam_unix(sudo:session): session closed for user root
Aug 13 07:12:17.747286 sshd[1755]: pam_unix(sshd:session): session closed for user core
Aug 13 07:12:17.757319 systemd[1]: Started sshd@6-64.23.236.148:22-139.178.89.65:58408.service - OpenSSH per-connection server daemon (139.178.89.65:58408).
Aug 13 07:12:17.758029 systemd[1]: sshd@5-64.23.236.148:22-139.178.89.65:58400.service: Deactivated successfully.
Aug 13 07:12:17.761690 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 07:12:17.764074 systemd-logind[1571]: Session 6 logged out. Waiting for processes to exit.
Aug 13 07:12:17.766098 systemd-logind[1571]: Removed session 6.
Aug 13 07:12:17.807559 sshd[1791]: Accepted publickey for core from 139.178.89.65 port 58408 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:12:17.809564 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:12:17.817773 systemd-logind[1571]: New session 7 of user core.
Aug 13 07:12:17.824510 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 07:12:17.888737 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 07:12:17.889261 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:12:18.399543 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 07:12:18.401312 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 07:12:18.878078 dockerd[1813]: time="2025-08-13T07:12:18.877197319Z" level=info msg="Starting up"
Aug 13 07:12:19.106376 dockerd[1813]: time="2025-08-13T07:12:19.106099729Z" level=info msg="Loading containers: start."
Aug 13 07:12:19.226976 kernel: Initializing XFRM netlink socket
Aug 13 07:12:19.318751 systemd-networkd[1220]: docker0: Link UP
Aug 13 07:12:19.335386 dockerd[1813]: time="2025-08-13T07:12:19.335328142Z" level=info msg="Loading containers: done."
Aug 13 07:12:19.356823 dockerd[1813]: time="2025-08-13T07:12:19.356212178Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 07:12:19.356823 dockerd[1813]: time="2025-08-13T07:12:19.356377633Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 07:12:19.356823 dockerd[1813]: time="2025-08-13T07:12:19.356519626Z" level=info msg="Daemon has completed initialization"
Aug 13 07:12:19.389132 dockerd[1813]: time="2025-08-13T07:12:19.389042493Z" level=info msg="API listen on /run/docker.sock"
Aug 13 07:12:19.390148 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 07:12:20.264911 containerd[1590]: time="2025-08-13T07:12:20.264819591Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 07:12:20.890120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1237807244.mount: Deactivated successfully.
Aug 13 07:12:21.955152 containerd[1590]: time="2025-08-13T07:12:21.954022019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:21.956128 containerd[1590]: time="2025-08-13T07:12:21.956069249Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Aug 13 07:12:21.957433 containerd[1590]: time="2025-08-13T07:12:21.957397088Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:21.960083 containerd[1590]: time="2025-08-13T07:12:21.960042500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:21.961397 containerd[1590]: time="2025-08-13T07:12:21.961366006Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.696480228s"
Aug 13 07:12:21.961525 containerd[1590]: time="2025-08-13T07:12:21.961506301Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 07:12:21.962305 containerd[1590]: time="2025-08-13T07:12:21.962270685Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 07:12:23.370850 containerd[1590]: time="2025-08-13T07:12:23.370679325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:23.370850 containerd[1590]: time="2025-08-13T07:12:23.370766762Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Aug 13 07:12:23.372478 containerd[1590]: time="2025-08-13T07:12:23.372407272Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:23.375981 containerd[1590]: time="2025-08-13T07:12:23.375776961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:23.377232 containerd[1590]: time="2025-08-13T07:12:23.377049000Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.414637049s"
Aug 13 07:12:23.377232 containerd[1590]: time="2025-08-13T07:12:23.377100810Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 07:12:23.378106 containerd[1590]: time="2025-08-13T07:12:23.377830153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 07:12:24.605392 containerd[1590]: time="2025-08-13T07:12:24.605318736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:24.606784 containerd[1590]: time="2025-08-13T07:12:24.606487381Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700"
Aug 13 07:12:24.607982 containerd[1590]: time="2025-08-13T07:12:24.607431754Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:24.611595 containerd[1590]: time="2025-08-13T07:12:24.611539988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:24.612829 containerd[1590]: time="2025-08-13T07:12:24.612701523Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.234835591s"
Aug 13 07:12:24.612964 containerd[1590]: time="2025-08-13T07:12:24.612860404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 07:12:24.613606 containerd[1590]: time="2025-08-13T07:12:24.613573436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 07:12:25.708192 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:12:25.718303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:12:25.743098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362341380.mount: Deactivated successfully.
Aug 13 07:12:25.980557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:12:25.990039 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:12:26.054953 kubelet[2039]: E0813 07:12:26.054876 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:12:26.058015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:12:26.058394 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:12:26.745478 containerd[1590]: time="2025-08-13T07:12:26.745125068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:26.746876 containerd[1590]: time="2025-08-13T07:12:26.746804552Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Aug 13 07:12:26.747697 containerd[1590]: time="2025-08-13T07:12:26.747651964Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:26.749710 containerd[1590]: time="2025-08-13T07:12:26.749654840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:26.750644 containerd[1590]: time="2025-08-13T07:12:26.750478643Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 2.136866437s"
Aug 13 07:12:26.750644 containerd[1590]: time="2025-08-13T07:12:26.750513904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 07:12:26.751464 containerd[1590]: time="2025-08-13T07:12:26.751424886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 07:12:26.753788 systemd-resolved[1471]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Aug 13 07:12:27.292906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1130666865.mount: Deactivated successfully.
Aug 13 07:12:28.095277 containerd[1590]: time="2025-08-13T07:12:28.095213122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:28.096631 containerd[1590]: time="2025-08-13T07:12:28.096542166Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 07:12:28.097170 containerd[1590]: time="2025-08-13T07:12:28.097143611Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:28.100834 containerd[1590]: time="2025-08-13T07:12:28.100327529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:28.101688 containerd[1590]: time="2025-08-13T07:12:28.101647165Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.350032077s"
Aug 13 07:12:28.101688 containerd[1590]: time="2025-08-13T07:12:28.101686683Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 07:12:28.102987 containerd[1590]: time="2025-08-13T07:12:28.102954413Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 07:12:28.593366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320393203.mount: Deactivated successfully.
Aug 13 07:12:28.597417 containerd[1590]: time="2025-08-13T07:12:28.597355559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:28.598437 containerd[1590]: time="2025-08-13T07:12:28.598384708Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 07:12:28.599124 containerd[1590]: time="2025-08-13T07:12:28.599054726Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:28.602111 containerd[1590]: time="2025-08-13T07:12:28.601249745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:28.602111 containerd[1590]: time="2025-08-13T07:12:28.601988948Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 498.99911ms" Aug 13 07:12:28.602111 containerd[1590]: time="2025-08-13T07:12:28.602017890Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:12:28.603157 containerd[1590]: time="2025-08-13T07:12:28.603133166Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 07:12:29.138670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062331078.mount: Deactivated successfully. Aug 13 07:12:29.859125 systemd-resolved[1471]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Aug 13 07:12:30.896850 containerd[1590]: time="2025-08-13T07:12:30.895649218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:30.897599 containerd[1590]: time="2025-08-13T07:12:30.897553288Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 07:12:30.898223 containerd[1590]: time="2025-08-13T07:12:30.898196080Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:30.904956 containerd[1590]: time="2025-08-13T07:12:30.904896056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:30.908385 containerd[1590]: time="2025-08-13T07:12:30.908309308Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest 
\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.305042575s" Aug 13 07:12:30.908385 containerd[1590]: time="2025-08-13T07:12:30.908385366Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 07:12:33.963204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:12:33.979260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:12:34.025876 systemd[1]: Reloading requested from client PID 2186 ('systemctl') (unit session-7.scope)... Aug 13 07:12:34.025898 systemd[1]: Reloading... Aug 13 07:12:34.141957 zram_generator::config[2225]: No configuration found. Aug 13 07:12:34.291196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:12:34.376370 systemd[1]: Reloading finished in 349 ms. Aug 13 07:12:34.448180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:12:34.454538 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:12:34.458951 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:12:34.459633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:12:34.470404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:12:34.607194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 07:12:34.620668 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 07:12:34.686825 kubelet[2294]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:12:34.686825 kubelet[2294]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 07:12:34.686825 kubelet[2294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:12:34.687476 kubelet[2294]: I0813 07:12:34.686843 2294 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:12:35.056892 kubelet[2294]: I0813 07:12:35.055134 2294 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 07:12:35.056892 kubelet[2294]: I0813 07:12:35.055177 2294 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:12:35.056892 kubelet[2294]: I0813 07:12:35.055642 2294 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 07:12:35.089235 kubelet[2294]: I0813 07:12:35.089180 2294 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:12:35.089858 kubelet[2294]: E0813 07:12:35.089815 2294 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.236.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.236.148:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:12:35.104362 kubelet[2294]: E0813 07:12:35.104327 2294 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:12:35.104566 kubelet[2294]: I0813 07:12:35.104554 2294 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:12:35.109245 kubelet[2294]: I0813 07:12:35.109210 2294 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:12:35.110361 kubelet[2294]: I0813 07:12:35.110331 2294 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 07:12:35.110681 kubelet[2294]: I0813 07:12:35.110639 2294 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:12:35.111022 kubelet[2294]: I0813 07:12:35.110765 2294 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-f-9f59ec6646","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Aug 13 07:12:35.111216 kubelet[2294]: I0813 07:12:35.111201 2294 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:12:35.111271 kubelet[2294]: I0813 07:12:35.111264 2294 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 07:12:35.111441 kubelet[2294]: I0813 07:12:35.111432 2294 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:12:35.113910 kubelet[2294]: I0813 07:12:35.113883 2294 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 07:12:35.114276 kubelet[2294]: I0813 07:12:35.114038 2294 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:12:35.114276 kubelet[2294]: I0813 07:12:35.114082 2294 kubelet.go:314] "Adding apiserver pod source"
Aug 13 07:12:35.114276 kubelet[2294]: I0813 07:12:35.114109 2294 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:12:35.120622 kubelet[2294]: W0813 07:12:35.120376 2294 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.236.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-f-9f59ec6646&limit=500&resourceVersion=0": dial tcp 64.23.236.148:6443: connect: connection refused
Aug 13 07:12:35.120622 kubelet[2294]: E0813 07:12:35.120486 2294 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.236.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-f-9f59ec6646&limit=500&resourceVersion=0\": dial tcp 64.23.236.148:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:12:35.122813 kubelet[2294]: I0813 07:12:35.122683 2294 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:12:35.127192 kubelet[2294]: I0813 07:12:35.127154 2294 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 07:12:35.127791 kubelet[2294]: W0813 07:12:35.127213 2294 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.236.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.236.148:6443: connect: connection refused
Aug 13 07:12:35.127791 kubelet[2294]: E0813 07:12:35.127474 2294 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.236.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.236.148:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:12:35.128509 kubelet[2294]: W0813 07:12:35.128206 2294 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 07:12:35.128874 kubelet[2294]: I0813 07:12:35.128844 2294 server.go:1274] "Started kubelet"
Aug 13 07:12:35.135357 kubelet[2294]: I0813 07:12:35.135317 2294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:12:35.137977 kubelet[2294]: E0813 07:12:35.135824 2294 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.236.148:6443/api/v1/namespaces/default/events\": dial tcp 64.23.236.148:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-f-9f59ec6646.185b42190bb2aaba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-f-9f59ec6646,UID:ci-4081.3.5-f-9f59ec6646,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-f-9f59ec6646,},FirstTimestamp:2025-08-13 07:12:35.128814266 +0000 UTC m=+0.503148301,LastTimestamp:2025-08-13 07:12:35.128814266 +0000 UTC m=+0.503148301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-f-9f59ec6646,}"
Aug 13 07:12:35.142671 kubelet[2294]: I0813 07:12:35.142607 2294 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:12:35.144704 kubelet[2294]: I0813 07:12:35.144672 2294 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 07:12:35.145866 kubelet[2294]: E0813 07:12:35.145173 2294 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-f-9f59ec6646\" not found"
Aug 13 07:12:35.146394 kubelet[2294]: I0813 07:12:35.145643 2294 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 07:12:35.153131 kubelet[2294]: I0813 07:12:35.146083 2294 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:12:35.153131 kubelet[2294]: I0813 07:12:35.147174 2294 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:12:35.154508 kubelet[2294]: I0813 07:12:35.153532 2294 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:12:35.154508 kubelet[2294]: I0813 07:12:35.148174 2294 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 07:12:35.156189 kubelet[2294]: W0813 07:12:35.148289 2294 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.236.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.236.148:6443: connect: connection refused
Aug 13 07:12:35.156274 kubelet[2294]: E0813 07:12:35.156223 2294 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.236.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.236.148:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:12:35.156274 kubelet[2294]: I0813 07:12:35.148708 2294 factory.go:221] Registration of the systemd container factory successfully
Aug 13 07:12:35.156367 kubelet[2294]: I0813 07:12:35.156350 2294 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:12:35.156662 kubelet[2294]: I0813 07:12:35.147651 2294 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:12:35.156742 kubelet[2294]: E0813 07:12:35.148386 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.236.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-f-9f59ec6646?timeout=10s\": dial tcp 64.23.236.148:6443: connect: connection refused" interval="200ms"
Aug 13 07:12:35.157630 kubelet[2294]: E0813 07:12:35.157605 2294 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 07:12:35.158415 kubelet[2294]: I0813 07:12:35.158396 2294 factory.go:221] Registration of the containerd container factory successfully
Aug 13 07:12:35.165691 kubelet[2294]: I0813 07:12:35.165645 2294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:12:35.167773 kubelet[2294]: I0813 07:12:35.167745 2294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 07:12:35.167912 kubelet[2294]: I0813 07:12:35.167904 2294 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 07:12:35.167996 kubelet[2294]: I0813 07:12:35.167988 2294 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 07:12:35.168109 kubelet[2294]: E0813 07:12:35.168083 2294 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 07:12:35.180718 kubelet[2294]: W0813 07:12:35.180653 2294 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.236.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.236.148:6443: connect: connection refused
Aug 13 07:12:35.180828 kubelet[2294]: E0813 07:12:35.180735 2294 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.236.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.236.148:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:12:35.187731 kubelet[2294]: I0813 07:12:35.187700 2294 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 07:12:35.188188 kubelet[2294]: I0813 07:12:35.187898 2294 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 07:12:35.188188 kubelet[2294]: I0813 07:12:35.187926 2294 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:12:35.189915 kubelet[2294]: I0813 07:12:35.189776 2294 policy_none.go:49] "None policy: Start"
Aug 13 07:12:35.190588 kubelet[2294]: I0813 07:12:35.190569 2294 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 07:12:35.190823 kubelet[2294]: I0813 07:12:35.190747 2294 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 07:12:35.196983 kubelet[2294]: I0813 07:12:35.196792 2294 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 07:12:35.197231 kubelet[2294]: I0813 07:12:35.197216 2294 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 07:12:35.197328 kubelet[2294]: I0813 07:12:35.197296 2294 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 07:12:35.200995 kubelet[2294]: I0813 07:12:35.198758 2294 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 07:12:35.202049 kubelet[2294]: E0813 07:12:35.202017 2294 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-f-9f59ec6646\" not found"
Aug 13 07:12:35.299315 kubelet[2294]: I0813 07:12:35.299268 2294 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.299924 kubelet[2294]: E0813 07:12:35.299894 2294 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.236.148:6443/api/v1/nodes\": dial tcp 64.23.236.148:6443: connect: connection refused" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.354017 kubelet[2294]: I0813 07:12:35.353749 2294 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7199567d8b71d9f1605ce53d0d5061b2-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-f-9f59ec6646\" (UID: \"7199567d8b71d9f1605ce53d0d5061b2\") " pod="kube-system/kube-apiserver-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.354017 kubelet[2294]: I0813 07:12:35.353807 2294 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7199567d8b71d9f1605ce53d0d5061b2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-f-9f59ec6646\" (UID: \"7199567d8b71d9f1605ce53d0d5061b2\") " pod="kube-system/kube-apiserver-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.354017 kubelet[2294]: I0813 07:12:35.353841 2294 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ad50c9eca1b2cb2f41260c24288012c-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-f-9f59ec6646\" (UID: \"5ad50c9eca1b2cb2f41260c24288012c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.354017 kubelet[2294]: I0813 07:12:35.353883 2294 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0989dfd24bc56b9ef86c8cfea9a02c9f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-f-9f59ec6646\" (UID: \"0989dfd24bc56b9ef86c8cfea9a02c9f\") " pod="kube-system/kube-scheduler-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.354017 kubelet[2294]: I0813 07:12:35.353958 2294 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7199567d8b71d9f1605ce53d0d5061b2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-f-9f59ec6646\" (UID: \"7199567d8b71d9f1605ce53d0d5061b2\") " pod="kube-system/kube-apiserver-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.355076 kubelet[2294]: I0813 07:12:35.354570 2294 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ad50c9eca1b2cb2f41260c24288012c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-f-9f59ec6646\" (UID: \"5ad50c9eca1b2cb2f41260c24288012c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.355076 kubelet[2294]: I0813 07:12:35.354637 2294 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ad50c9eca1b2cb2f41260c24288012c-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-f-9f59ec6646\" (UID: \"5ad50c9eca1b2cb2f41260c24288012c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.355076 kubelet[2294]: I0813 07:12:35.354734 2294 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ad50c9eca1b2cb2f41260c24288012c-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-f-9f59ec6646\" (UID: \"5ad50c9eca1b2cb2f41260c24288012c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.355076 kubelet[2294]: I0813 07:12:35.354812 2294 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ad50c9eca1b2cb2f41260c24288012c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-f-9f59ec6646\" (UID: \"5ad50c9eca1b2cb2f41260c24288012c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.357572 kubelet[2294]: E0813 07:12:35.357512 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.236.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-f-9f59ec6646?timeout=10s\": dial tcp 64.23.236.148:6443: connect: connection refused" interval="400ms"
Aug 13 07:12:35.501638 kubelet[2294]: I0813 07:12:35.501249 2294 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.501638 kubelet[2294]: E0813 07:12:35.501577 2294 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.236.148:6443/api/v1/nodes\": dial tcp 64.23.236.148:6443: connect: connection refused" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.575118 kubelet[2294]: E0813 07:12:35.575072 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:35.576013 containerd[1590]: time="2025-08-13T07:12:35.575886542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-f-9f59ec6646,Uid:7199567d8b71d9f1605ce53d0d5061b2,Namespace:kube-system,Attempt:0,}"
Aug 13 07:12:35.577956 systemd-resolved[1471]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Aug 13 07:12:35.579354 kubelet[2294]: E0813 07:12:35.578491 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:35.579354 kubelet[2294]: E0813 07:12:35.578822 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:35.583188 containerd[1590]: time="2025-08-13T07:12:35.582859538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-f-9f59ec6646,Uid:0989dfd24bc56b9ef86c8cfea9a02c9f,Namespace:kube-system,Attempt:0,}"
Aug 13 07:12:35.583188 containerd[1590]: time="2025-08-13T07:12:35.583082998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-f-9f59ec6646,Uid:5ad50c9eca1b2cb2f41260c24288012c,Namespace:kube-system,Attempt:0,}"
Aug 13 07:12:35.758200 kubelet[2294]: E0813 07:12:35.758042 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.236.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-f-9f59ec6646?timeout=10s\": dial tcp 64.23.236.148:6443: connect: connection refused" interval="800ms"
Aug 13 07:12:35.903180 kubelet[2294]: I0813 07:12:35.903111 2294 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.903481 kubelet[2294]: E0813 07:12:35.903449 2294 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.236.148:6443/api/v1/nodes\": dial tcp 64.23.236.148:6443: connect: connection refused" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:35.950436 kubelet[2294]: W0813 07:12:35.950330 2294 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.236.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.236.148:6443: connect: connection refused
Aug 13 07:12:35.950436 kubelet[2294]: E0813 07:12:35.950407 2294 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.236.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.236.148:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:12:36.015115 kubelet[2294]: W0813 07:12:36.014883 2294 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.236.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-f-9f59ec6646&limit=500&resourceVersion=0": dial tcp 64.23.236.148:6443: connect: connection refused
Aug 13 07:12:36.015115 kubelet[2294]: E0813 07:12:36.014994 2294 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.236.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-f-9f59ec6646&limit=500&resourceVersion=0\": dial tcp 64.23.236.148:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:12:36.050082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount891767563.mount: Deactivated successfully.
Aug 13 07:12:36.052599 containerd[1590]: time="2025-08-13T07:12:36.052525613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:12:36.053922 containerd[1590]: time="2025-08-13T07:12:36.053878336Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:12:36.054310 containerd[1590]: time="2025-08-13T07:12:36.054190023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 07:12:36.055951 containerd[1590]: time="2025-08-13T07:12:36.055027834Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:12:36.055951 containerd[1590]: time="2025-08-13T07:12:36.055447647Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 13 07:12:36.055951 containerd[1590]: time="2025-08-13T07:12:36.055766949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 07:12:36.057460 containerd[1590]: time="2025-08-13T07:12:36.057420377Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:12:36.060986 containerd[1590]: time="2025-08-13T07:12:36.060133312Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.097548ms"
Aug 13 07:12:36.060986 containerd[1590]: time="2025-08-13T07:12:36.060730909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 07:12:36.062488 containerd[1590]: time="2025-08-13T07:12:36.062458593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.302577ms"
Aug 13 07:12:36.068617 containerd[1590]: time="2025-08-13T07:12:36.068558955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.594583ms"
Aug 13 07:12:36.154605 kubelet[2294]: W0813 07:12:36.154472 2294 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.236.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.236.148:6443: connect: connection refused
Aug 13 07:12:36.154605 kubelet[2294]: E0813 07:12:36.154546 2294 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.236.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.236.148:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:12:36.242690 containerd[1590]: time="2025-08-13T07:12:36.241096621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:12:36.242690 containerd[1590]: time="2025-08-13T07:12:36.241151953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:12:36.242690 containerd[1590]: time="2025-08-13T07:12:36.241167764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:36.242690 containerd[1590]: time="2025-08-13T07:12:36.241280148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:36.252801 containerd[1590]: time="2025-08-13T07:12:36.251879821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:12:36.252801 containerd[1590]: time="2025-08-13T07:12:36.252411429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:12:36.252801 containerd[1590]: time="2025-08-13T07:12:36.252439515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:36.252801 containerd[1590]: time="2025-08-13T07:12:36.252032814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:12:36.252801 containerd[1590]: time="2025-08-13T07:12:36.252098426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:12:36.252801 containerd[1590]: time="2025-08-13T07:12:36.252114506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:36.252801 containerd[1590]: time="2025-08-13T07:12:36.252265607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:36.258912 containerd[1590]: time="2025-08-13T07:12:36.258029729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:36.308876 kubelet[2294]: W0813 07:12:36.307170 2294 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.236.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.236.148:6443: connect: connection refused
Aug 13 07:12:36.308876 kubelet[2294]: E0813 07:12:36.308084 2294 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.236.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.236.148:6443: connect: connection refused" logger="UnhandledError"
Aug 13 07:12:36.376121 containerd[1590]: time="2025-08-13T07:12:36.375916312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-f-9f59ec6646,Uid:5ad50c9eca1b2cb2f41260c24288012c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5b1393319d677ac8c83fdafb336373bdf54abbba7fd77b7e67510cafb63d06c\""
Aug 13 07:12:36.376791 containerd[1590]: time="2025-08-13T07:12:36.376768802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-f-9f59ec6646,Uid:7199567d8b71d9f1605ce53d0d5061b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"aab9903b9ac5fd967c5cf48dd4885c98ceda5007556e582788602b842aa0264f\""
Aug 13 07:12:36.378298 kubelet[2294]: E0813 07:12:36.377917 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:36.380542 kubelet[2294]: E0813 07:12:36.380378 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:36.388309 containerd[1590]: time="2025-08-13T07:12:36.388274715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-f-9f59ec6646,Uid:0989dfd24bc56b9ef86c8cfea9a02c9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a73455e81c989ab07e39bc16046e4cb247c17cbee0e31464a26c15909ead842\""
Aug 13 07:12:36.388597 containerd[1590]: time="2025-08-13T07:12:36.388308782Z" level=info msg="CreateContainer within sandbox \"d5b1393319d677ac8c83fdafb336373bdf54abbba7fd77b7e67510cafb63d06c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 13 07:12:36.389036 containerd[1590]: time="2025-08-13T07:12:36.388442963Z" level=info msg="CreateContainer within sandbox \"aab9903b9ac5fd967c5cf48dd4885c98ceda5007556e582788602b842aa0264f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 13 07:12:36.390710 kubelet[2294]: E0813 07:12:36.390663 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:36.394405 containerd[1590]: time="2025-08-13T07:12:36.394269246Z" level=info msg="CreateContainer within sandbox \"4a73455e81c989ab07e39bc16046e4cb247c17cbee0e31464a26c15909ead842\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 13 07:12:36.402099 containerd[1590]: time="2025-08-13T07:12:36.402027082Z" level=info msg="CreateContainer within sandbox \"aab9903b9ac5fd967c5cf48dd4885c98ceda5007556e582788602b842aa0264f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd3e8c22ad60c07a72b4f6b1e7a0cefda0cd0618d8d835ac1906451c92bc84eb\""
Aug 13 07:12:36.402862 containerd[1590]: time="2025-08-13T07:12:36.402790499Z" level=info msg="StartContainer for \"dd3e8c22ad60c07a72b4f6b1e7a0cefda0cd0618d8d835ac1906451c92bc84eb\""
Aug 13 07:12:36.404722 containerd[1590]: time="2025-08-13T07:12:36.404693897Z" level=info msg="CreateContainer within sandbox \"d5b1393319d677ac8c83fdafb336373bdf54abbba7fd77b7e67510cafb63d06c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"837651dcf8e2815848b5acfa56c44d0beb2536efa7ee4dd4629a0116458e1b0f\""
Aug 13 07:12:36.407572 containerd[1590]: time="2025-08-13T07:12:36.406557439Z" level=info msg="StartContainer for \"837651dcf8e2815848b5acfa56c44d0beb2536efa7ee4dd4629a0116458e1b0f\""
Aug 13 07:12:36.412565 containerd[1590]: time="2025-08-13T07:12:36.412511550Z" level=info msg="CreateContainer within sandbox \"4a73455e81c989ab07e39bc16046e4cb247c17cbee0e31464a26c15909ead842\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"543dfb480324b09b2f8ba9af7b1760cd950b1a6e313bdafa674527067e25c860\""
Aug 13 07:12:36.414163 containerd[1590]: time="2025-08-13T07:12:36.414133459Z" level=info msg="StartContainer for \"543dfb480324b09b2f8ba9af7b1760cd950b1a6e313bdafa674527067e25c860\""
Aug 13 07:12:36.539956 containerd[1590]: time="2025-08-13T07:12:36.539358476Z" level=info msg="StartContainer for \"dd3e8c22ad60c07a72b4f6b1e7a0cefda0cd0618d8d835ac1906451c92bc84eb\" returns successfully"
Aug 13 07:12:36.551158 containerd[1590]: time="2025-08-13T07:12:36.551099080Z" level=info msg="StartContainer for \"543dfb480324b09b2f8ba9af7b1760cd950b1a6e313bdafa674527067e25c860\" returns successfully"
Aug 13 07:12:36.561978 kubelet[2294]: E0813 07:12:36.560316 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.236.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-f-9f59ec6646?timeout=10s\": dial tcp 64.23.236.148:6443: connect: connection refused" interval="1.6s"
Aug 13 07:12:36.572864 containerd[1590]: time="2025-08-13T07:12:36.572443411Z" level=info msg="StartContainer for \"837651dcf8e2815848b5acfa56c44d0beb2536efa7ee4dd4629a0116458e1b0f\" returns successfully"
Aug 13 07:12:36.706110 kubelet[2294]: I0813 07:12:36.704615 2294 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:36.706110 kubelet[2294]: E0813 07:12:36.704986 2294 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.236.148:6443/api/v1/nodes\": dial tcp 64.23.236.148:6443: connect: connection refused" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:37.196668 kubelet[2294]: E0813 07:12:37.196553 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:37.201005 kubelet[2294]: E0813 07:12:37.199858 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:37.205540 kubelet[2294]: E0813 07:12:37.205407 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:38.209999 kubelet[2294]: E0813 07:12:38.209850 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:38.316851 kubelet[2294]: I0813 07:12:38.312613 2294 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:38.409972 kubelet[2294]: E0813 07:12:38.408078 2294 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-f-9f59ec6646\" not found" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:38.452924 kubelet[2294]: I0813 07:12:38.452864 2294 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:38.453090 kubelet[2294]: E0813 07:12:38.452922 2294 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-f-9f59ec6646\": node \"ci-4081.3.5-f-9f59ec6646\" not found"
Aug 13 07:12:38.462726 kubelet[2294]: E0813 07:12:38.462119 2294 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-f-9f59ec6646\" not found"
Aug 13 07:12:38.562346 kubelet[2294]: E0813 07:12:38.562273 2294 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-f-9f59ec6646\" not found"
Aug 13 07:12:38.663378 kubelet[2294]: E0813 07:12:38.663303 2294 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-f-9f59ec6646\" not found"
Aug 13 07:12:39.129472 kubelet[2294]: I0813 07:12:39.129400 2294 apiserver.go:52] "Watching apiserver"
Aug 13 07:12:39.153231 kubelet[2294]: I0813 07:12:39.153174 2294 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 07:12:40.562355 systemd[1]: Reloading requested from client PID 2567 ('systemctl') (unit session-7.scope)...
Aug 13 07:12:40.562373 systemd[1]: Reloading...
Aug 13 07:12:40.652019 zram_generator::config[2606]: No configuration found.
Aug 13 07:12:40.795196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:12:40.879256 systemd[1]: Reloading finished in 316 ms.
Aug 13 07:12:40.913916 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:12:40.926099 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 07:12:40.926484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:12:40.936049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:12:41.071166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:12:41.086484 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 07:12:41.159016 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:12:41.159016 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 07:12:41.159016 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 07:12:41.159016 kubelet[2667]: I0813 07:12:41.157707 2667 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 07:12:41.167996 kubelet[2667]: I0813 07:12:41.167865 2667 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 07:12:41.169012 kubelet[2667]: I0813 07:12:41.168249 2667 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 07:12:41.169012 kubelet[2667]: I0813 07:12:41.168608 2667 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 07:12:41.170734 kubelet[2667]: I0813 07:12:41.170708 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 07:12:41.174693 kubelet[2667]: I0813 07:12:41.174665 2667 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 07:12:41.178895 kubelet[2667]: E0813 07:12:41.178864 2667 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 07:12:41.179124 kubelet[2667]: I0813 07:12:41.179105 2667 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 07:12:41.182865 kubelet[2667]: I0813 07:12:41.182826 2667 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 07:12:41.183500 kubelet[2667]: I0813 07:12:41.183477 2667 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 07:12:41.183625 kubelet[2667]: I0813 07:12:41.183599 2667 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 07:12:41.183866 kubelet[2667]: I0813 07:12:41.183629 2667 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-f-9f59ec6646","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Aug 13 07:12:41.184079 kubelet[2667]: I0813 07:12:41.183874 2667 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 07:12:41.184079 kubelet[2667]: I0813 07:12:41.183885 2667 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 07:12:41.184079 kubelet[2667]: I0813 07:12:41.183915 2667 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:12:41.184229 kubelet[2667]: I0813 07:12:41.184103 2667 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 07:12:41.184229 kubelet[2667]: I0813 07:12:41.184121 2667 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 07:12:41.184229 kubelet[2667]: I0813 07:12:41.184157 2667 kubelet.go:314] "Adding apiserver pod source"
Aug 13 07:12:41.184229 kubelet[2667]: I0813 07:12:41.184168 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 07:12:41.186432 kubelet[2667]: I0813 07:12:41.186308 2667 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 07:12:41.187007 kubelet[2667]: I0813 07:12:41.186965 2667 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 07:12:41.188559 kubelet[2667]: I0813 07:12:41.188537 2667 server.go:1274] "Started kubelet"
Aug 13 07:12:41.197346 kubelet[2667]: I0813 07:12:41.197172 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 07:12:41.209151 kubelet[2667]: I0813 07:12:41.209103 2667 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 07:12:41.210957 kubelet[2667]: I0813 07:12:41.210433 2667 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 07:12:41.212780 kubelet[2667]: I0813 07:12:41.212542 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 07:12:41.212780 kubelet[2667]: I0813 07:12:41.212729 2667 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 07:12:41.213364 kubelet[2667]: I0813 07:12:41.213011 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 07:12:41.215046 kubelet[2667]: I0813 07:12:41.215027 2667 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 07:12:41.218333 kubelet[2667]: I0813 07:12:41.217469 2667 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 07:12:41.218333 kubelet[2667]: I0813 07:12:41.217599 2667 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 07:12:41.223132 kubelet[2667]: I0813 07:12:41.223097 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 07:12:41.226236 kubelet[2667]: I0813 07:12:41.224337 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 07:12:41.226236 kubelet[2667]: I0813 07:12:41.224370 2667 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 07:12:41.226236 kubelet[2667]: I0813 07:12:41.224387 2667 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 07:12:41.226236 kubelet[2667]: E0813 07:12:41.224432 2667 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 07:12:41.230322 kubelet[2667]: E0813 07:12:41.230289 2667 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 07:12:41.233711 kubelet[2667]: I0813 07:12:41.233679 2667 factory.go:221] Registration of the containerd container factory successfully
Aug 13 07:12:41.233711 kubelet[2667]: I0813 07:12:41.233713 2667 factory.go:221] Registration of the systemd container factory successfully
Aug 13 07:12:41.233875 kubelet[2667]: I0813 07:12:41.233844 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 07:12:41.309822 kubelet[2667]: I0813 07:12:41.309514 2667 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 07:12:41.309822 kubelet[2667]: I0813 07:12:41.309533 2667 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 07:12:41.309822 kubelet[2667]: I0813 07:12:41.309554 2667 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 07:12:41.309822 kubelet[2667]: I0813 07:12:41.309706 2667 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 07:12:41.309822 kubelet[2667]: I0813 07:12:41.309716 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 07:12:41.309822 kubelet[2667]: I0813 07:12:41.309734 2667 policy_none.go:49] "None policy: Start"
Aug 13 07:12:41.311274 kubelet[2667]: I0813 07:12:41.311254 2667 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 07:12:41.311274 kubelet[2667]: I0813 07:12:41.311281 2667 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 07:12:41.311482 kubelet[2667]: I0813 07:12:41.311470 2667 state_mem.go:75] "Updated machine memory state"
Aug 13 07:12:41.313725 kubelet[2667]: I0813 07:12:41.312716 2667 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 07:12:41.313725 kubelet[2667]: I0813 07:12:41.312884 2667 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 07:12:41.313725 kubelet[2667]: I0813 07:12:41.312896 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 07:12:41.313725 kubelet[2667]: I0813 07:12:41.313510 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 07:12:41.341639 kubelet[2667]: W0813 07:12:41.341184 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 07:12:41.341639 kubelet[2667]: W0813 07:12:41.341508 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 07:12:41.342215 kubelet[2667]: W0813 07:12:41.342111 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 07:12:41.421030 kubelet[2667]: I0813 07:12:41.420541 2667 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.429156 kubelet[2667]: I0813 07:12:41.429104 2667 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.429309 kubelet[2667]: I0813 07:12:41.429223 2667 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.519372 kubelet[2667]: I0813 07:12:41.519323 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7199567d8b71d9f1605ce53d0d5061b2-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-f-9f59ec6646\" (UID: \"7199567d8b71d9f1605ce53d0d5061b2\") " pod="kube-system/kube-apiserver-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.519372 kubelet[2667]: I0813 07:12:41.519369 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7199567d8b71d9f1605ce53d0d5061b2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-f-9f59ec6646\" (UID: \"7199567d8b71d9f1605ce53d0d5061b2\") " pod="kube-system/kube-apiserver-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.519372 kubelet[2667]: I0813 07:12:41.519390 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7199567d8b71d9f1605ce53d0d5061b2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-f-9f59ec6646\" (UID: \"7199567d8b71d9f1605ce53d0d5061b2\") " pod="kube-system/kube-apiserver-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.519372 kubelet[2667]: I0813 07:12:41.519412 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ad50c9eca1b2cb2f41260c24288012c-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-f-9f59ec6646\" (UID: \"5ad50c9eca1b2cb2f41260c24288012c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.519683 kubelet[2667]: I0813 07:12:41.519430 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0989dfd24bc56b9ef86c8cfea9a02c9f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-f-9f59ec6646\" (UID: \"0989dfd24bc56b9ef86c8cfea9a02c9f\") " pod="kube-system/kube-scheduler-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.519683 kubelet[2667]: I0813 07:12:41.519445 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ad50c9eca1b2cb2f41260c24288012c-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-f-9f59ec6646\" (UID: \"5ad50c9eca1b2cb2f41260c24288012c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.519683 kubelet[2667]: I0813 07:12:41.519461 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ad50c9eca1b2cb2f41260c24288012c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-f-9f59ec6646\" (UID: \"5ad50c9eca1b2cb2f41260c24288012c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.519683 kubelet[2667]: I0813 07:12:41.519476 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ad50c9eca1b2cb2f41260c24288012c-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-f-9f59ec6646\" (UID: \"5ad50c9eca1b2cb2f41260c24288012c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.519683 kubelet[2667]: I0813 07:12:41.519491 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ad50c9eca1b2cb2f41260c24288012c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-f-9f59ec6646\" (UID: \"5ad50c9eca1b2cb2f41260c24288012c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:41.573264 sudo[2699]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 13 07:12:41.573636 sudo[2699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Aug 13 07:12:41.643158 kubelet[2667]: E0813 07:12:41.643116 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:41.644145 kubelet[2667]: E0813 07:12:41.643954 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:41.644145 kubelet[2667]: E0813 07:12:41.644091 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:42.195058 kubelet[2667]: I0813 07:12:42.194252 2667 apiserver.go:52] "Watching apiserver"
Aug 13 07:12:42.218051 kubelet[2667]: I0813 07:12:42.217954 2667 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 07:12:42.234631 sudo[2699]: pam_unix(sudo:session): session closed for user root
Aug 13 07:12:42.270090 kubelet[2667]: E0813 07:12:42.269609 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:42.270090 kubelet[2667]: E0813 07:12:42.269944 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:42.277904 kubelet[2667]: W0813 07:12:42.277760 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 07:12:42.278226 kubelet[2667]: E0813 07:12:42.278207 2667 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.5-f-9f59ec6646\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-f-9f59ec6646"
Aug 13 07:12:42.278574 kubelet[2667]: E0813 07:12:42.278499 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:42.304453 kubelet[2667]: I0813 07:12:42.303728 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-f-9f59ec6646" podStartSLOduration=1.30370952 podStartE2EDuration="1.30370952s" podCreationTimestamp="2025-08-13 07:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:42.303198995 +0000 UTC m=+1.209996566" watchObservedRunningTime="2025-08-13 07:12:42.30370952 +0000 UTC m=+1.210507155"
Aug 13 07:12:42.338327 kubelet[2667]: I0813 07:12:42.338089 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-f-9f59ec6646" podStartSLOduration=1.338068529 podStartE2EDuration="1.338068529s" podCreationTimestamp="2025-08-13 07:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:42.323510231 +0000 UTC m=+1.230307803" watchObservedRunningTime="2025-08-13 07:12:42.338068529 +0000 UTC m=+1.244866097"
Aug 13 07:12:42.338327 kubelet[2667]: I0813 07:12:42.338171 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-f-9f59ec6646" podStartSLOduration=1.338166578 podStartE2EDuration="1.338166578s" podCreationTimestamp="2025-08-13 07:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:42.337796671 +0000 UTC m=+1.244594245" watchObservedRunningTime="2025-08-13 07:12:42.338166578 +0000 UTC m=+1.244964206"
Aug 13 07:12:43.271522 kubelet[2667]: E0813 07:12:43.271482 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:43.887502 sudo[1798]: pam_unix(sudo:session): session closed for user root
Aug 13 07:12:43.892940 sshd[1791]: pam_unix(sshd:session): session closed for user core
Aug 13 07:12:43.896829 systemd[1]: sshd@6-64.23.236.148:22-139.178.89.65:58408.service: Deactivated successfully.
Aug 13 07:12:43.902443 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 07:12:43.903720 systemd-logind[1571]: Session 7 logged out. Waiting for processes to exit.
Aug 13 07:12:43.905393 systemd-logind[1571]: Removed session 7.
Aug 13 07:12:44.273084 kubelet[2667]: E0813 07:12:44.272856 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:44.456121 kubelet[2667]: E0813 07:12:44.456076 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:46.896635 kubelet[2667]: I0813 07:12:46.896392 2667 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 07:12:46.897467 containerd[1590]: time="2025-08-13T07:12:46.897222156Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 07:12:46.897752 kubelet[2667]: I0813 07:12:46.897458 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 07:12:47.759955 kubelet[2667]: I0813 07:12:47.759534 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-clustermesh-secrets\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.760901 kubelet[2667]: I0813 07:12:47.760744 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5790e06d-343a-4f96-8b0f-e903ac5914c5-lib-modules\") pod \"kube-proxy-8djwv\" (UID: \"5790e06d-343a-4f96-8b0f-e903ac5914c5\") " pod="kube-system/kube-proxy-8djwv"
Aug 13 07:12:47.760901 kubelet[2667]: I0813 07:12:47.760813 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-config-path\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.763197 kubelet[2667]: I0813 07:12:47.763064 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-xtables-lock\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.763197 kubelet[2667]: I0813 07:12:47.763149 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5790e06d-343a-4f96-8b0f-e903ac5914c5-kube-proxy\") pod \"kube-proxy-8djwv\" (UID: \"5790e06d-343a-4f96-8b0f-e903ac5914c5\") " pod="kube-system/kube-proxy-8djwv"
Aug 13 07:12:47.763456 kubelet[2667]: I0813 07:12:47.763346 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-bpf-maps\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.763456 kubelet[2667]: I0813 07:12:47.763376 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-hubble-tls\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.763456 kubelet[2667]: I0813 07:12:47.763410 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-run\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.763456 kubelet[2667]: I0813 07:12:47.763433 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-hostproc\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.764720 kubelet[2667]: I0813 07:12:47.763614 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-cgroup\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.764720 kubelet[2667]: I0813 07:12:47.763637 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-lib-modules\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.764720 kubelet[2667]: I0813 07:12:47.763668 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq59z\" (UniqueName: \"kubernetes.io/projected/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-kube-api-access-tq59z\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.764720 kubelet[2667]: I0813 07:12:47.763688 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-host-proc-sys-net\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.764720 kubelet[2667]: I0813 07:12:47.763705 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-host-proc-sys-kernel\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.764997 kubelet[2667]: I0813 07:12:47.763727 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-etc-cni-netd\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.764997 kubelet[2667]: I0813 07:12:47.763747 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cni-path\") pod \"cilium-sxgdb\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") " pod="kube-system/cilium-sxgdb"
Aug 13 07:12:47.764997 kubelet[2667]: I0813 07:12:47.763772 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5790e06d-343a-4f96-8b0f-e903ac5914c5-xtables-lock\") pod \"kube-proxy-8djwv\" (UID: \"5790e06d-343a-4f96-8b0f-e903ac5914c5\") " pod="kube-system/kube-proxy-8djwv"
Aug 13 07:12:47.764997 kubelet[2667]: I0813 07:12:47.763791 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r246\" (UniqueName: \"kubernetes.io/projected/5790e06d-343a-4f96-8b0f-e903ac5914c5-kube-api-access-7r246\") pod \"kube-proxy-8djwv\" (UID: \"5790e06d-343a-4f96-8b0f-e903ac5914c5\") " pod="kube-system/kube-proxy-8djwv"
Aug 13 07:12:48.008622 kubelet[2667]: E0813 07:12:48.008309 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:48.010186 containerd[1590]: time="2025-08-13T07:12:48.009217692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sxgdb,Uid:a66e4b1d-5ec0-4d2f-ba97-d9185807fad7,Namespace:kube-system,Attempt:0,}"
Aug 13 07:12:48.026982 kubelet[2667]: E0813 07:12:48.026752 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:48.031757 containerd[1590]: time="2025-08-13T07:12:48.031609401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8djwv,Uid:5790e06d-343a-4f96-8b0f-e903ac5914c5,Namespace:kube-system,Attempt:0,}"
Aug 13 07:12:48.093215 containerd[1590]: time="2025-08-13T07:12:48.092409897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:12:48.093215 containerd[1590]: time="2025-08-13T07:12:48.092465259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:12:48.093215 containerd[1590]: time="2025-08-13T07:12:48.092479920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:48.093215 containerd[1590]: time="2025-08-13T07:12:48.092561396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:48.111289 containerd[1590]: time="2025-08-13T07:12:48.110270808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:12:48.111289 containerd[1590]: time="2025-08-13T07:12:48.110377999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:12:48.111289 containerd[1590]: time="2025-08-13T07:12:48.110391550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:48.113950 containerd[1590]: time="2025-08-13T07:12:48.113245387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:48.167357 kubelet[2667]: I0813 07:12:48.165858 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6c4aa7a-c38a-4eea-895b-54b8ff720f39-cilium-config-path\") pod \"cilium-operator-5d85765b45-mbwwz\" (UID: \"b6c4aa7a-c38a-4eea-895b-54b8ff720f39\") " pod="kube-system/cilium-operator-5d85765b45-mbwwz"
Aug 13 07:12:48.167357 kubelet[2667]: I0813 07:12:48.165894 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84bs9\" (UniqueName: \"kubernetes.io/projected/b6c4aa7a-c38a-4eea-895b-54b8ff720f39-kube-api-access-84bs9\") pod \"cilium-operator-5d85765b45-mbwwz\" (UID: \"b6c4aa7a-c38a-4eea-895b-54b8ff720f39\") " pod="kube-system/cilium-operator-5d85765b45-mbwwz"
Aug 13 07:12:48.182713 containerd[1590]: time="2025-08-13T07:12:48.182668936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sxgdb,Uid:a66e4b1d-5ec0-4d2f-ba97-d9185807fad7,Namespace:kube-system,Attempt:0,} returns sandbox id \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\""
Aug 13 07:12:48.183610 kubelet[2667]: E0813 07:12:48.183584 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:48.191496 containerd[1590]: time="2025-08-13T07:12:48.191452621Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 07:12:48.203964 containerd[1590]: time="2025-08-13T07:12:48.203829526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8djwv,Uid:5790e06d-343a-4f96-8b0f-e903ac5914c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fbae668d0c2be13b331510146241f0b8c585ac336d7589fa67f8ffbdfcac8c2\""
Aug 13 07:12:48.205007 kubelet[2667]: E0813 07:12:48.204772 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:48.207917 containerd[1590]: time="2025-08-13T07:12:48.207778666Z" level=info msg="CreateContainer within sandbox \"9fbae668d0c2be13b331510146241f0b8c585ac336d7589fa67f8ffbdfcac8c2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 07:12:48.222587 containerd[1590]: time="2025-08-13T07:12:48.222436194Z" level=info msg="CreateContainer within sandbox \"9fbae668d0c2be13b331510146241f0b8c585ac336d7589fa67f8ffbdfcac8c2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cbfb94a8051058891e60678ad48a72d153849d281567980502684bf195a60625\""
Aug 13 07:12:48.225140 containerd[1590]: time="2025-08-13T07:12:48.224563581Z" level=info msg="StartContainer for \"cbfb94a8051058891e60678ad48a72d153849d281567980502684bf195a60625\""
Aug 13 07:12:48.294458 containerd[1590]: time="2025-08-13T07:12:48.294197401Z" level=info msg="StartContainer for \"cbfb94a8051058891e60678ad48a72d153849d281567980502684bf195a60625\" returns successfully"
Aug 13 07:12:48.347352 kubelet[2667]: E0813 07:12:48.346637 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:48.415122 kubelet[2667]: E0813 07:12:48.415080 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:48.417026 containerd[1590]: time="2025-08-13T07:12:48.416696807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mbwwz,Uid:b6c4aa7a-c38a-4eea-895b-54b8ff720f39,Namespace:kube-system,Attempt:0,}"
Aug 13 07:12:48.456910 containerd[1590]: time="2025-08-13T07:12:48.456803467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:12:48.457204 containerd[1590]: time="2025-08-13T07:12:48.456881009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:12:48.457204 containerd[1590]: time="2025-08-13T07:12:48.456897331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:48.457204 containerd[1590]: time="2025-08-13T07:12:48.457051598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:12:48.538726 containerd[1590]: time="2025-08-13T07:12:48.538523499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mbwwz,Uid:b6c4aa7a-c38a-4eea-895b-54b8ff720f39,Namespace:kube-system,Attempt:0,} returns sandbox id \"75bd86b2a2386992fc9adb1023145d8d5ec0b78761be846d32fc54eda35718c5\""
Aug 13 07:12:48.539366 kubelet[2667]: E0813 07:12:48.539343 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:49.302061 kubelet[2667]: E0813 07:12:49.301511 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:49.304316 kubelet[2667]: E0813 07:12:49.303072 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:49.332079 kubelet[2667]: I0813 07:12:49.330474 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8djwv" podStartSLOduration=2.330453349 podStartE2EDuration="2.330453349s" podCreationTimestamp="2025-08-13 07:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:49.324709811 +0000 UTC m=+8.231507382" watchObservedRunningTime="2025-08-13 07:12:49.330453349 +0000 UTC m=+8.237250921"
Aug 13 07:12:50.303487 kubelet[2667]: E0813 07:12:50.303445 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:52.103340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1328139795.mount: Deactivated successfully.
Aug 13 07:12:53.510042 kubelet[2667]: E0813 07:12:53.509993 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:54.116014 containerd[1590]: time="2025-08-13T07:12:54.115944412Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:54.118960 containerd[1590]: time="2025-08-13T07:12:54.118355497Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Aug 13 07:12:54.122676 containerd[1590]: time="2025-08-13T07:12:54.122606220Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:54.127359 containerd[1590]: time="2025-08-13T07:12:54.127308576Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.935804311s"
Aug 13 07:12:54.127634 containerd[1590]: time="2025-08-13T07:12:54.127364056Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 07:12:54.131065 containerd[1590]: time="2025-08-13T07:12:54.130465946Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 07:12:54.138466 containerd[1590]: time="2025-08-13T07:12:54.138318962Z" level=info msg="CreateContainer within sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 07:12:54.218213 containerd[1590]: time="2025-08-13T07:12:54.217896437Z" level=info msg="CreateContainer within sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\""
Aug 13 07:12:54.219396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314052493.mount: Deactivated successfully.
Aug 13 07:12:54.222191 containerd[1590]: time="2025-08-13T07:12:54.221531693Z" level=info msg="StartContainer for \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\""
Aug 13 07:12:54.347836 systemd[1]: run-containerd-runc-k8s.io-4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56-runc.SE8Cuh.mount: Deactivated successfully.
Aug 13 07:12:54.388907 containerd[1590]: time="2025-08-13T07:12:54.387989826Z" level=info msg="StartContainer for \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\" returns successfully"
Aug 13 07:12:54.477579 kubelet[2667]: E0813 07:12:54.475674 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:54.595844 containerd[1590]: time="2025-08-13T07:12:54.573708291Z" level=info msg="shim disconnected" id=4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56 namespace=k8s.io
Aug 13 07:12:54.595844 containerd[1590]: time="2025-08-13T07:12:54.595626288Z" level=warning msg="cleaning up after shim disconnected" id=4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56 namespace=k8s.io
Aug 13 07:12:54.595844 containerd[1590]: time="2025-08-13T07:12:54.595650292Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:12:55.210530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56-rootfs.mount: Deactivated successfully.
Aug 13 07:12:55.308847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046520901.mount: Deactivated successfully.
Aug 13 07:12:55.328602 kubelet[2667]: E0813 07:12:55.328467 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:55.334752 containerd[1590]: time="2025-08-13T07:12:55.334398899Z" level=info msg="CreateContainer within sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 07:12:55.393831 containerd[1590]: time="2025-08-13T07:12:55.393790365Z" level=info msg="CreateContainer within sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\""
Aug 13 07:12:55.396331 containerd[1590]: time="2025-08-13T07:12:55.396288482Z" level=info msg="StartContainer for \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\""
Aug 13 07:12:55.501349 containerd[1590]: time="2025-08-13T07:12:55.500791177Z" level=info msg="StartContainer for \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\" returns successfully"
Aug 13 07:12:55.515346 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:12:55.516373 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:12:55.516456 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:12:55.527438 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:12:55.570419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:12:55.580028 containerd[1590]: time="2025-08-13T07:12:55.579602239Z" level=info msg="shim disconnected" id=e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39 namespace=k8s.io
Aug 13 07:12:55.580505 containerd[1590]: time="2025-08-13T07:12:55.580278295Z" level=warning msg="cleaning up after shim disconnected" id=e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39 namespace=k8s.io
Aug 13 07:12:55.580505 containerd[1590]: time="2025-08-13T07:12:55.580314383Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:12:55.939593 containerd[1590]: time="2025-08-13T07:12:55.939462697Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:55.940416 containerd[1590]: time="2025-08-13T07:12:55.940365448Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Aug 13 07:12:55.941372 containerd[1590]: time="2025-08-13T07:12:55.940659920Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:12:55.942323 containerd[1590]: time="2025-08-13T07:12:55.942196473Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.811678015s"
Aug 13 07:12:55.942323 containerd[1590]: time="2025-08-13T07:12:55.942233423Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 07:12:55.953381 containerd[1590]: time="2025-08-13T07:12:55.953246263Z" level=info msg="CreateContainer within sandbox \"75bd86b2a2386992fc9adb1023145d8d5ec0b78761be846d32fc54eda35718c5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 07:12:55.969815 containerd[1590]: time="2025-08-13T07:12:55.969524158Z" level=info msg="CreateContainer within sandbox \"75bd86b2a2386992fc9adb1023145d8d5ec0b78761be846d32fc54eda35718c5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\""
Aug 13 07:12:55.970421 containerd[1590]: time="2025-08-13T07:12:55.970332571Z" level=info msg="StartContainer for \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\""
Aug 13 07:12:56.039214 containerd[1590]: time="2025-08-13T07:12:56.039144960Z" level=info msg="StartContainer for \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\" returns successfully"
Aug 13 07:12:56.344308 kubelet[2667]: E0813 07:12:56.342790 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:56.353505 containerd[1590]: time="2025-08-13T07:12:56.352811117Z" level=info msg="CreateContainer within sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 07:12:56.356683 kubelet[2667]: E0813 07:12:56.356623 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:56.417846 containerd[1590]: time="2025-08-13T07:12:56.417747742Z" level=info msg="CreateContainer within sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\""
Aug 13 07:12:56.422209 containerd[1590]: time="2025-08-13T07:12:56.420869217Z" level=info msg="StartContainer for \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\""
Aug 13 07:12:56.522872 containerd[1590]: time="2025-08-13T07:12:56.522821448Z" level=info msg="StartContainer for \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\" returns successfully"
Aug 13 07:12:56.562285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae-rootfs.mount: Deactivated successfully.
Aug 13 07:12:56.578848 containerd[1590]: time="2025-08-13T07:12:56.578286383Z" level=info msg="shim disconnected" id=0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae namespace=k8s.io
Aug 13 07:12:56.578848 containerd[1590]: time="2025-08-13T07:12:56.578347791Z" level=warning msg="cleaning up after shim disconnected" id=0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae namespace=k8s.io
Aug 13 07:12:56.578848 containerd[1590]: time="2025-08-13T07:12:56.578374676Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:12:56.596045 containerd[1590]: time="2025-08-13T07:12:56.595031775Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:12:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 07:12:57.357649 kubelet[2667]: E0813 07:12:57.357607 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:57.358336 kubelet[2667]: E0813 07:12:57.358316 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:57.372746 containerd[1590]: time="2025-08-13T07:12:57.372704541Z" level=info msg="CreateContainer within sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 07:12:57.385041 kubelet[2667]: I0813 07:12:57.382733 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-mbwwz" podStartSLOduration=1.972748484 podStartE2EDuration="9.382710672s" podCreationTimestamp="2025-08-13 07:12:48 +0000 UTC" firstStartedPulling="2025-08-13 07:12:48.5403514 +0000 UTC m=+7.447148963" lastFinishedPulling="2025-08-13 07:12:55.950313588 +0000 UTC m=+14.857111151" observedRunningTime="2025-08-13 07:12:56.48091811 +0000 UTC m=+15.387715688" watchObservedRunningTime="2025-08-13 07:12:57.382710672 +0000 UTC m=+16.289508243"
Aug 13 07:12:57.399944 containerd[1590]: time="2025-08-13T07:12:57.399726859Z" level=info msg="CreateContainer within sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\""
Aug 13 07:12:57.403040 containerd[1590]: time="2025-08-13T07:12:57.402294766Z" level=info msg="StartContainer for \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\""
Aug 13 07:12:57.470026 containerd[1590]: time="2025-08-13T07:12:57.469925053Z" level=info msg="StartContainer for \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\" returns successfully"
Aug 13 07:12:57.493197 containerd[1590]: time="2025-08-13T07:12:57.492986005Z" level=info msg="shim disconnected" id=b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6 namespace=k8s.io
Aug 13 07:12:57.493197 containerd[1590]: time="2025-08-13T07:12:57.493034506Z" level=warning msg="cleaning up after shim disconnected" id=b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6 namespace=k8s.io
Aug 13 07:12:57.493197 containerd[1590]: time="2025-08-13T07:12:57.493044878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:12:58.364171 kubelet[2667]: E0813 07:12:58.362958 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:58.378963 containerd[1590]: time="2025-08-13T07:12:58.378346120Z" level=info msg="CreateContainer within sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 07:12:58.397539 containerd[1590]: time="2025-08-13T07:12:58.396966389Z" level=info msg="CreateContainer within sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\""
Aug 13 07:12:58.397987 containerd[1590]: time="2025-08-13T07:12:58.397826686Z" level=info msg="StartContainer for \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\""
Aug 13 07:12:58.401006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6-rootfs.mount: Deactivated successfully.
Aug 13 07:12:58.472225 containerd[1590]: time="2025-08-13T07:12:58.469093629Z" level=info msg="StartContainer for \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\" returns successfully"
Aug 13 07:12:58.523003 update_engine[1574]: I20250813 07:12:58.522038 1574 update_attempter.cc:509] Updating boot flags...
Aug 13 07:12:58.610967 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3414)
Aug 13 07:12:58.725974 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3415)
Aug 13 07:12:58.773436 kubelet[2667]: I0813 07:12:58.772546 2667 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Aug 13 07:12:58.848233 kubelet[2667]: I0813 07:12:58.847994 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d660b1a-17ec-4188-868e-4131b938b3de-config-volume\") pod \"coredns-7c65d6cfc9-4jrww\" (UID: \"3d660b1a-17ec-4188-868e-4131b938b3de\") " pod="kube-system/coredns-7c65d6cfc9-4jrww"
Aug 13 07:12:58.848233 kubelet[2667]: I0813 07:12:58.848063 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp9pl\" (UniqueName: \"kubernetes.io/projected/3d660b1a-17ec-4188-868e-4131b938b3de-kube-api-access-fp9pl\") pod \"coredns-7c65d6cfc9-4jrww\" (UID: \"3d660b1a-17ec-4188-868e-4131b938b3de\") " pod="kube-system/coredns-7c65d6cfc9-4jrww"
Aug 13 07:12:58.848233 kubelet[2667]: I0813 07:12:58.848094 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzl4t\" (UniqueName: \"kubernetes.io/projected/e089f882-2e1c-4f7d-bc94-5255f44c59db-kube-api-access-fzl4t\") pod \"coredns-7c65d6cfc9-6kxjg\" (UID: \"e089f882-2e1c-4f7d-bc94-5255f44c59db\") " pod="kube-system/coredns-7c65d6cfc9-6kxjg"
Aug 13 07:12:58.848233 kubelet[2667]: I0813 07:12:58.848124 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e089f882-2e1c-4f7d-bc94-5255f44c59db-config-volume\") pod \"coredns-7c65d6cfc9-6kxjg\" (UID: \"e089f882-2e1c-4f7d-bc94-5255f44c59db\") " pod="kube-system/coredns-7c65d6cfc9-6kxjg"
Aug 13 07:12:59.120412 kubelet[2667]: E0813 07:12:59.119208 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:59.120412 kubelet[2667]: E0813 07:12:59.120221 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:59.122125 containerd[1590]: time="2025-08-13T07:12:59.122082171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6kxjg,Uid:e089f882-2e1c-4f7d-bc94-5255f44c59db,Namespace:kube-system,Attempt:0,}"
Aug 13 07:12:59.122954 containerd[1590]: time="2025-08-13T07:12:59.122918338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4jrww,Uid:3d660b1a-17ec-4188-868e-4131b938b3de,Namespace:kube-system,Attempt:0,}"
Aug 13 07:12:59.369259 kubelet[2667]: E0813 07:12:59.369184 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:12:59.394197 kubelet[2667]: I0813 07:12:59.392014 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sxgdb" podStartSLOduration=6.450393978 podStartE2EDuration="12.391989425s" podCreationTimestamp="2025-08-13 07:12:47 +0000 UTC" firstStartedPulling="2025-08-13 07:12:48.187946733 +0000 UTC m=+7.094744296" lastFinishedPulling="2025-08-13 07:12:54.12954218 +0000 UTC m=+13.036339743" observedRunningTime="2025-08-13 07:12:59.390055169 +0000 UTC m=+18.296852741" watchObservedRunningTime="2025-08-13 07:12:59.391989425 +0000 UTC m=+18.298786996"
Aug 13 07:13:00.370923 kubelet[2667]: E0813 07:13:00.370797 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:00.757955 systemd-networkd[1220]: cilium_host: Link UP
Aug 13 07:13:00.763332 systemd-networkd[1220]: cilium_net: Link UP
Aug 13 07:13:00.763341 systemd-networkd[1220]: cilium_net: Gained carrier
Aug 13 07:13:00.764721 systemd-networkd[1220]: cilium_host: Gained carrier
Aug 13 07:13:00.920354 systemd-networkd[1220]: cilium_vxlan: Link UP
Aug 13 07:13:00.920367 systemd-networkd[1220]: cilium_vxlan: Gained carrier
Aug 13 07:13:00.979165 systemd-networkd[1220]: cilium_net: Gained IPv6LL
Aug 13 07:13:01.372721 kubelet[2667]: E0813 07:13:01.372651 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:01.415132 kernel: NET: Registered PF_ALG protocol family
Aug 13 07:13:01.475191 systemd-networkd[1220]: cilium_host: Gained IPv6LL
Aug 13 07:13:02.179268 systemd-networkd[1220]: cilium_vxlan: Gained IPv6LL
Aug 13 07:13:02.300947 systemd-networkd[1220]: lxc_health: Link UP
Aug 13 07:13:02.309791 systemd-networkd[1220]: lxc_health: Gained carrier
Aug 13 07:13:02.581097 kubelet[2667]: E0813 07:13:02.580817 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:02.689983 systemd-networkd[1220]: lxc3960ef97ca56: Link UP
Aug 13 07:13:02.697472 kernel: eth0: renamed from tmp67f2f
Aug 13 07:13:02.706092 systemd-networkd[1220]: lxc3960ef97ca56: Gained carrier
Aug 13 07:13:02.737596 systemd-networkd[1220]: lxc12d1770659c1: Link UP
Aug 13 07:13:02.746151 kernel: eth0: renamed from tmp135db
Aug 13 07:13:02.752580 systemd-networkd[1220]: lxc12d1770659c1: Gained carrier
Aug 13 07:13:03.907159 systemd-networkd[1220]: lxc12d1770659c1: Gained IPv6LL
Aug 13 07:13:04.010676 kubelet[2667]: E0813 07:13:04.010438 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:04.227724 systemd-networkd[1220]: lxc_health: Gained IPv6LL
Aug 13 07:13:04.392383 kubelet[2667]: E0813 07:13:04.392348 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:04.675783 systemd-networkd[1220]: lxc3960ef97ca56: Gained IPv6LL
Aug 13 07:13:06.915971 containerd[1590]: time="2025-08-13T07:13:06.913977135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:13:06.915971 containerd[1590]: time="2025-08-13T07:13:06.914043244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:13:06.915971 containerd[1590]: time="2025-08-13T07:13:06.914055109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:13:06.915971 containerd[1590]: time="2025-08-13T07:13:06.914145554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:13:07.012977 containerd[1590]: time="2025-08-13T07:13:07.012177479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:13:07.012977 containerd[1590]: time="2025-08-13T07:13:07.012234906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:13:07.012977 containerd[1590]: time="2025-08-13T07:13:07.012245437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:13:07.012977 containerd[1590]: time="2025-08-13T07:13:07.012890742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4jrww,Uid:3d660b1a-17ec-4188-868e-4131b938b3de,Namespace:kube-system,Attempt:0,} returns sandbox id \"67f2fec4b98f1eea64c1bab534838193ee339dab3e82de24267384b0672b074f\""
Aug 13 07:13:07.017317 kubelet[2667]: E0813 07:13:07.017286 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:07.021752 containerd[1590]: time="2025-08-13T07:13:07.018103203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:13:07.029771 containerd[1590]: time="2025-08-13T07:13:07.028107336Z" level=info msg="CreateContainer within sandbox \"67f2fec4b98f1eea64c1bab534838193ee339dab3e82de24267384b0672b074f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 07:13:07.085482 containerd[1590]: time="2025-08-13T07:13:07.084871720Z" level=info msg="CreateContainer within sandbox \"67f2fec4b98f1eea64c1bab534838193ee339dab3e82de24267384b0672b074f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3073dffb68aa3d671648ba07fb1393e217add8edb9f64df034c7d2b5edefa72a\""
Aug 13 07:13:07.086525 containerd[1590]: time="2025-08-13T07:13:07.086475289Z" level=info msg="StartContainer for \"3073dffb68aa3d671648ba07fb1393e217add8edb9f64df034c7d2b5edefa72a\""
Aug 13 07:13:07.144788 containerd[1590]: time="2025-08-13T07:13:07.144697964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6kxjg,Uid:e089f882-2e1c-4f7d-bc94-5255f44c59db,Namespace:kube-system,Attempt:0,} returns sandbox id \"135db2c21bf08e58a2f4a8d58c8e46710373469fd7ccbceea44bbccff5256428\""
Aug 13 07:13:07.146184 kubelet[2667]: E0813 07:13:07.146148 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:07.153506 containerd[1590]: time="2025-08-13T07:13:07.151972038Z" level=info msg="CreateContainer within sandbox \"135db2c21bf08e58a2f4a8d58c8e46710373469fd7ccbceea44bbccff5256428\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 07:13:07.178505 containerd[1590]: time="2025-08-13T07:13:07.177924984Z" level=info msg="CreateContainer within sandbox \"135db2c21bf08e58a2f4a8d58c8e46710373469fd7ccbceea44bbccff5256428\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06873e0ddd803e2f37511b83b02ae593d813b081fec75d31166f98d087de1dcb\""
Aug 13 07:13:07.179700 containerd[1590]: time="2025-08-13T07:13:07.179586113Z" level=info msg="StartContainer for \"06873e0ddd803e2f37511b83b02ae593d813b081fec75d31166f98d087de1dcb\""
Aug 13 07:13:07.270749 containerd[1590]: time="2025-08-13T07:13:07.270691415Z" level=info msg="StartContainer for \"3073dffb68aa3d671648ba07fb1393e217add8edb9f64df034c7d2b5edefa72a\" returns successfully"
Aug 13 07:13:07.300417 containerd[1590]: time="2025-08-13T07:13:07.300294910Z" level=info msg="StartContainer for \"06873e0ddd803e2f37511b83b02ae593d813b081fec75d31166f98d087de1dcb\" returns successfully"
Aug 13 07:13:07.418573 kubelet[2667]: E0813 07:13:07.417123 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:07.430597 kubelet[2667]: E0813 07:13:07.430216 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:07.448907 kubelet[2667]: I0813 07:13:07.448834 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6kxjg" podStartSLOduration=19.448810229 podStartE2EDuration="19.448810229s" podCreationTimestamp="2025-08-13 07:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:13:07.445173877 +0000 UTC m=+26.351971448" watchObservedRunningTime="2025-08-13 07:13:07.448810229 +0000 UTC m=+26.355607794"
Aug 13 07:13:07.924946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156749894.mount: Deactivated successfully.
Aug 13 07:13:08.432707 kubelet[2667]: E0813 07:13:08.431866 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:08.432707 kubelet[2667]: E0813 07:13:08.432340 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:08.448766 kubelet[2667]: I0813 07:13:08.448683 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4jrww" podStartSLOduration=20.448661526 podStartE2EDuration="20.448661526s" podCreationTimestamp="2025-08-13 07:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:13:07.483494076 +0000 UTC m=+26.390291648" watchObservedRunningTime="2025-08-13 07:13:08.448661526 +0000 UTC m=+27.355459097"
Aug 13 07:13:09.433753 kubelet[2667]: E0813 07:13:09.433266 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:09.433753 kubelet[2667]: E0813 07:13:09.433694 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:13:19.613283 systemd[1]: Started sshd@7-64.23.236.148:22-139.178.89.65:60102.service - OpenSSH per-connection server daemon (139.178.89.65:60102).
Aug 13 07:13:19.673881 sshd[4048]: Accepted publickey for core from 139.178.89.65 port 60102 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:19.676401 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:19.682773 systemd-logind[1571]: New session 8 of user core.
Aug 13 07:13:19.692306 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 07:13:20.268354 sshd[4048]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:20.272388 systemd[1]: sshd@7-64.23.236.148:22-139.178.89.65:60102.service: Deactivated successfully.
Aug 13 07:13:20.272805 systemd-logind[1571]: Session 8 logged out. Waiting for processes to exit.
Aug 13 07:13:20.277763 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 07:13:20.279713 systemd-logind[1571]: Removed session 8.
Aug 13 07:13:25.279895 systemd[1]: Started sshd@8-64.23.236.148:22-139.178.89.65:60106.service - OpenSSH per-connection server daemon (139.178.89.65:60106).
Aug 13 07:13:25.342895 sshd[4063]: Accepted publickey for core from 139.178.89.65 port 60106 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:25.344770 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:25.350244 systemd-logind[1571]: New session 9 of user core.
Aug 13 07:13:25.355490 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 07:13:25.500879 sshd[4063]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:25.504404 systemd[1]: sshd@8-64.23.236.148:22-139.178.89.65:60106.service: Deactivated successfully.
Aug 13 07:13:25.510001 systemd-logind[1571]: Session 9 logged out. Waiting for processes to exit.
Aug 13 07:13:25.510747 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 07:13:25.512479 systemd-logind[1571]: Removed session 9.
Aug 13 07:13:30.511293 systemd[1]: Started sshd@9-64.23.236.148:22-139.178.89.65:44714.service - OpenSSH per-connection server daemon (139.178.89.65:44714).
Aug 13 07:13:30.563850 sshd[4078]: Accepted publickey for core from 139.178.89.65 port 44714 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:30.565773 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:30.571443 systemd-logind[1571]: New session 10 of user core.
Aug 13 07:13:30.578452 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 07:13:30.714673 sshd[4078]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:30.718499 systemd[1]: sshd@9-64.23.236.148:22-139.178.89.65:44714.service: Deactivated successfully.
Aug 13 07:13:30.723616 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 07:13:30.724852 systemd-logind[1571]: Session 10 logged out. Waiting for processes to exit.
Aug 13 07:13:30.726405 systemd-logind[1571]: Removed session 10.
Aug 13 07:13:35.727352 systemd[1]: Started sshd@10-64.23.236.148:22-139.178.89.65:44730.service - OpenSSH per-connection server daemon (139.178.89.65:44730).
Aug 13 07:13:35.776593 sshd[4092]: Accepted publickey for core from 139.178.89.65 port 44730 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:35.779175 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:35.786226 systemd-logind[1571]: New session 11 of user core.
Aug 13 07:13:35.790755 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 07:13:35.932855 sshd[4092]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:35.937513 systemd[1]: sshd@10-64.23.236.148:22-139.178.89.65:44730.service: Deactivated successfully.
Aug 13 07:13:35.940536 systemd-logind[1571]: Session 11 logged out. Waiting for processes to exit.
Aug 13 07:13:35.942471 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 07:13:35.946263 systemd[1]: Started sshd@11-64.23.236.148:22-139.178.89.65:44734.service - OpenSSH per-connection server daemon (139.178.89.65:44734).
Aug 13 07:13:35.947166 systemd-logind[1571]: Removed session 11.
Aug 13 07:13:36.008381 sshd[4106]: Accepted publickey for core from 139.178.89.65 port 44734 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:36.010150 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:36.016216 systemd-logind[1571]: New session 12 of user core.
Aug 13 07:13:36.021261 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 07:13:36.216590 sshd[4106]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:36.234565 systemd[1]: Started sshd@12-64.23.236.148:22-139.178.89.65:44742.service - OpenSSH per-connection server daemon (139.178.89.65:44742).
Aug 13 07:13:36.235049 systemd[1]: sshd@11-64.23.236.148:22-139.178.89.65:44734.service: Deactivated successfully.
Aug 13 07:13:36.247074 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 07:13:36.250241 systemd-logind[1571]: Session 12 logged out. Waiting for processes to exit.
Aug 13 07:13:36.261635 systemd-logind[1571]: Removed session 12.
Aug 13 07:13:36.315367 sshd[4116]: Accepted publickey for core from 139.178.89.65 port 44742 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:36.317313 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:36.323288 systemd-logind[1571]: New session 13 of user core.
Aug 13 07:13:36.328264 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 07:13:36.474762 sshd[4116]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:36.479184 systemd[1]: sshd@12-64.23.236.148:22-139.178.89.65:44742.service: Deactivated successfully.
Aug 13 07:13:36.483238 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 07:13:36.483692 systemd-logind[1571]: Session 13 logged out. Waiting for processes to exit.
Aug 13 07:13:36.484999 systemd-logind[1571]: Removed session 13.
Aug 13 07:13:41.486283 systemd[1]: Started sshd@13-64.23.236.148:22-139.178.89.65:46570.service - OpenSSH per-connection server daemon (139.178.89.65:46570).
Aug 13 07:13:41.529657 sshd[4135]: Accepted publickey for core from 139.178.89.65 port 46570 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:41.531523 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:41.539333 systemd-logind[1571]: New session 14 of user core.
Aug 13 07:13:41.546915 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 07:13:41.680121 sshd[4135]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:41.683506 systemd[1]: sshd@13-64.23.236.148:22-139.178.89.65:46570.service: Deactivated successfully.
Aug 13 07:13:41.687250 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 07:13:41.687774 systemd-logind[1571]: Session 14 logged out. Waiting for processes to exit.
Aug 13 07:13:41.689616 systemd-logind[1571]: Removed session 14.
Aug 13 07:13:46.689625 systemd[1]: Started sshd@14-64.23.236.148:22-139.178.89.65:46584.service - OpenSSH per-connection server daemon (139.178.89.65:46584).
Aug 13 07:13:46.749149 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 46584 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:46.751350 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:46.759611 systemd-logind[1571]: New session 15 of user core.
Aug 13 07:13:46.763373 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 07:13:46.901545 sshd[4149]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:46.907338 systemd[1]: sshd@14-64.23.236.148:22-139.178.89.65:46584.service: Deactivated successfully.
Aug 13 07:13:46.919327 systemd[1]: Started sshd@15-64.23.236.148:22-139.178.89.65:46592.service - OpenSSH per-connection server daemon (139.178.89.65:46592).
Aug 13 07:13:46.921472 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 07:13:46.924035 systemd-logind[1571]: Session 15 logged out. Waiting for processes to exit.
Aug 13 07:13:46.925998 systemd-logind[1571]: Removed session 15.
Aug 13 07:13:46.985704 sshd[4163]: Accepted publickey for core from 139.178.89.65 port 46592 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:46.987857 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:46.994753 systemd-logind[1571]: New session 16 of user core.
Aug 13 07:13:47.002528 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 07:13:47.335313 sshd[4163]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:47.343797 systemd[1]: Started sshd@16-64.23.236.148:22-139.178.89.65:46598.service - OpenSSH per-connection server daemon (139.178.89.65:46598).
Aug 13 07:13:47.344388 systemd[1]: sshd@15-64.23.236.148:22-139.178.89.65:46592.service: Deactivated successfully.
Aug 13 07:13:47.352621 systemd-logind[1571]: Session 16 logged out. Waiting for processes to exit.
Aug 13 07:13:47.352699 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 07:13:47.355797 systemd-logind[1571]: Removed session 16.
Aug 13 07:13:47.402846 sshd[4171]: Accepted publickey for core from 139.178.89.65 port 46598 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:47.404726 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:47.410824 systemd-logind[1571]: New session 17 of user core.
Aug 13 07:13:47.415278 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:13:48.966021 sshd[4171]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:48.983447 systemd[1]: Started sshd@17-64.23.236.148:22-139.178.89.65:39014.service - OpenSSH per-connection server daemon (139.178.89.65:39014).
Aug 13 07:13:48.983955 systemd[1]: sshd@16-64.23.236.148:22-139.178.89.65:46598.service: Deactivated successfully.
Aug 13 07:13:48.999263 systemd-logind[1571]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:13:49.000163 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:13:49.007915 systemd-logind[1571]: Removed session 17.
Aug 13 07:13:49.061627 sshd[4192]: Accepted publickey for core from 139.178.89.65 port 39014 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:49.063395 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:49.068924 systemd-logind[1571]: New session 18 of user core.
Aug 13 07:13:49.076426 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:13:49.390499 sshd[4192]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:49.404082 systemd[1]: Started sshd@18-64.23.236.148:22-139.178.89.65:39018.service - OpenSSH per-connection server daemon (139.178.89.65:39018).
Aug 13 07:13:49.405502 systemd[1]: sshd@17-64.23.236.148:22-139.178.89.65:39014.service: Deactivated successfully.
Aug 13 07:13:49.412105 systemd-logind[1571]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:13:49.419841 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:13:49.422195 systemd-logind[1571]: Removed session 18.
Aug 13 07:13:49.462563 sshd[4204]: Accepted publickey for core from 139.178.89.65 port 39018 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:49.464287 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:49.470302 systemd-logind[1571]: New session 19 of user core.
Aug 13 07:13:49.476306 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 07:13:49.602683 sshd[4204]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:49.606885 systemd[1]: sshd@18-64.23.236.148:22-139.178.89.65:39018.service: Deactivated successfully.
Aug 13 07:13:49.611728 systemd-logind[1571]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:13:49.613539 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:13:49.614675 systemd-logind[1571]: Removed session 19.
Aug 13 07:13:54.613277 systemd[1]: Started sshd@19-64.23.236.148:22-139.178.89.65:39024.service - OpenSSH per-connection server daemon (139.178.89.65:39024).
Aug 13 07:13:54.658505 sshd[4223]: Accepted publickey for core from 139.178.89.65 port 39024 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:54.660708 sshd[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:54.668652 systemd-logind[1571]: New session 20 of user core.
Aug 13 07:13:54.673411 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 07:13:54.808634 sshd[4223]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:54.813681 systemd[1]: sshd@19-64.23.236.148:22-139.178.89.65:39024.service: Deactivated successfully.
Aug 13 07:13:54.817890 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 07:13:54.818689 systemd-logind[1571]: Session 20 logged out. Waiting for processes to exit.
Aug 13 07:13:54.820296 systemd-logind[1571]: Removed session 20.
Aug 13 07:13:59.817247 systemd[1]: Started sshd@20-64.23.236.148:22-139.178.89.65:35998.service - OpenSSH per-connection server daemon (139.178.89.65:35998).
Aug 13 07:13:59.861593 sshd[4237]: Accepted publickey for core from 139.178.89.65 port 35998 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:59.863517 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:59.869463 systemd-logind[1571]: New session 21 of user core.
Aug 13 07:13:59.876492 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 07:14:00.006278 sshd[4237]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:00.011274 systemd[1]: sshd@20-64.23.236.148:22-139.178.89.65:35998.service: Deactivated successfully.
Aug 13 07:14:00.016460 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 07:14:00.016659 systemd-logind[1571]: Session 21 logged out. Waiting for processes to exit.
Aug 13 07:14:00.018875 systemd-logind[1571]: Removed session 21.
Aug 13 07:14:05.016442 systemd[1]: Started sshd@21-64.23.236.148:22-139.178.89.65:36010.service - OpenSSH per-connection server daemon (139.178.89.65:36010).
Aug 13 07:14:05.063289 sshd[4250]: Accepted publickey for core from 139.178.89.65 port 36010 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:14:05.065219 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:05.070607 systemd-logind[1571]: New session 22 of user core.
Aug 13 07:14:05.074460 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 07:14:05.208190 sshd[4250]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:05.212759 systemd[1]: sshd@21-64.23.236.148:22-139.178.89.65:36010.service: Deactivated successfully.
Aug 13 07:14:05.216669 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 07:14:05.216798 systemd-logind[1571]: Session 22 logged out. Waiting for processes to exit.
Aug 13 07:14:05.219264 systemd-logind[1571]: Removed session 22.
Aug 13 07:14:06.229032 kubelet[2667]: E0813 07:14:06.228591 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:14:10.217344 systemd[1]: Started sshd@22-64.23.236.148:22-139.178.89.65:37506.service - OpenSSH per-connection server daemon (139.178.89.65:37506). Aug 13 07:14:10.263842 sshd[4263]: Accepted publickey for core from 139.178.89.65 port 37506 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:14:10.266464 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:14:10.272179 systemd-logind[1571]: New session 23 of user core. Aug 13 07:14:10.275325 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:14:10.408748 sshd[4263]: pam_unix(sshd:session): session closed for user core Aug 13 07:14:10.418313 systemd[1]: Started sshd@23-64.23.236.148:22-139.178.89.65:37520.service - OpenSSH per-connection server daemon (139.178.89.65:37520). Aug 13 07:14:10.418803 systemd[1]: sshd@22-64.23.236.148:22-139.178.89.65:37506.service: Deactivated successfully. Aug 13 07:14:10.429420 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:14:10.432385 systemd-logind[1571]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:14:10.434291 systemd-logind[1571]: Removed session 23. Aug 13 07:14:10.463078 sshd[4273]: Accepted publickey for core from 139.178.89.65 port 37520 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:14:10.465185 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:14:10.471187 systemd-logind[1571]: New session 24 of user core. Aug 13 07:14:10.479373 systemd[1]: Started session-24.scope - Session 24 of User core. 
Aug 13 07:14:12.091715 systemd[1]: run-containerd-runc-k8s.io-6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384-runc.jZ8xdD.mount: Deactivated successfully.
Aug 13 07:14:12.112856 containerd[1590]: time="2025-08-13T07:14:12.112793992Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 07:14:12.147330 containerd[1590]: time="2025-08-13T07:14:12.147274168Z" level=info msg="StopContainer for \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\" with timeout 30 (s)"
Aug 13 07:14:12.147585 containerd[1590]: time="2025-08-13T07:14:12.147527897Z" level=info msg="StopContainer for \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\" with timeout 2 (s)"
Aug 13 07:14:12.148945 containerd[1590]: time="2025-08-13T07:14:12.148896237Z" level=info msg="Stop container \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\" with signal terminated"
Aug 13 07:14:12.149149 containerd[1590]: time="2025-08-13T07:14:12.148896880Z" level=info msg="Stop container \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\" with signal terminated"
Aug 13 07:14:12.160145 systemd-networkd[1220]: lxc_health: Link DOWN
Aug 13 07:14:12.160153 systemd-networkd[1220]: lxc_health: Lost carrier
Aug 13 07:14:12.209611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3-rootfs.mount: Deactivated successfully.
Aug 13 07:14:12.216685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384-rootfs.mount: Deactivated successfully.
Aug 13 07:14:12.221591 containerd[1590]: time="2025-08-13T07:14:12.221482615Z" level=info msg="shim disconnected" id=6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384 namespace=k8s.io
Aug 13 07:14:12.221591 containerd[1590]: time="2025-08-13T07:14:12.221557949Z" level=warning msg="cleaning up after shim disconnected" id=6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384 namespace=k8s.io
Aug 13 07:14:12.221591 containerd[1590]: time="2025-08-13T07:14:12.221566591Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:14:12.223412 containerd[1590]: time="2025-08-13T07:14:12.223284656Z" level=info msg="shim disconnected" id=2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3 namespace=k8s.io
Aug 13 07:14:12.223604 containerd[1590]: time="2025-08-13T07:14:12.223484367Z" level=warning msg="cleaning up after shim disconnected" id=2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3 namespace=k8s.io
Aug 13 07:14:12.223604 containerd[1590]: time="2025-08-13T07:14:12.223501983Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:14:12.248162 containerd[1590]: time="2025-08-13T07:14:12.248114603Z" level=info msg="StopContainer for \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\" returns successfully"
Aug 13 07:14:12.249185 containerd[1590]: time="2025-08-13T07:14:12.249148071Z" level=info msg="StopPodSandbox for \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\""
Aug 13 07:14:12.249295 containerd[1590]: time="2025-08-13T07:14:12.249198326Z" level=info msg="Container to stop \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:14:12.249295 containerd[1590]: time="2025-08-13T07:14:12.249211158Z" level=info msg="Container to stop \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:14:12.249295 containerd[1590]: time="2025-08-13T07:14:12.249223320Z" level=info msg="Container to stop \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:14:12.249295 containerd[1590]: time="2025-08-13T07:14:12.249232958Z" level=info msg="Container to stop \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:14:12.249295 containerd[1590]: time="2025-08-13T07:14:12.249242291Z" level=info msg="Container to stop \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:14:12.253015 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db-shm.mount: Deactivated successfully.
Aug 13 07:14:12.259307 containerd[1590]: time="2025-08-13T07:14:12.258223761Z" level=info msg="StopContainer for \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\" returns successfully"
Aug 13 07:14:12.259307 containerd[1590]: time="2025-08-13T07:14:12.258823855Z" level=info msg="StopPodSandbox for \"75bd86b2a2386992fc9adb1023145d8d5ec0b78761be846d32fc54eda35718c5\""
Aug 13 07:14:12.259307 containerd[1590]: time="2025-08-13T07:14:12.258870129Z" level=info msg="Container to stop \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 07:14:12.311068 containerd[1590]: time="2025-08-13T07:14:12.310763511Z" level=info msg="shim disconnected" id=75bd86b2a2386992fc9adb1023145d8d5ec0b78761be846d32fc54eda35718c5 namespace=k8s.io
Aug 13 07:14:12.311068 containerd[1590]: time="2025-08-13T07:14:12.310849100Z" level=warning msg="cleaning up after shim disconnected" id=75bd86b2a2386992fc9adb1023145d8d5ec0b78761be846d32fc54eda35718c5 namespace=k8s.io
Aug 13 07:14:12.311068 containerd[1590]: time="2025-08-13T07:14:12.310858446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:14:12.311999 containerd[1590]: time="2025-08-13T07:14:12.311525732Z" level=info msg="shim disconnected" id=811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db namespace=k8s.io
Aug 13 07:14:12.311999 containerd[1590]: time="2025-08-13T07:14:12.311569577Z" level=warning msg="cleaning up after shim disconnected" id=811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db namespace=k8s.io
Aug 13 07:14:12.311999 containerd[1590]: time="2025-08-13T07:14:12.311577793Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:14:12.333496 containerd[1590]: time="2025-08-13T07:14:12.333443730Z" level=info msg="TearDown network for sandbox \"75bd86b2a2386992fc9adb1023145d8d5ec0b78761be846d32fc54eda35718c5\" successfully"
Aug 13 07:14:12.333677 containerd[1590]: time="2025-08-13T07:14:12.333663107Z" level=info msg="StopPodSandbox for \"75bd86b2a2386992fc9adb1023145d8d5ec0b78761be846d32fc54eda35718c5\" returns successfully"
Aug 13 07:14:12.340033 containerd[1590]: time="2025-08-13T07:14:12.339986678Z" level=info msg="TearDown network for sandbox \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" successfully"
Aug 13 07:14:12.340212 containerd[1590]: time="2025-08-13T07:14:12.340197311Z" level=info msg="StopPodSandbox for \"811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db\" returns successfully"
Aug 13 07:14:12.367693 kubelet[2667]: I0813 07:14:12.367532 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-xtables-lock\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.367693 kubelet[2667]: I0813 07:14:12.367580 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-run\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.368482 kubelet[2667]: I0813 07:14:12.367790 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:14:12.368482 kubelet[2667]: I0813 07:14:12.368032 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6c4aa7a-c38a-4eea-895b-54b8ff720f39-cilium-config-path\") pod \"b6c4aa7a-c38a-4eea-895b-54b8ff720f39\" (UID: \"b6c4aa7a-c38a-4eea-895b-54b8ff720f39\") "
Aug 13 07:14:12.368482 kubelet[2667]: I0813 07:14:12.368062 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-bpf-maps\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.368482 kubelet[2667]: I0813 07:14:12.368087 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-hubble-tls\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.368482 kubelet[2667]: I0813 07:14:12.368106 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-hostproc\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.368482 kubelet[2667]: I0813 07:14:12.368121 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-lib-modules\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.369086 kubelet[2667]: I0813 07:14:12.368139 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq59z\" (UniqueName: \"kubernetes.io/projected/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-kube-api-access-tq59z\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.369086 kubelet[2667]: I0813 07:14:12.368153 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-host-proc-sys-kernel\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.369086 kubelet[2667]: I0813 07:14:12.368170 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-clustermesh-secrets\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.369086 kubelet[2667]: I0813 07:14:12.368185 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-host-proc-sys-net\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.369086 kubelet[2667]: I0813 07:14:12.368200 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-cgroup\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.369086 kubelet[2667]: I0813 07:14:12.368215 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-etc-cni-netd\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.370163 kubelet[2667]: I0813 07:14:12.368230 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cni-path\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.370163 kubelet[2667]: I0813 07:14:12.368247 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84bs9\" (UniqueName: \"kubernetes.io/projected/b6c4aa7a-c38a-4eea-895b-54b8ff720f39-kube-api-access-84bs9\") pod \"b6c4aa7a-c38a-4eea-895b-54b8ff720f39\" (UID: \"b6c4aa7a-c38a-4eea-895b-54b8ff720f39\") "
Aug 13 07:14:12.370163 kubelet[2667]: I0813 07:14:12.368265 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-config-path\") pod \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\" (UID: \"a66e4b1d-5ec0-4d2f-ba97-d9185807fad7\") "
Aug 13 07:14:12.373645 kubelet[2667]: I0813 07:14:12.373597 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:14:12.376100 kubelet[2667]: I0813 07:14:12.375994 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:14:12.378408 kubelet[2667]: I0813 07:14:12.378019 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:14:12.378761 kubelet[2667]: I0813 07:14:12.378731 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-hostproc" (OuterVolumeSpecName: "hostproc") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:14:12.379023 kubelet[2667]: I0813 07:14:12.379008 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:14:12.380148 kubelet[2667]: I0813 07:14:12.380120 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:14:12.380225 kubelet[2667]: I0813 07:14:12.380164 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:14:12.380225 kubelet[2667]: I0813 07:14:12.380180 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:14:12.380225 kubelet[2667]: I0813 07:14:12.380196 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cni-path" (OuterVolumeSpecName: "cni-path") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 07:14:12.381214 kubelet[2667]: I0813 07:14:12.380882 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-kube-api-access-tq59z" (OuterVolumeSpecName: "kube-api-access-tq59z") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "kube-api-access-tq59z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 07:14:12.383589 kubelet[2667]: I0813 07:14:12.383553 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 07:14:12.384924 kubelet[2667]: I0813 07:14:12.384886 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 07:14:12.386556 kubelet[2667]: I0813 07:14:12.385186 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c4aa7a-c38a-4eea-895b-54b8ff720f39-kube-api-access-84bs9" (OuterVolumeSpecName: "kube-api-access-84bs9") pod "b6c4aa7a-c38a-4eea-895b-54b8ff720f39" (UID: "b6c4aa7a-c38a-4eea-895b-54b8ff720f39"). InnerVolumeSpecName "kube-api-access-84bs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 07:14:12.387269 kubelet[2667]: I0813 07:14:12.387206 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" (UID: "a66e4b1d-5ec0-4d2f-ba97-d9185807fad7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 07:14:12.387449 kubelet[2667]: I0813 07:14:12.387418 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c4aa7a-c38a-4eea-895b-54b8ff720f39-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6c4aa7a-c38a-4eea-895b-54b8ff720f39" (UID: "b6c4aa7a-c38a-4eea-895b-54b8ff720f39"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 07:14:12.469163 kubelet[2667]: I0813 07:14:12.468773 2667 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-hostproc\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469163 kubelet[2667]: I0813 07:14:12.468812 2667 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-lib-modules\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469163 kubelet[2667]: I0813 07:14:12.468828 2667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq59z\" (UniqueName: \"kubernetes.io/projected/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-kube-api-access-tq59z\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469163 kubelet[2667]: I0813 07:14:12.468837 2667 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-host-proc-sys-kernel\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469163 kubelet[2667]: I0813 07:14:12.468848 2667 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-clustermesh-secrets\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469163 kubelet[2667]: I0813 07:14:12.468856 2667 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-host-proc-sys-net\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469163 kubelet[2667]: I0813 07:14:12.468867 2667 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-cgroup\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469163 kubelet[2667]: I0813 07:14:12.468876 2667 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-etc-cni-netd\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469660 kubelet[2667]: I0813 07:14:12.468885 2667 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cni-path\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469660 kubelet[2667]: I0813 07:14:12.468893 2667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84bs9\" (UniqueName: \"kubernetes.io/projected/b6c4aa7a-c38a-4eea-895b-54b8ff720f39-kube-api-access-84bs9\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469660 kubelet[2667]: I0813 07:14:12.468903 2667 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-config-path\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469660 kubelet[2667]: I0813 07:14:12.468914 2667 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-xtables-lock\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469660 kubelet[2667]: I0813 07:14:12.468950 2667 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-cilium-run\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469660 kubelet[2667]: I0813 07:14:12.468967 2667 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6c4aa7a-c38a-4eea-895b-54b8ff720f39-cilium-config-path\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469660 kubelet[2667]: I0813 07:14:12.468979 2667 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-bpf-maps\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.469660 kubelet[2667]: I0813 07:14:12.468991 2667 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7-hubble-tls\") on node \"ci-4081.3.5-f-9f59ec6646\" DevicePath \"\""
Aug 13 07:14:12.574948 kubelet[2667]: I0813 07:14:12.574378 2667 scope.go:117] "RemoveContainer" containerID="2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3"
Aug 13 07:14:12.589055 containerd[1590]: time="2025-08-13T07:14:12.588311205Z" level=info msg="RemoveContainer for \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\""
Aug 13 07:14:12.594242 containerd[1590]: time="2025-08-13T07:14:12.594192195Z" level=info msg="RemoveContainer for \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\" returns successfully"
Aug 13 07:14:12.608054 kubelet[2667]: I0813 07:14:12.607983 2667 scope.go:117] "RemoveContainer" containerID="2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3"
Aug 13 07:14:12.625546 containerd[1590]: time="2025-08-13T07:14:12.610160804Z" level=error msg="ContainerStatus for \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\": not found"
Aug 13 07:14:12.625706 kubelet[2667]: E0813 07:14:12.625345 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\": not found" containerID="2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3"
Aug 13 07:14:12.639884 kubelet[2667]: I0813 07:14:12.625405 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3"} err="failed to get container status \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d56c72a17868ffa4d5117b065b883be9ed47b835e2baedf707ee5f0fe306ad3\": not found"
Aug 13 07:14:12.639884 kubelet[2667]: I0813 07:14:12.639765 2667 scope.go:117] "RemoveContainer" containerID="6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384"
Aug 13 07:14:12.642272 containerd[1590]: time="2025-08-13T07:14:12.642208467Z" level=info msg="RemoveContainer for \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\""
Aug 13 07:14:12.650159 containerd[1590]: time="2025-08-13T07:14:12.649877816Z" level=info msg="RemoveContainer for \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\" returns successfully"
Aug 13 07:14:12.651031 kubelet[2667]: I0813 07:14:12.650905 2667 scope.go:117] "RemoveContainer" containerID="b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6"
Aug 13 07:14:12.653379 containerd[1590]: time="2025-08-13T07:14:12.653035970Z" level=info msg="RemoveContainer for \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\""
Aug 13 07:14:12.658057 containerd[1590]: time="2025-08-13T07:14:12.658018423Z" level=info msg="RemoveContainer for \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\" returns successfully"
Aug 13 07:14:12.658557 kubelet[2667]: I0813 07:14:12.658428 2667 scope.go:117] "RemoveContainer" containerID="0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae"
Aug 13 07:14:12.659799 containerd[1590]: time="2025-08-13T07:14:12.659771923Z" level=info msg="RemoveContainer for \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\""
Aug 13 07:14:12.662375 containerd[1590]: time="2025-08-13T07:14:12.662335752Z" level=info msg="RemoveContainer for \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\" returns successfully"
Aug 13 07:14:12.662625 kubelet[2667]: I0813 07:14:12.662601 2667 scope.go:117] "RemoveContainer" containerID="e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39"
Aug 13 07:14:12.664024 containerd[1590]: time="2025-08-13T07:14:12.663905852Z" level=info msg="RemoveContainer for \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\""
Aug 13 07:14:12.666182 containerd[1590]: time="2025-08-13T07:14:12.666100353Z" level=info msg="RemoveContainer for \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\" returns successfully"
Aug 13 07:14:12.666628 kubelet[2667]: I0813 07:14:12.666466 2667 scope.go:117] "RemoveContainer" containerID="4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56"
Aug 13 07:14:12.668462 containerd[1590]: time="2025-08-13T07:14:12.668137876Z" level=info msg="RemoveContainer for \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\""
Aug 13 07:14:12.675993 containerd[1590]: time="2025-08-13T07:14:12.675856167Z" level=info msg="RemoveContainer for \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\" returns successfully"
Aug 13 07:14:12.676491 kubelet[2667]: I0813 07:14:12.676435 2667 scope.go:117] "RemoveContainer" containerID="6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384"
Aug 13 07:14:12.676824 containerd[1590]: time="2025-08-13T07:14:12.676691132Z" level=error msg="ContainerStatus for \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\": not found"
Aug 13 07:14:12.676887 kubelet[2667]: E0813 07:14:12.676814 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\": not found" containerID="6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384"
Aug 13 07:14:12.676887 kubelet[2667]: I0813 07:14:12.676860 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384"} err="failed to get container status \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\": rpc error: code = NotFound desc = an error occurred when try to find container \"6aa42342e88857e5d4ebde77339d1d376e9109abf929d959948322aa68440384\": not found"
Aug 13 07:14:12.676974 kubelet[2667]: I0813 07:14:12.676887 2667 scope.go:117] "RemoveContainer" containerID="b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6"
Aug 13 07:14:12.677181 containerd[1590]: time="2025-08-13T07:14:12.677102747Z" level=error msg="ContainerStatus for \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\": not found"
Aug 13 07:14:12.677234 kubelet[2667]: E0813 07:14:12.677207 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\": not found" containerID="b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6"
Aug 13 07:14:12.677275 kubelet[2667]: I0813 07:14:12.677229 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6"} err="failed to get container status \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5b60979165260b269ea77c21cb49f133b6f515c4286f7a3e4329a0dc83dbec6\": not found"
Aug 13 07:14:12.677275 kubelet[2667]: I0813 07:14:12.677242 2667 scope.go:117] "RemoveContainer" containerID="0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae"
Aug 13 07:14:12.677560 containerd[1590]: time="2025-08-13T07:14:12.677497972Z" level=error msg="ContainerStatus for \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\": not found"
Aug 13 07:14:12.677888 kubelet[2667]: E0813 07:14:12.677864 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\": not found" containerID="0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae"
Aug 13 07:14:12.677983 kubelet[2667]: I0813 07:14:12.677895 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae"} err="failed to get container status \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d03151fa2d6258dc5dd45f6409459ba5c3c37a3f853e8d906220f9957dd4bae\": not found"
Aug 13 07:14:12.677983 kubelet[2667]: I0813 07:14:12.677913 2667 scope.go:117] "RemoveContainer" containerID="e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39"
Aug 13 07:14:12.678225 containerd[1590]: time="2025-08-13T07:14:12.678145843Z" level=error msg="ContainerStatus for \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\": not found"
Aug 13 07:14:12.678435 kubelet[2667]: E0813 07:14:12.678401 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\": not found" containerID="e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39"
Aug 13 07:14:12.678495 kubelet[2667]: I0813 07:14:12.678438 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39"} err="failed to get container status \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\": rpc error: code = NotFound desc = an error occurred
when try to find container \"e6aedde60909403530b9854f6ef5d1e8a846f73ea5b2a17ff38aef5c60b77f39\": not found" Aug 13 07:14:12.678495 kubelet[2667]: I0813 07:14:12.678455 2667 scope.go:117] "RemoveContainer" containerID="4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56" Aug 13 07:14:12.678858 containerd[1590]: time="2025-08-13T07:14:12.678778181Z" level=error msg="ContainerStatus for \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\": not found" Aug 13 07:14:12.679143 kubelet[2667]: E0813 07:14:12.679056 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\": not found" containerID="4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56" Aug 13 07:14:12.679143 kubelet[2667]: I0813 07:14:12.679099 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56"} err="failed to get container status \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\": rpc error: code = NotFound desc = an error occurred when try to find container \"4bf8734cfe140e1aba1d458fae6d74e103116ad4492d213a90208fb13f8dcb56\": not found" Aug 13 07:14:13.085305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75bd86b2a2386992fc9adb1023145d8d5ec0b78761be846d32fc54eda35718c5-rootfs.mount: Deactivated successfully. Aug 13 07:14:13.085481 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75bd86b2a2386992fc9adb1023145d8d5ec0b78761be846d32fc54eda35718c5-shm.mount: Deactivated successfully. 
Aug 13 07:14:13.085586 systemd[1]: var-lib-kubelet-pods-b6c4aa7a\x2dc38a\x2d4eea\x2d895b\x2d54b8ff720f39-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d84bs9.mount: Deactivated successfully.
Aug 13 07:14:13.085690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-811d6e10c14aed187069db47970ea3317031354fb84e4d0c70f2a9739ae147db-rootfs.mount: Deactivated successfully.
Aug 13 07:14:13.085971 systemd[1]: var-lib-kubelet-pods-a66e4b1d\x2d5ec0\x2d4d2f\x2dba97\x2dd9185807fad7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtq59z.mount: Deactivated successfully.
Aug 13 07:14:13.086092 systemd[1]: var-lib-kubelet-pods-a66e4b1d\x2d5ec0\x2d4d2f\x2dba97\x2dd9185807fad7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 07:14:13.086184 systemd[1]: var-lib-kubelet-pods-a66e4b1d\x2d5ec0\x2d4d2f\x2dba97\x2dd9185807fad7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 07:14:13.228337 kubelet[2667]: I0813 07:14:13.228289 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" path="/var/lib/kubelet/pods/a66e4b1d-5ec0-4d2f-ba97-d9185807fad7/volumes"
Aug 13 07:14:13.229085 kubelet[2667]: I0813 07:14:13.229035 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c4aa7a-c38a-4eea-895b-54b8ff720f39" path="/var/lib/kubelet/pods/b6c4aa7a-c38a-4eea-895b-54b8ff720f39/volumes"
Aug 13 07:14:14.023259 sshd[4273]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:14.050374 systemd[1]: Started sshd@24-64.23.236.148:22-139.178.89.65:37532.service - OpenSSH per-connection server daemon (139.178.89.65:37532).
Aug 13 07:14:14.051143 systemd[1]: sshd@23-64.23.236.148:22-139.178.89.65:37520.service: Deactivated successfully.
Aug 13 07:14:14.061594 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 07:14:14.064298 systemd-logind[1571]: Session 24 logged out. Waiting for processes to exit.
Aug 13 07:14:14.068604 systemd-logind[1571]: Removed session 24.
Aug 13 07:14:14.094978 sshd[4435]: Accepted publickey for core from 139.178.89.65 port 37532 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:14:14.096358 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:14.102285 systemd-logind[1571]: New session 25 of user core.
Aug 13 07:14:14.107331 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 07:14:14.228062 kubelet[2667]: E0813 07:14:14.227994 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:14.674868 sshd[4435]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:14.686551 systemd[1]: Started sshd@25-64.23.236.148:22-139.178.89.65:37542.service - OpenSSH per-connection server daemon (139.178.89.65:37542).
Aug 13 07:14:14.687138 systemd[1]: sshd@24-64.23.236.148:22-139.178.89.65:37532.service: Deactivated successfully.
Aug 13 07:14:14.705224 systemd-logind[1571]: Session 25 logged out. Waiting for processes to exit.
Aug 13 07:14:14.707387 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 07:14:14.715486 systemd-logind[1571]: Removed session 25.
Aug 13 07:14:14.722960 kubelet[2667]: E0813 07:14:14.720523 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" containerName="mount-cgroup"
Aug 13 07:14:14.722960 kubelet[2667]: E0813 07:14:14.720567 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" containerName="apply-sysctl-overwrites"
Aug 13 07:14:14.722960 kubelet[2667]: E0813 07:14:14.720576 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" containerName="cilium-agent"
Aug 13 07:14:14.722960 kubelet[2667]: E0813 07:14:14.720585 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6c4aa7a-c38a-4eea-895b-54b8ff720f39" containerName="cilium-operator"
Aug 13 07:14:14.722960 kubelet[2667]: E0813 07:14:14.720591 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" containerName="mount-bpf-fs"
Aug 13 07:14:14.722960 kubelet[2667]: E0813 07:14:14.720598 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" containerName="clean-cilium-state"
Aug 13 07:14:14.722960 kubelet[2667]: I0813 07:14:14.722045 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c4aa7a-c38a-4eea-895b-54b8ff720f39" containerName="cilium-operator"
Aug 13 07:14:14.722960 kubelet[2667]: I0813 07:14:14.722081 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="a66e4b1d-5ec0-4d2f-ba97-d9185807fad7" containerName="cilium-agent"
Aug 13 07:14:14.791313 kubelet[2667]: I0813 07:14:14.791274 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-hubble-tls\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.791313 kubelet[2667]: I0813 07:14:14.791319 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2h69\" (UniqueName: \"kubernetes.io/projected/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-kube-api-access-k2h69\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.791526 kubelet[2667]: I0813 07:14:14.791432 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-bpf-maps\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.791526 kubelet[2667]: I0813 07:14:14.791453 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-xtables-lock\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.791526 kubelet[2667]: I0813 07:14:14.791469 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-host-proc-sys-kernel\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.791605 kubelet[2667]: I0813 07:14:14.791570 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-hostproc\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.791605 kubelet[2667]: I0813 07:14:14.791588 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-etc-cni-netd\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.791660 kubelet[2667]: I0813 07:14:14.791609 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-clustermesh-secrets\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.791660 kubelet[2667]: I0813 07:14:14.791628 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-host-proc-sys-net\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.793954 kubelet[2667]: I0813 07:14:14.792666 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-cilium-run\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.793954 kubelet[2667]: I0813 07:14:14.792704 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-cilium-ipsec-secrets\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.793954 kubelet[2667]: I0813 07:14:14.792725 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-cni-path\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.793954 kubelet[2667]: I0813 07:14:14.792756 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-lib-modules\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.793954 kubelet[2667]: I0813 07:14:14.792773 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-cilium-cgroup\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.793954 kubelet[2667]: I0813 07:14:14.792789 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c6a7b57-a3c9-41f8-b582-abfe52a373b4-cilium-config-path\") pod \"cilium-6jnhm\" (UID: \"3c6a7b57-a3c9-41f8-b582-abfe52a373b4\") " pod="kube-system/cilium-6jnhm"
Aug 13 07:14:14.802989 sshd[4448]: Accepted publickey for core from 139.178.89.65 port 37542 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:14:14.809343 sshd[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:14.819290 systemd-logind[1571]: New session 26 of user core.
Aug 13 07:14:14.823426 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 07:14:14.884558 sshd[4448]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:14.893372 systemd[1]: Started sshd@26-64.23.236.148:22-139.178.89.65:37558.service - OpenSSH per-connection server daemon (139.178.89.65:37558).
Aug 13 07:14:14.895162 systemd[1]: sshd@25-64.23.236.148:22-139.178.89.65:37542.service: Deactivated successfully.
Aug 13 07:14:14.898856 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 07:14:14.922499 systemd-logind[1571]: Session 26 logged out. Waiting for processes to exit.
Aug 13 07:14:14.941410 systemd-logind[1571]: Removed session 26.
Aug 13 07:14:14.969915 sshd[4457]: Accepted publickey for core from 139.178.89.65 port 37558 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:14:14.974484 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:14.983780 systemd-logind[1571]: New session 27 of user core.
Aug 13 07:14:14.996466 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 07:14:15.045689 kubelet[2667]: E0813 07:14:15.045643 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:15.047461 containerd[1590]: time="2025-08-13T07:14:15.047298125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6jnhm,Uid:3c6a7b57-a3c9-41f8-b582-abfe52a373b4,Namespace:kube-system,Attempt:0,}"
Aug 13 07:14:15.085298 containerd[1590]: time="2025-08-13T07:14:15.085020651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:14:15.085298 containerd[1590]: time="2025-08-13T07:14:15.085124724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:14:15.085298 containerd[1590]: time="2025-08-13T07:14:15.085146344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:14:15.085683 containerd[1590]: time="2025-08-13T07:14:15.085271642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:14:15.145637 containerd[1590]: time="2025-08-13T07:14:15.145564868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6jnhm,Uid:3c6a7b57-a3c9-41f8-b582-abfe52a373b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\""
Aug 13 07:14:15.146856 kubelet[2667]: E0813 07:14:15.146510 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:15.151573 containerd[1590]: time="2025-08-13T07:14:15.151522880Z" level=info msg="CreateContainer within sandbox \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 07:14:15.162380 containerd[1590]: time="2025-08-13T07:14:15.162271208Z" level=info msg="CreateContainer within sandbox \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6a3136052b73c8acc670aa7825212ba390112f7e1104af62df3e81c168d95865\""
Aug 13 07:14:15.163895 containerd[1590]: time="2025-08-13T07:14:15.163070749Z" level=info msg="StartContainer for \"6a3136052b73c8acc670aa7825212ba390112f7e1104af62df3e81c168d95865\""
Aug 13 07:14:15.223257 containerd[1590]: time="2025-08-13T07:14:15.223212949Z" level=info msg="StartContainer for \"6a3136052b73c8acc670aa7825212ba390112f7e1104af62df3e81c168d95865\" returns successfully"
Aug 13 07:14:15.226296 kubelet[2667]: E0813 07:14:15.226201 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:15.282799 containerd[1590]: time="2025-08-13T07:14:15.282505261Z" level=info msg="shim disconnected" id=6a3136052b73c8acc670aa7825212ba390112f7e1104af62df3e81c168d95865 namespace=k8s.io
Aug 13 07:14:15.282799 containerd[1590]: time="2025-08-13T07:14:15.282561277Z" level=warning msg="cleaning up after shim disconnected" id=6a3136052b73c8acc670aa7825212ba390112f7e1104af62df3e81c168d95865 namespace=k8s.io
Aug 13 07:14:15.282799 containerd[1590]: time="2025-08-13T07:14:15.282570574Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:14:15.604079 kubelet[2667]: E0813 07:14:15.603246 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:15.608323 containerd[1590]: time="2025-08-13T07:14:15.607180599Z" level=info msg="CreateContainer within sandbox \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 07:14:15.620768 containerd[1590]: time="2025-08-13T07:14:15.620710623Z" level=info msg="CreateContainer within sandbox \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e1602541de339816a520c6350ce5c5e5d7760695bd1ae2182dc8f4508a8e466\""
Aug 13 07:14:15.621781 containerd[1590]: time="2025-08-13T07:14:15.621687136Z" level=info msg="StartContainer for \"1e1602541de339816a520c6350ce5c5e5d7760695bd1ae2182dc8f4508a8e466\""
Aug 13 07:14:15.695585 containerd[1590]: time="2025-08-13T07:14:15.694802925Z" level=info msg="StartContainer for \"1e1602541de339816a520c6350ce5c5e5d7760695bd1ae2182dc8f4508a8e466\" returns successfully"
Aug 13 07:14:15.729233 containerd[1590]: time="2025-08-13T07:14:15.729158267Z" level=info msg="shim disconnected" id=1e1602541de339816a520c6350ce5c5e5d7760695bd1ae2182dc8f4508a8e466 namespace=k8s.io
Aug 13 07:14:15.729233 containerd[1590]: time="2025-08-13T07:14:15.729234353Z" level=warning msg="cleaning up after shim disconnected" id=1e1602541de339816a520c6350ce5c5e5d7760695bd1ae2182dc8f4508a8e466 namespace=k8s.io
Aug 13 07:14:15.729233 containerd[1590]: time="2025-08-13T07:14:15.729246503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:14:16.348312 kubelet[2667]: E0813 07:14:16.348237 2667 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 07:14:16.608430 kubelet[2667]: E0813 07:14:16.606439 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:16.610972 containerd[1590]: time="2025-08-13T07:14:16.610831075Z" level=info msg="CreateContainer within sandbox \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 07:14:16.642642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037699557.mount: Deactivated successfully.
Aug 13 07:14:16.649925 containerd[1590]: time="2025-08-13T07:14:16.649879464Z" level=info msg="CreateContainer within sandbox \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"496eaef294f5e142635f9f3ba4b98bde41d7eb9894eea77f0e90587bc25e9dfc\""
Aug 13 07:14:16.651303 containerd[1590]: time="2025-08-13T07:14:16.651026500Z" level=info msg="StartContainer for \"496eaef294f5e142635f9f3ba4b98bde41d7eb9894eea77f0e90587bc25e9dfc\""
Aug 13 07:14:16.727291 containerd[1590]: time="2025-08-13T07:14:16.727213844Z" level=info msg="StartContainer for \"496eaef294f5e142635f9f3ba4b98bde41d7eb9894eea77f0e90587bc25e9dfc\" returns successfully"
Aug 13 07:14:16.754669 containerd[1590]: time="2025-08-13T07:14:16.754565885Z" level=info msg="shim disconnected" id=496eaef294f5e142635f9f3ba4b98bde41d7eb9894eea77f0e90587bc25e9dfc namespace=k8s.io
Aug 13 07:14:16.754886 containerd[1590]: time="2025-08-13T07:14:16.754712146Z" level=warning msg="cleaning up after shim disconnected" id=496eaef294f5e142635f9f3ba4b98bde41d7eb9894eea77f0e90587bc25e9dfc namespace=k8s.io
Aug 13 07:14:16.754886 containerd[1590]: time="2025-08-13T07:14:16.754723753Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:14:16.913832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-496eaef294f5e142635f9f3ba4b98bde41d7eb9894eea77f0e90587bc25e9dfc-rootfs.mount: Deactivated successfully.
Aug 13 07:14:17.612535 kubelet[2667]: E0813 07:14:17.611830 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:17.615875 containerd[1590]: time="2025-08-13T07:14:17.615834034Z" level=info msg="CreateContainer within sandbox \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 07:14:17.634558 containerd[1590]: time="2025-08-13T07:14:17.634079770Z" level=info msg="CreateContainer within sandbox \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4c160082cd710b5f854f70e2a92377a8a2dd2127e9e3a8a55f67a502679c65b5\""
Aug 13 07:14:17.635271 containerd[1590]: time="2025-08-13T07:14:17.635150659Z" level=info msg="StartContainer for \"4c160082cd710b5f854f70e2a92377a8a2dd2127e9e3a8a55f67a502679c65b5\""
Aug 13 07:14:17.710157 containerd[1590]: time="2025-08-13T07:14:17.709618415Z" level=info msg="StartContainer for \"4c160082cd710b5f854f70e2a92377a8a2dd2127e9e3a8a55f67a502679c65b5\" returns successfully"
Aug 13 07:14:17.738597 containerd[1590]: time="2025-08-13T07:14:17.738506832Z" level=info msg="shim disconnected" id=4c160082cd710b5f854f70e2a92377a8a2dd2127e9e3a8a55f67a502679c65b5 namespace=k8s.io
Aug 13 07:14:17.739127 containerd[1590]: time="2025-08-13T07:14:17.738868073Z" level=warning msg="cleaning up after shim disconnected" id=4c160082cd710b5f854f70e2a92377a8a2dd2127e9e3a8a55f67a502679c65b5 namespace=k8s.io
Aug 13 07:14:17.739127 containerd[1590]: time="2025-08-13T07:14:17.738885334Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:14:17.913484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c160082cd710b5f854f70e2a92377a8a2dd2127e9e3a8a55f67a502679c65b5-rootfs.mount: Deactivated successfully.
Aug 13 07:14:18.616593 kubelet[2667]: E0813 07:14:18.616541 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:18.619067 containerd[1590]: time="2025-08-13T07:14:18.618992794Z" level=info msg="CreateContainer within sandbox \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 07:14:18.632972 containerd[1590]: time="2025-08-13T07:14:18.632918314Z" level=info msg="CreateContainer within sandbox \"2cf8faeeafba42905e3864b0960884ca198402d0a9f984c702d3435d745319f3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aeb4fdc3751a720e92548c412b5ec274fae3e0b064c44e7819805bab45f85f78\""
Aug 13 07:14:18.633721 containerd[1590]: time="2025-08-13T07:14:18.633660876Z" level=info msg="StartContainer for \"aeb4fdc3751a720e92548c412b5ec274fae3e0b064c44e7819805bab45f85f78\""
Aug 13 07:14:18.704699 containerd[1590]: time="2025-08-13T07:14:18.704656280Z" level=info msg="StartContainer for \"aeb4fdc3751a720e92548c412b5ec274fae3e0b064c44e7819805bab45f85f78\" returns successfully"
Aug 13 07:14:19.150259 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 07:14:19.623814 kubelet[2667]: E0813 07:14:19.623749 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:19.644876 kubelet[2667]: I0813 07:14:19.644786 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6jnhm" podStartSLOduration=5.642501275 podStartE2EDuration="5.642501275s" podCreationTimestamp="2025-08-13 07:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:14:19.642305621 +0000 UTC m=+98.549103195" watchObservedRunningTime="2025-08-13 07:14:19.642501275 +0000 UTC m=+98.549298860"
Aug 13 07:14:20.225979 kubelet[2667]: E0813 07:14:20.225733 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:21.048820 kubelet[2667]: E0813 07:14:21.047709 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:22.438529 systemd-networkd[1220]: lxc_health: Link UP
Aug 13 07:14:22.446746 systemd-networkd[1220]: lxc_health: Gained carrier
Aug 13 07:14:23.049758 kubelet[2667]: E0813 07:14:23.048610 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:23.635953 kubelet[2667]: E0813 07:14:23.635624 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Aug 13 07:14:23.803636 systemd[1]: run-containerd-runc-k8s.io-aeb4fdc3751a720e92548c412b5ec274fae3e0b064c44e7819805bab45f85f78-runc.K6hr7v.mount: Deactivated successfully.
Aug 13 07:14:23.877529 kubelet[2667]: E0813 07:14:23.877479 2667 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46624->127.0.0.1:41619: write tcp 127.0.0.1:46624->127.0.0.1:41619: write: broken pipe Aug 13 07:14:23.972178 systemd-networkd[1220]: lxc_health: Gained IPv6LL Aug 13 07:14:24.637424 kubelet[2667]: E0813 07:14:24.637144 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:14:28.172442 sshd[4457]: pam_unix(sshd:session): session closed for user core Aug 13 07:14:28.178280 systemd-logind[1571]: Session 27 logged out. Waiting for processes to exit. Aug 13 07:14:28.178680 systemd[1]: sshd@26-64.23.236.148:22-139.178.89.65:37558.service: Deactivated successfully. Aug 13 07:14:28.184859 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 07:14:28.186444 systemd-logind[1571]: Removed session 27.