Aug 13 00:45:40.943685 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025 Aug 13 00:45:40.943716 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:45:40.943726 kernel: BIOS-provided physical RAM map: Aug 13 00:45:40.943733 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 00:45:40.943739 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 00:45:40.943745 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 00:45:40.943753 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 13 00:45:40.943768 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 13 00:45:40.943778 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 00:45:40.943784 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 00:45:40.943791 kernel: NX (Execute Disable) protection: active Aug 13 00:45:40.943798 kernel: APIC: Static calls initialized Aug 13 00:45:40.943805 kernel: SMBIOS 2.8 present. Aug 13 00:45:40.943812 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 13 00:45:40.943826 kernel: DMI: Memory slots populated: 1/1 Aug 13 00:45:40.943837 kernel: Hypervisor detected: KVM Aug 13 00:45:40.943851 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 00:45:40.943862 kernel: kvm-clock: using sched offset of 4693935130 cycles Aug 13 00:45:40.943873 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 00:45:40.943884 kernel: tsc: Detected 2494.140 MHz processor Aug 13 00:45:40.943895 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:45:40.943908 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:45:40.943920 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 13 00:45:40.943936 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 00:45:40.943948 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:45:40.943960 kernel: ACPI: Early table checksum verification disabled Aug 13 00:45:40.943971 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 13 00:45:40.943983 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:40.943993 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:40.944005 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:40.944015 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 00:45:40.944027 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:40.944042 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:40.944054 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:45:40.944066 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 
BXPC 00000001) Aug 13 00:45:40.944078 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Aug 13 00:45:40.944091 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Aug 13 00:45:40.944103 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 00:45:40.944115 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 13 00:45:40.944127 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 13 00:45:40.944149 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 13 00:45:40.944161 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 13 00:45:40.944173 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 00:45:40.944185 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 00:45:40.944198 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Aug 13 00:45:40.944212 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Aug 13 00:45:40.944220 kernel: Zone ranges: Aug 13 00:45:40.944229 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:45:40.944237 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 13 00:45:40.944245 kernel: Normal empty Aug 13 00:45:40.944253 kernel: Device empty Aug 13 00:45:40.944261 kernel: Movable zone start for each node Aug 13 00:45:40.944270 kernel: Early memory node ranges Aug 13 00:45:40.944278 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 00:45:40.944286 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 13 00:45:40.944297 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 13 00:45:40.944305 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:45:40.944313 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 00:45:40.944322 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 13 00:45:40.944330 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 00:45:40.944338 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 00:45:40.944351 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 00:45:40.944359 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 00:45:40.944369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 00:45:40.944380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:45:40.944391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 00:45:40.944399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 00:45:40.948498 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:45:40.948523 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 00:45:40.948536 kernel: TSC deadline timer available Aug 13 00:45:40.948549 kernel: CPU topo: Max. logical packages: 1 Aug 13 00:45:40.948574 kernel: CPU topo: Max. logical dies: 1 Aug 13 00:45:40.948590 kernel: CPU topo: Max. dies per package: 1 Aug 13 00:45:40.948608 kernel: CPU topo: Max. threads per core: 1 Aug 13 00:45:40.948617 kernel: CPU topo: Num. cores per package: 2 Aug 13 00:45:40.948626 kernel: CPU topo: Num. 
threads per package: 2 Aug 13 00:45:40.948634 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 13 00:45:40.948643 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 00:45:40.948651 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 13 00:45:40.948660 kernel: Booting paravirtualized kernel on KVM Aug 13 00:45:40.948669 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:45:40.948678 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 00:45:40.948686 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 13 00:45:40.948698 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 13 00:45:40.948707 kernel: pcpu-alloc: [0] 0 1 Aug 13 00:45:40.948721 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 13 00:45:40.948733 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:45:40.948742 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:45:40.948750 kernel: random: crng init done Aug 13 00:45:40.948758 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:45:40.948767 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 00:45:40.948779 kernel: Fallback order for Node 0: 0 Aug 13 00:45:40.948788 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Aug 13 00:45:40.948798 kernel: Policy zone: DMA32 Aug 13 00:45:40.948810 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:45:40.948821 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:45:40.948829 kernel: Kernel/User page tables isolation: enabled Aug 13 00:45:40.948838 kernel: ftrace: allocating 40098 entries in 157 pages Aug 13 00:45:40.948846 kernel: ftrace: allocated 157 pages with 5 groups Aug 13 00:45:40.948855 kernel: Dynamic Preempt: voluntary Aug 13 00:45:40.948867 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:45:40.948877 kernel: rcu: RCU event tracing is enabled. Aug 13 00:45:40.948885 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:45:40.948894 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:45:40.948902 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:45:40.948911 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:45:40.948919 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:45:40.948927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:45:40.948936 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:45:40.948955 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:45:40.948964 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Aug 13 00:45:40.948973 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 00:45:40.948981 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 00:45:40.948989 kernel: Console: colour VGA+ 80x25 Aug 13 00:45:40.948997 kernel: printk: legacy console [tty0] enabled Aug 13 00:45:40.949006 kernel: printk: legacy console [ttyS0] enabled Aug 13 00:45:40.949014 kernel: ACPI: Core revision 20240827 Aug 13 00:45:40.949023 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 00:45:40.949043 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:45:40.949051 kernel: x2apic enabled Aug 13 00:45:40.949063 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 00:45:40.949072 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 00:45:40.949084 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 13 00:45:40.949094 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Aug 13 00:45:40.949102 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 00:45:40.949111 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 00:45:40.949120 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:45:40.949132 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 00:45:40.949140 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:45:40.949149 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 00:45:40.949158 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 00:45:40.949167 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 00:45:40.949176 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 00:45:40.949185 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:45:40.949196 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 00:45:40.949205 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:45:40.949214 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:45:40.949223 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:45:40.949232 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:45:40.949241 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 00:45:40.949250 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:45:40.949259 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:45:40.949267 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 13 00:45:40.949279 kernel: landlock: Up and running. Aug 13 00:45:40.949288 kernel: SELinux: Initializing. Aug 13 00:45:40.949296 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:45:40.949305 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:45:40.949314 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 13 00:45:40.949323 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Aug 13 00:45:40.949332 kernel: signal: max sigframe size: 1776 Aug 13 00:45:40.949341 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:45:40.949349 kernel: rcu: Max phase no-delay instances is 400. 
Aug 13 00:45:40.949365 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 13 00:45:40.949376 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 00:45:40.949385 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:45:40.949394 kernel: smpboot: x86: Booting SMP configuration: Aug 13 00:45:40.949418 kernel: .... node #0, CPUs: #1 Aug 13 00:45:40.949427 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:45:40.949436 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Aug 13 00:45:40.949445 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 125140K reserved, 0K cma-reserved) Aug 13 00:45:40.949454 kernel: devtmpfs: initialized Aug 13 00:45:40.949467 kernel: x86/mm: Memory block size: 128MB Aug 13 00:45:40.949476 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:45:40.949485 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:45:40.949494 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:45:40.949503 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:45:40.949512 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:45:40.949529 kernel: audit: type=2000 audit(1755045937.468:1): state=initialized audit_enabled=0 res=1 Aug 13 00:45:40.949539 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:45:40.949548 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:45:40.949560 kernel: cpuidle: using governor menu Aug 13 00:45:40.949569 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:45:40.949578 kernel: dca service started, version 1.12.1 Aug 13 00:45:40.949587 kernel: PCI: Using configuration type 1 for base access Aug 13 00:45:40.949596 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 00:45:40.949605 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:45:40.949614 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:45:40.949622 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:45:40.949631 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:45:40.949643 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:45:40.949652 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:45:40.949661 kernel: ACPI: Interpreter enabled Aug 13 00:45:40.949690 kernel: ACPI: PM: (supports S0 S5) Aug 13 00:45:40.949703 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:45:40.949715 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:45:40.949723 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 00:45:40.949732 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 13 00:45:40.949741 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 00:45:40.950016 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:45:40.950179 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 13 00:45:40.950327 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 13 00:45:40.950345 kernel: acpiphp: Slot [3] registered Aug 13 00:45:40.950358 kernel: acpiphp: Slot [4] registered Aug 13 00:45:40.950371 kernel: acpiphp: Slot [5] registered Aug 13 00:45:40.950383 kernel: acpiphp: Slot [6] registered Aug 13 00:45:40.950402 kernel: acpiphp: Slot [7] registered Aug 13 00:45:40.954488 kernel: acpiphp: Slot [8] registered Aug 13 00:45:40.954500 kernel: acpiphp: Slot [9] registered Aug 13 00:45:40.954510 kernel: acpiphp: Slot [10] registered Aug 13 00:45:40.954519 kernel: acpiphp: Slot [11] registered Aug 13 00:45:40.954528 kernel: acpiphp: Slot [12] registered Aug 13 00:45:40.954537 kernel: acpiphp: Slot [13] registered Aug 13 00:45:40.954546 kernel: acpiphp: Slot [14] registered Aug 13 00:45:40.954554 kernel: acpiphp: Slot [15] registered Aug 13 00:45:40.954570 kernel: acpiphp: Slot [16] registered Aug 13 00:45:40.954579 kernel: acpiphp: Slot [17] registered Aug 13 00:45:40.954589 kernel: acpiphp: Slot [18] registered Aug 13 00:45:40.954598 kernel: acpiphp: Slot [19] registered Aug 13 00:45:40.954606 kernel: acpiphp: Slot [20] registered Aug 13 00:45:40.954615 kernel: acpiphp: Slot [21] registered Aug 13 00:45:40.954624 kernel: acpiphp: Slot [22] registered Aug 13 00:45:40.954633 kernel: acpiphp: Slot [23] registered Aug 13 00:45:40.954642 kernel: acpiphp: Slot [24] registered Aug 13 00:45:40.954651 kernel: acpiphp: Slot [25] registered Aug 13 00:45:40.954663 kernel: acpiphp: Slot [26] registered Aug 13 00:45:40.954672 kernel: acpiphp: Slot [27] registered Aug 13 00:45:40.954681 kernel: acpiphp: Slot [28] registered Aug 13 00:45:40.954689 kernel: acpiphp: Slot [29] registered Aug 13 00:45:40.954698 kernel: acpiphp: Slot [30] registered Aug 13 00:45:40.954707 kernel: acpiphp: Slot [31] registered Aug 13 00:45:40.954715 kernel: PCI host bridge to bus 0000:00 Aug 13 00:45:40.954880 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 00:45:40.955011 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 00:45:40.955134 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:45:40.955230 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Aug 13 00:45:40.955316 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 13 00:45:40.955400 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 00:45:40.955729 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Aug 13 00:45:40.955874 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Aug 13 00:45:40.956016 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Aug 13 00:45:40.956112 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Aug 13 00:45:40.956243 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Aug 13 00:45:40.956380 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Aug 13 00:45:40.956592 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Aug 13 00:45:40.956688 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Aug 13 00:45:40.956810 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Aug 13 00:45:40.956917 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Aug 13 00:45:40.957038 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Aug 13 00:45:40.957131 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 13 00:45:40.957254 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 13 00:45:40.957380 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Aug 13 00:45:40.957544 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Aug 13 00:45:40.957650 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Aug 13 00:45:40.957778 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Aug 13 00:45:40.957914 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Aug 13 00:45:40.958027 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 00:45:40.958137 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 13 00:45:40.958241 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Aug 13 00:45:40.958348 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Aug 13 00:45:40.958993 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Aug 13 00:45:40.959135 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 13 00:45:40.959235 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Aug 13 00:45:40.959330 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Aug 13 00:45:40.959470 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 13 00:45:40.959585 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Aug 13 00:45:40.959686 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Aug 13 00:45:40.959780 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Aug 13 00:45:40.959873 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 13 00:45:40.959997 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Aug 13 00:45:40.960125 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Aug 13 00:45:40.960216 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Aug 13 00:45:40.960307 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Aug 13 00:45:40.960425 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Aug 13 00:45:40.960544 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Aug 13 00:45:40.960655 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Aug 13 00:45:40.960751 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Aug 13 00:45:40.960864 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Aug 13 00:45:40.960979 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Aug 13 00:45:40.961135 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 13 00:45:40.961154 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 00:45:40.961165 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 00:45:40.961179 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 00:45:40.961193 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 00:45:40.961205 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 13 00:45:40.961217 kernel: iommu: Default domain type: Translated Aug 13 00:45:40.961230 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:45:40.961242 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:45:40.961262 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:45:40.961275 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 00:45:40.961289 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 13 00:45:40.963368 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 13 00:45:40.963523 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 13 00:45:40.963629 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 00:45:40.963644 kernel: vgaarb: loaded Aug 13 00:45:40.963655 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 00:45:40.963664 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 00:45:40.963679 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 00:45:40.963689 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:45:40.963699 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:45:40.963708 kernel: pnp: PnP ACPI init Aug 13 00:45:40.963717 kernel: pnp: PnP ACPI: found 4 devices Aug 13 00:45:40.963726 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:45:40.963736 kernel: NET: Registered PF_INET protocol family Aug 13 00:45:40.963746 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:45:40.963758 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 00:45:40.963767 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:45:40.963776 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 00:45:40.963785 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 00:45:40.963795 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 00:45:40.963804 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:45:40.963813 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:45:40.963822 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:45:40.963831 kernel: NET: Registered PF_XDP protocol family Aug 13 00:45:40.963944 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 00:45:40.964049 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 00:45:40.964155 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 00:45:40.964239 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 13 00:45:40.964330 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 13 00:45:40.966534 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 13 00:45:40.966694 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 00:45:40.966711 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 13 00:45:40.966821 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 29233 usecs Aug 13 00:45:40.966834 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:45:40.966843 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 00:45:40.966853 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 13 00:45:40.966862 kernel: Initialise system trusted keyrings Aug 13 00:45:40.966872 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 00:45:40.966881 kernel: Key type asymmetric registered Aug 13 00:45:40.966890 kernel: Asymmetric key parser 'x509' registered Aug 13 00:45:40.966898 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 00:45:40.966912 kernel: io scheduler mq-deadline registered Aug 13 00:45:40.966921 kernel: io scheduler kyber registered Aug 13 00:45:40.966930 kernel: io scheduler bfq registered Aug 13 00:45:40.966940 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:45:40.966952 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 13 00:45:40.966967 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 00:45:40.966980 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 00:45:40.966993 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:45:40.967006 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:45:40.967025 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 00:45:40.967036 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 00:45:40.967049 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 00:45:40.967209 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 00:45:40.967226 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 00:45:40.967313 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 00:45:40.968485 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:45:40 UTC (1755045940) Aug 13 00:45:40.968653 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 00:45:40.968677 kernel: intel_pstate: CPU model not supported Aug 13 00:45:40.968687 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:45:40.968697 kernel: Segment Routing with IPv6 Aug 13 00:45:40.968706 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:45:40.968715 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:45:40.968725 kernel: Key type dns_resolver registered Aug 13 00:45:40.968734 kernel: IPI shorthand broadcast: enabled Aug 13 00:45:40.968760 kernel: sched_clock: Marking stable (3658004083, 124872894)->(3911365038, -128488061) Aug 13 00:45:40.968769 kernel: registered taskstats version 1 Aug 13 00:45:40.968781 kernel: Loading compiled-in X.509 certificates Aug 13 00:45:40.968790 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0' Aug 
13 00:45:40.968799 kernel: Demotion targets for Node 0: null Aug 13 00:45:40.968808 kernel: Key type .fscrypt registered Aug 13 00:45:40.968816 kernel: Key type fscrypt-provisioning registered Aug 13 00:45:40.968828 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 00:45:40.968852 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:45:40.968864 kernel: ima: No architecture policies found Aug 13 00:45:40.968876 kernel: clk: Disabling unused clocks Aug 13 00:45:40.968886 kernel: Warning: unable to open an initial console. Aug 13 00:45:40.968896 kernel: Freeing unused kernel image (initmem) memory: 54444K Aug 13 00:45:40.968905 kernel: Write protecting the kernel read-only data: 24576k Aug 13 00:45:40.968915 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 13 00:45:40.968924 kernel: Run /init as init process Aug 13 00:45:40.968933 kernel: with arguments: Aug 13 00:45:40.968943 kernel: /init Aug 13 00:45:40.968952 kernel: with environment: Aug 13 00:45:40.968963 kernel: HOME=/ Aug 13 00:45:40.968972 kernel: TERM=linux Aug 13 00:45:40.968981 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:45:40.968993 systemd[1]: Successfully made /usr/ read-only. Aug 13 00:45:40.969012 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:45:40.969025 systemd[1]: Detected virtualization kvm. Aug 13 00:45:40.969034 systemd[1]: Detected architecture x86-64. Aug 13 00:45:40.969044 systemd[1]: Running in initrd. Aug 13 00:45:40.969056 systemd[1]: No hostname configured, using default hostname. Aug 13 00:45:40.969066 systemd[1]: Hostname set to . Aug 13 00:45:40.969076 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:45:40.969085 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:45:40.969095 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:45:40.969105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:45:40.969115 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:45:40.969125 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:45:40.969137 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:45:40.969147 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:45:40.969159 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:45:40.969173 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:45:40.969183 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:45:40.969193 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:45:40.969202 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:45:40.969212 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:45:40.969222 systemd[1]: Reached target swap.target - Swaps. 
Aug 13 00:45:40.969231 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:45:40.969241 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:45:40.969251 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:45:40.969264 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:45:40.969273 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 00:45:40.969283 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:45:40.969293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:45:40.969303 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:45:40.969318 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:45:40.969332 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:45:40.969346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:45:40.969361 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:45:40.969381 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 00:45:40.969397 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:45:40.969412 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:45:40.971477 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:45:40.971497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:45:40.971511 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:45:40.971536 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:45:40.971556 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:45:40.971572 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:45:40.971627 systemd-journald[210]: Collecting audit messages is disabled. Aug 13 00:45:40.971668 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:45:40.971685 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:45:40.971702 systemd-journald[210]: Journal started Aug 13 00:45:40.971732 systemd-journald[210]: Runtime Journal (/run/log/journal/25b7f0adc3914ff4ba685ff630e923ce) is 4.9M, max 39.5M, 34.6M free. Aug 13 00:45:40.944653 systemd-modules-load[212]: Inserted module 'overlay' Aug 13 00:45:40.974491 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:45:40.982674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:45:41.014469 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:45:41.014523 kernel: Bridge firewalling registered Aug 13 00:45:40.994458 systemd-modules-load[212]: Inserted module 'br_netfilter' Aug 13 00:45:41.016780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:45:41.023682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 00:45:41.024375 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:45:41.031606 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:45:41.034634 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:45:41.037395 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 00:45:41.050338 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:45:41.064526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:45:41.069075 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:45:41.078881 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:45:41.088401 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:45:41.138760 systemd-resolved[245]: Positive Trust Anchors: Aug 13 00:45:41.138779 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:45:41.138840 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:45:41.144219 systemd-resolved[245]: Defaulting to hostname 'linux'. Aug 13 00:45:41.144762 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:45:41.147345 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:45:41.148635 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:45:41.273493 kernel: SCSI subsystem initialized Aug 13 00:45:41.287481 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:45:41.304460 kernel: iscsi: registered transport (tcp) Aug 13 00:45:41.337493 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:45:41.337654 kernel: QLogic iSCSI HBA Driver Aug 13 00:45:41.369571 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:45:41.400092 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:45:41.403811 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:45:41.478569 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:45:41.482652 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Aug 13 00:45:41.560519 kernel: raid6: avx2x4 gen() 15005 MB/s Aug 13 00:45:41.577481 kernel: raid6: avx2x2 gen() 15536 MB/s Aug 13 00:45:41.594513 kernel: raid6: avx2x1 gen() 11622 MB/s Aug 13 00:45:41.594594 kernel: raid6: using algorithm avx2x2 gen() 15536 MB/s Aug 13 00:45:41.612790 kernel: raid6: .... xor() 14838 MB/s, rmw enabled Aug 13 00:45:41.612894 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:45:41.640478 kernel: xor: automatically using best checksumming function avx Aug 13 00:45:41.881476 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:45:41.893491 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:45:41.898350 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:45:41.941799 systemd-udevd[460]: Using default interface naming scheme 'v255'. Aug 13 00:45:41.950898 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:45:41.954930 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:45:41.993522 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Aug 13 00:45:42.038695 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:45:42.041995 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:45:42.123462 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:45:42.127685 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:45:42.253457 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Aug 13 00:45:42.256656 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:45:42.260802 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 00:45:42.264469 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Aug 13 00:45:42.277502 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:45:42.294724 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:45:42.294822 kernel: GPT:9289727 != 125829119 Aug 13 00:45:42.294837 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:45:42.294849 kernel: GPT:9289727 != 125829119 Aug 13 00:45:42.295484 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:45:42.296728 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:45:42.299649 kernel: AES CTR mode by8 optimization enabled Aug 13 00:45:42.313441 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 00:45:42.315516 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Aug 13 00:45:42.318051 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Aug 13 00:45:42.346163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:45:42.347094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:45:42.350109 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:45:42.354590 kernel: libata version 3.00 loaded. Aug 13 00:45:42.353939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:45:42.361713 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Aug 13 00:45:42.366495 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 13 00:45:42.381581 kernel: scsi host1: ata_piix Aug 13 00:45:42.384436 kernel: scsi host2: ata_piix Aug 13 00:45:42.388105 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Aug 13 00:45:42.388767 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Aug 13 00:45:42.390704 kernel: ACPI: bus type USB registered Aug 13 00:45:42.390798 kernel: usbcore: registered new interface driver usbfs Aug 13 00:45:42.398433 kernel: usbcore: registered new interface driver hub Aug 13 00:45:42.398515 kernel: usbcore: registered new device driver usb Aug 13 00:45:42.479183 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 00:45:42.495592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:45:42.523711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 00:45:42.539769 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 13 00:45:42.557399 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 00:45:42.558157 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 13 00:45:42.570721 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:45:42.592374 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 13 00:45:42.592918 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 13 00:45:42.593131 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 13 00:45:42.595452 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Aug 13 00:45:42.595807 kernel: hub 1-0:1.0: USB hub found Aug 13 00:45:42.597450 kernel: hub 1-0:1.0: 2 ports detected Aug 13 00:45:42.607464 disk-uuid[612]: Primary Header is updated. Aug 13 00:45:42.607464 disk-uuid[612]: Secondary Entries is updated. Aug 13 00:45:42.607464 disk-uuid[612]: Secondary Header is updated. Aug 13 00:45:42.621473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:45:43.304997 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:45:43.307228 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:45:43.307699 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:45:43.308692 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:45:43.311323 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:45:43.345953 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:45:43.634939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:45:43.635072 disk-uuid[615]: The operation has completed successfully. Aug 13 00:45:43.696686 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:45:43.697363 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:45:43.730512 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:45:43.753306 sh[642]: Success Aug 13 00:45:43.775530 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Aug 13 00:45:43.775629 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:45:43.776892 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 00:45:43.788460 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Aug 13 00:45:43.866255 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:45:43.867684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:45:43.884498 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:45:43.901480 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 00:45:43.903465 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (654) Aug 13 00:45:43.906165 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 00:45:43.906233 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:45:43.906248 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 00:45:43.916670 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:45:43.918059 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:45:43.918680 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:45:43.919784 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:45:43.924626 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 00:45:43.954444 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (678) Aug 13 00:45:43.956956 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:45:43.957035 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:45:43.957058 kernel: BTRFS info (device vda6): using free-space-tree Aug 13 00:45:43.968483 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:45:43.969832 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:45:43.972968 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:45:44.119856 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:45:44.131515 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:45:44.197851 systemd-networkd[827]: lo: Link UP Aug 13 00:45:44.198776 ignition[720]: Ignition 2.21.0 Aug 13 00:45:44.197869 systemd-networkd[827]: lo: Gained carrier Aug 13 00:45:44.198785 ignition[720]: Stage: fetch-offline Aug 13 00:45:44.198852 ignition[720]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:44.201674 systemd-networkd[827]: Enumeration completed Aug 13 00:45:44.198867 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:45:44.202142 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:45:44.199008 ignition[720]: parsed url from cmdline: "" Aug 13 00:45:44.202345 systemd-networkd[827]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. 
Aug 13 00:45:44.199013 ignition[720]: no config URL provided Aug 13 00:45:44.202352 systemd-networkd[827]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 13 00:45:44.199019 ignition[720]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:45:44.203819 systemd-networkd[827]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:45:44.199029 ignition[720]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:45:44.203826 systemd-networkd[827]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:45:44.199036 ignition[720]: failed to fetch config: resource requires networking Aug 13 00:45:44.204634 systemd-networkd[827]: eth0: Link UP Aug 13 00:45:44.201259 ignition[720]: Ignition finished successfully Aug 13 00:45:44.204971 systemd-networkd[827]: eth1: Link UP Aug 13 00:45:44.205314 systemd-networkd[827]: eth0: Gained carrier Aug 13 00:45:44.205333 systemd-networkd[827]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 00:45:44.207596 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:45:44.209989 systemd[1]: Reached target network.target - Network. Aug 13 00:45:44.211344 systemd-networkd[827]: eth1: Gained carrier Aug 13 00:45:44.211373 systemd-networkd[827]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:45:44.213603 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 00:45:44.222586 systemd-networkd[827]: eth0: DHCPv4 address 24.144.89.98/20, gateway 24.144.80.1 acquired from 169.254.169.253 Aug 13 00:45:44.236599 systemd-networkd[827]: eth1: DHCPv4 address 10.124.0.31/20 acquired from 169.254.169.253 Aug 13 00:45:44.254914 ignition[831]: Ignition 2.21.0 Aug 13 00:45:44.254935 ignition[831]: Stage: fetch Aug 13 00:45:44.255198 ignition[831]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:44.255226 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:45:44.255376 ignition[831]: parsed url from cmdline: "" Aug 13 00:45:44.255382 ignition[831]: no config URL provided Aug 13 00:45:44.255390 ignition[831]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:45:44.255402 ignition[831]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:45:44.256776 ignition[831]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 13 00:45:44.276515 ignition[831]: GET result: OK Aug 13 00:45:44.276710 ignition[831]: parsing config with SHA512: c75f4b687c472dffc83525e8b50cfaa522c3ef46b1aea05c5cc53fb3704db37337c5e97b3a1bc2f794625cf2e846db82b39072e41aa3c1b3f1361f9573428999 Aug 13 00:45:44.287376 unknown[831]: fetched base config from "system" Aug 13 00:45:44.287394 unknown[831]: fetched base config from "system" Aug 13 00:45:44.287818 unknown[831]: fetched user config from "digitalocean" Aug 13 00:45:44.289472 ignition[831]: fetch: fetch complete Aug 13 00:45:44.289516 ignition[831]: fetch: fetch passed Aug 13 00:45:44.289743 ignition[831]: Ignition finished successfully Aug 13 00:45:44.293742 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:45:44.295875 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 13 00:45:44.336547 ignition[839]: Ignition 2.21.0 Aug 13 00:45:44.337315 ignition[839]: Stage: kargs Aug 13 00:45:44.338085 ignition[839]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:44.338522 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:45:44.339597 ignition[839]: kargs: kargs passed Aug 13 00:45:44.339663 ignition[839]: Ignition finished successfully Aug 13 00:45:44.341211 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:45:44.343698 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:45:44.393328 ignition[846]: Ignition 2.21.0 Aug 13 00:45:44.395178 ignition[846]: Stage: disks Aug 13 00:45:44.397009 ignition[846]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:44.397052 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:45:44.399041 ignition[846]: disks: disks passed Aug 13 00:45:44.399164 ignition[846]: Ignition finished successfully Aug 13 00:45:44.401479 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:45:44.402422 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:45:44.402932 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:45:44.404050 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:45:44.404933 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:45:44.405862 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:45:44.408251 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:45:44.444916 systemd-fsck[855]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 00:45:44.448785 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:45:44.451905 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:45:44.619453 kernel: EXT4-fs (vda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 00:45:44.621930 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:45:44.624091 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:45:44.627127 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:45:44.633585 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:45:44.650717 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Aug 13 00:45:44.656750 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 00:45:44.659114 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:45:44.660327 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:45:44.663697 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:45:44.667615 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 00:45:44.671461 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Aug 13 00:45:44.680923 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:45:44.680990 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:45:44.681005 kernel: BTRFS info (device vda6): using free-space-tree Aug 13 00:45:44.699605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:45:44.789520 initrd-setup-root[894]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:45:44.807695 coreos-metadata[866]: Aug 13 00:45:44.806 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:45:44.809387 initrd-setup-root[901]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:45:44.810660 coreos-metadata[865]: Aug 13 00:45:44.810 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:45:44.816390 initrd-setup-root[908]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:45:44.825858 initrd-setup-root[915]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:45:44.828775 coreos-metadata[865]: Aug 13 00:45:44.825 INFO Fetch successful Aug 13 00:45:44.829357 coreos-metadata[866]: Aug 13 00:45:44.827 INFO Fetch successful Aug 13 00:45:44.834067 coreos-metadata[866]: Aug 13 00:45:44.833 INFO wrote hostname ci-4372.1.0-a-9a72d3155b to /sysroot/etc/hostname Aug 13 00:45:44.837181 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:45:44.843659 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Aug 13 00:45:44.844854 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Aug 13 00:45:44.992223 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:45:44.995745 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:45:44.999704 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:45:45.018471 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:45:45.019867 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:45:45.053680 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:45:45.071168 ignition[984]: INFO : Ignition 2.21.0 Aug 13 00:45:45.071168 ignition[984]: INFO : Stage: mount Aug 13 00:45:45.073518 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:45.073518 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:45:45.077260 ignition[984]: INFO : mount: mount passed Aug 13 00:45:45.077260 ignition[984]: INFO : Ignition finished successfully Aug 13 00:45:45.080309 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:45:45.082564 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:45:45.107386 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:45:45.133462 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997) Aug 13 00:45:45.136615 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:45:45.136693 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:45:45.137858 kernel: BTRFS info (device vda6): using free-space-tree Aug 13 00:45:45.144322 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
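The flatcar-metadata-hostname lines above fetch the droplet metadata JSON and persist its hostname into the new root. A rough sketch of that flow, assuming the metadata document exposes a "hostname" key and /sysroot is mounted as the log indicates (the real agent is a compiled binary, not this script):

    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    # Illustrative only: read the droplet metadata and write its hostname
    # under the sysroot, mirroring the "wrote hostname ... to
    # /sysroot/etc/hostname" log line above.
    with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
        hostname = json.load(resp)["hostname"]  # "hostname" key assumed present

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")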
Aug 13 00:45:45.185712 ignition[1013]: INFO : Ignition 2.21.0 Aug 13 00:45:45.185712 ignition[1013]: INFO : Stage: files Aug 13 00:45:45.187202 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:45.187202 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:45:45.190882 ignition[1013]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:45:45.190882 ignition[1013]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:45:45.190882 ignition[1013]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:45:45.194991 ignition[1013]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:45:45.195667 ignition[1013]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:45:45.196417 ignition[1013]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:45:45.195911 unknown[1013]: wrote ssh authorized keys file for user: core Aug 13 00:45:45.198950 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:45:45.199841 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 00:45:45.262567 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:45:45.635942 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:45:45.635942 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:45:45.637955 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:45:45.826861 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:45:45.959707 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:45:45.959707 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:45:45.966374 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 00:45:46.141914 systemd-networkd[827]: eth1: Gained IPv6LL Aug 13 00:45:46.269927 systemd-networkd[827]: eth0: Gained IPv6LL Aug 13 00:45:46.364704 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:45:47.025470 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:45:47.025470 ignition[1013]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:45:47.028119 ignition[1013]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:45:47.031396 ignition[1013]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:45:47.031396 ignition[1013]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 00:45:47.031396 ignition[1013]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:45:47.031396 ignition[1013]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:45:47.031396 ignition[1013]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:45:47.031396 ignition[1013]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:45:47.031396 ignition[1013]: INFO : files: files passed Aug 13 00:45:47.031396 ignition[1013]: INFO : Ignition finished successfully Aug 13 00:45:47.032968 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:45:47.035609 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:45:47.039630 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:45:47.058268 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:45:47.058415 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
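Each createFiles op above downloads a payload and writes it beneath the /sysroot prefix before the real root is entered. A minimal sketch of that pattern (illustrative; write_file is a hypothetical helper rather than Ignition code, and the URL and destination are copied from op(3) in the log):

    import pathlib
    import urllib.request

    SYSROOT = pathlib.Path("/sysroot")

    def write_file(url: str, dest: str) -> None:
        # Fetch a payload and place it under the not-yet-pivoted root,
        # roughly what a createFilesystemsFiles op in the log does.
        target = SYSROOT / dest.lstrip("/")
        target.parent.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(url) as resp:
            target.write_bytes(resp.read())

    write_file("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
               "/opt/helm-v3.13.2-linux-amd64.tar.gz")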
Aug 13 00:45:47.070555 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:45:47.070555 initrd-setup-root-after-ignition[1044]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:45:47.072379 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:45:47.074730 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:45:47.075771 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:45:47.077742 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:45:47.144140 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:45:47.144277 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:45:47.145487 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:45:47.146021 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:45:47.146914 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:45:47.148036 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:45:47.176007 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:45:47.178632 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:45:47.203243 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:45:47.204023 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:45:47.204948 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:45:47.205851 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:45:47.206108 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:45:47.207682 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:45:47.208808 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:45:47.209518 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:45:47.210607 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:45:47.211273 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:45:47.211902 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:45:47.212841 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:45:47.213551 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:45:47.214400 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:45:47.215215 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:45:47.215862 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:45:47.216559 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:45:47.216749 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:45:47.217627 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:45:47.218350 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:45:47.219133 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Aug 13 00:45:47.219255 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:45:47.220128 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:45:47.220472 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:45:47.221746 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:45:47.222056 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:45:47.223047 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:45:47.223226 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:45:47.224171 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:45:47.224369 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:45:47.227660 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:45:47.228427 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:45:47.229027 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:45:47.242707 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:45:47.243646 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:45:47.244345 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:45:47.245786 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:45:47.248751 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:45:47.258442 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:45:47.259462 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:45:47.273292 ignition[1068]: INFO : Ignition 2.21.0 Aug 13 00:45:47.274082 ignition[1068]: INFO : Stage: umount Aug 13 00:45:47.275523 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:45:47.275523 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:45:47.282054 ignition[1068]: INFO : umount: umount passed Aug 13 00:45:47.283094 ignition[1068]: INFO : Ignition finished successfully Aug 13 00:45:47.288596 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:45:47.290327 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:45:47.294303 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:45:47.306325 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:45:47.306419 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:45:47.306868 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:45:47.306927 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:45:47.307262 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:45:47.307308 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:45:47.311002 systemd[1]: Stopped target network.target - Network. Aug 13 00:45:47.327596 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:45:47.327694 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:45:47.328098 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:45:47.328435 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Aug 13 00:45:47.332571 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:45:47.333131 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:45:47.334260 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:45:47.335041 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:45:47.335095 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:45:47.335798 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:45:47.335835 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:45:47.336552 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:45:47.336634 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:45:47.337312 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:45:47.337360 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:45:47.338207 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:45:47.339011 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:45:47.341181 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:45:47.341351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:45:47.342788 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:45:47.342942 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:45:47.347546 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 00:45:47.348024 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:45:47.348201 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:45:47.350589 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 00:45:47.352623 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 00:45:47.353890 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:45:47.353976 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:45:47.354641 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:45:47.354717 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:45:47.356897 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:45:47.358953 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:45:47.359050 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:45:47.359819 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:45:47.359880 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:45:47.363775 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:45:47.363872 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:45:47.364675 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:45:47.364755 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:45:47.367012 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:45:47.373424 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Aug 13 00:45:47.373585 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:45:47.381676 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:45:47.381932 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:45:47.383258 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:45:47.383331 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:45:47.384009 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:45:47.384062 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:45:47.384903 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:45:47.384977 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:45:47.386234 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:45:47.386316 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:45:47.387345 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:45:47.387432 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:45:47.392095 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:45:47.394331 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 00:45:47.394548 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:45:47.395461 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:45:47.395557 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:45:47.398348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:45:47.398458 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:45:47.403127 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 00:45:47.403232 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:45:47.403278 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:45:47.406014 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:45:47.406139 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:45:47.414961 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:45:47.415120 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:45:47.416202 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:45:47.418152 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:45:47.453123 systemd[1]: Switching root. Aug 13 00:45:47.507434 systemd-journald[210]: Received SIGTERM from PID 1 (systemd). 
Aug 13 00:45:47.507553 systemd-journald[210]: Journal stopped Aug 13 00:45:49.031597 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:45:49.032074 kernel: SELinux: policy capability open_perms=1 Aug 13 00:45:49.032107 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:45:49.032125 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:45:49.032296 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:45:49.032315 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:45:49.032328 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:45:49.032342 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:45:49.032363 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 00:45:49.032382 kernel: audit: type=1403 audit(1755045947.629:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:45:49.032701 systemd[1]: Successfully loaded SELinux policy in 49.350ms. Aug 13 00:45:49.032743 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.938ms. Aug 13 00:45:49.032760 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:45:49.032977 systemd[1]: Detected virtualization kvm. Aug 13 00:45:49.033004 systemd[1]: Detected architecture x86-64. Aug 13 00:45:49.033023 systemd[1]: Detected first boot. Aug 13 00:45:49.033042 systemd[1]: Hostname set to . Aug 13 00:45:49.033061 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:45:49.033077 zram_generator::config[1113]: No configuration found. Aug 13 00:45:49.033092 kernel: Guest personality initialized and is inactive Aug 13 00:45:49.035152 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 00:45:49.035179 kernel: Initialized host personality Aug 13 00:45:49.035194 kernel: NET: Registered PF_VSOCK protocol family Aug 13 00:45:49.035215 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:45:49.035242 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 00:45:49.035260 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:45:49.035280 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:45:49.035303 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:45:49.035324 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:45:49.035352 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:45:49.035379 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:45:49.035637 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:45:49.036536 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:45:49.036558 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:45:49.036574 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:45:49.036588 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:45:49.036601 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
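The SELinux lines above enumerate the policy capabilities compiled into the loaded policy. On the running system the same flags can be read back from selinuxfs; a small read-only peek, assuming selinuxfs is mounted at the usual /sys/fs/selinux:

    import pathlib

    # Illustrative: print the policy capability flags the kernel logged above.
    capdir = pathlib.Path("/sys/fs/selinux/policy_capabilities")
    for cap in sorted(capdir.iterdir()):
        print(cap.name, "=", cap.read_text().strip())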
Aug 13 00:45:49.036619 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:45:49.036651 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:45:49.036673 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:45:49.036693 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:45:49.036720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:45:49.036740 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:45:49.036760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:45:49.036785 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:45:49.036805 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:45:49.036833 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:45:49.036847 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:45:49.036860 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:45:49.036873 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:45:49.036892 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:45:49.036906 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:45:49.036919 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:45:49.036936 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:45:49.036949 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:45:49.036962 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 00:45:49.036976 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:45:49.036990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:45:49.037003 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:45:49.037017 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:45:49.037029 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:45:49.037042 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:45:49.037070 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:45:49.037083 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:49.037096 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:45:49.037120 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:45:49.037140 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:45:49.037159 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:45:49.037178 systemd[1]: Reached target machines.target - Containers. Aug 13 00:45:49.037196 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Aug 13 00:45:49.037214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:45:49.037237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:45:49.037257 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:45:49.037277 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:45:49.037298 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:45:49.037317 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:45:49.037337 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:45:49.037357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:45:49.037375 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:45:49.037393 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:45:49.037660 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:45:49.037696 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:45:49.037716 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:45:49.037737 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:45:49.037756 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:45:49.037784 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:45:49.037803 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:45:49.037821 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:45:49.037841 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 00:45:49.037866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:45:49.037889 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:45:49.038202 systemd[1]: Stopped verity-setup.service. Aug 13 00:45:49.038231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:49.038254 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:45:49.038267 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:45:49.038281 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:45:49.038789 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:45:49.038815 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:45:49.038834 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:45:49.038848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:45:49.039743 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:45:49.039779 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:45:49.039802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Aug 13 00:45:49.039824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:45:49.039843 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:45:49.039861 kernel: loop: module loaded Aug 13 00:45:49.040148 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:45:49.040192 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:45:49.040214 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:45:49.040419 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:45:49.040452 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:45:49.040465 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:45:49.040478 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:45:49.040492 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:45:49.043629 kernel: fuse: init (API version 7.41) Aug 13 00:45:49.043897 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:45:49.043920 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:45:49.043934 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:45:49.043947 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:45:49.043964 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:45:49.043979 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:45:49.044094 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:45:49.044123 kernel: ACPI: bus type drm_connector registered Aug 13 00:45:49.044143 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 00:45:49.044163 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:45:49.044189 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:45:49.046810 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:45:49.047036 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:45:49.047060 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:45:49.047079 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:45:49.047101 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:45:49.047125 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:45:49.047495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:45:49.047535 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 00:45:49.047549 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:45:49.047562 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:45:49.047575 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Aug 13 00:45:49.047588 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:45:49.047602 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:45:49.047615 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 00:45:49.047628 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:45:49.047704 systemd-journald[1190]: Collecting audit messages is disabled. Aug 13 00:45:49.047749 kernel: loop0: detected capacity change from 0 to 113872 Aug 13 00:45:49.047772 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 00:45:49.047794 systemd-journald[1190]: Journal started Aug 13 00:45:49.047820 systemd-journald[1190]: Runtime Journal (/run/log/journal/25b7f0adc3914ff4ba685ff630e923ce) is 4.9M, max 39.5M, 34.6M free. Aug 13 00:45:48.450651 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:45:49.050273 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:45:48.465946 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 00:45:48.466498 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:45:49.062846 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:45:49.084524 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:45:49.118612 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:45:49.128449 kernel: loop1: detected capacity change from 0 to 8 Aug 13 00:45:49.139706 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:45:49.143680 systemd-journald[1190]: Time spent on flushing to /var/log/journal/25b7f0adc3914ff4ba685ff630e923ce is 44.538ms for 1021 entries. Aug 13 00:45:49.143680 systemd-journald[1190]: System Journal (/var/log/journal/25b7f0adc3914ff4ba685ff630e923ce) is 8M, max 195.6M, 187.6M free. Aug 13 00:45:49.197365 systemd-journald[1190]: Received client request to flush runtime journal. Aug 13 00:45:49.199481 kernel: loop2: detected capacity change from 0 to 221472 Aug 13 00:45:49.164867 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:45:49.201628 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:45:49.208587 kernel: loop3: detected capacity change from 0 to 146240 Aug 13 00:45:49.260443 kernel: loop4: detected capacity change from 0 to 113872 Aug 13 00:45:49.261746 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Aug 13 00:45:49.261776 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Aug 13 00:45:49.334182 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:45:49.345454 kernel: loop5: detected capacity change from 0 to 8 Aug 13 00:45:49.351490 kernel: loop6: detected capacity change from 0 to 221472 Aug 13 00:45:49.375493 kernel: loop7: detected capacity change from 0 to 146240 Aug 13 00:45:49.405128 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Aug 13 00:45:49.406083 (sd-merge)[1261]: Merged extensions into '/usr'. Aug 13 00:45:49.422071 systemd[1]: Reload requested from client PID 1219 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:45:49.422096 systemd[1]: Reloading... 
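The (sd-merge) lines above show systemd-sysext overlaying the containerd, docker, kubernetes, and OEM extension images onto /usr. One way to inspect that state from a shell on the booted host (illustrative; assumes the systemd-sysext CLI is available and that Ignition linked extension images under /etc/extensions, as logged earlier):

    import pathlib
    import subprocess

    # List the extension images present in /etc/extensions, then ask
    # systemd-sysext which hierarchies are currently merged.
    for image in sorted(pathlib.Path("/etc/extensions").glob("*.raw")):
        print("extension image:", image)
    subprocess.run(["systemd-sysext", "status"], check=True)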
Aug 13 00:45:49.727267 zram_generator::config[1291]: No configuration found. Aug 13 00:45:49.873213 ldconfig[1212]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:45:49.924819 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:45:50.047376 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:45:50.048484 systemd[1]: Reloading finished in 625 ms. Aug 13 00:45:50.065898 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:45:50.069290 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:45:50.083633 systemd[1]: Starting ensure-sysext.service... Aug 13 00:45:50.087689 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:45:50.130017 systemd[1]: Reload requested from client PID 1331 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:45:50.130047 systemd[1]: Reloading... Aug 13 00:45:50.175587 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 00:45:50.175636 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 00:45:50.176110 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:45:50.178645 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:45:50.179962 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:45:50.180381 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Aug 13 00:45:50.182630 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Aug 13 00:45:50.190918 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:45:50.190933 systemd-tmpfiles[1332]: Skipping /boot Aug 13 00:45:50.237033 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:45:50.237050 systemd-tmpfiles[1332]: Skipping /boot Aug 13 00:45:50.322443 zram_generator::config[1359]: No configuration found. Aug 13 00:45:50.472162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:45:50.570925 systemd[1]: Reloading finished in 440 ms. Aug 13 00:45:50.583462 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:45:50.590232 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:45:50.598594 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:45:50.602355 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:45:50.606399 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:45:50.613402 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:45:50.617200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Aug 13 00:45:50.621104 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:45:50.627397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:50.629825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:45:50.634751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:45:50.637273 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:45:50.643376 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:45:50.644034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:45:50.644221 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:45:50.644376 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:50.650610 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:50.650861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:45:50.651050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:45:50.651144 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:45:50.651258 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:50.659027 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:50.659447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:45:50.662835 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:45:50.664713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:45:50.664930 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:45:50.665152 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:45:50.674252 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:45:50.679029 systemd[1]: Finished ensure-sysext.service. Aug 13 00:45:50.693957 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:45:50.702557 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Aug 13 00:45:50.710423 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:45:50.711998 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:45:50.716946 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:45:50.718553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:45:50.732503 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:45:50.733589 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:45:50.735357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:45:50.736658 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:45:50.738886 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:45:50.746966 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:45:50.747702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:45:50.749710 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:45:50.759504 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:45:50.765334 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:45:50.796129 systemd-udevd[1408]: Using default interface naming scheme 'v255'. Aug 13 00:45:50.830083 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:45:50.844850 augenrules[1446]: No rules Aug 13 00:45:50.848201 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:45:50.850834 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:45:50.859733 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:45:50.863000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:45:50.869214 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:45:51.062015 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:45:51.063169 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:45:51.067365 systemd-resolved[1407]: Positive Trust Anchors: Aug 13 00:45:51.067386 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:45:51.067479 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:45:51.081344 systemd-resolved[1407]: Using system hostname 'ci-4372.1.0-a-9a72d3155b'. Aug 13 00:45:51.088081 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Aug 13 00:45:51.089347 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:45:51.090094 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:45:51.091745 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:45:51.092354 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:45:51.092930 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 00:45:51.094103 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:45:51.094912 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:45:51.096064 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:45:51.096833 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:45:51.096883 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:45:51.097660 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:45:51.101012 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:45:51.105668 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:45:51.113363 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:45:51.114942 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:45:51.115860 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:45:51.125923 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:45:51.127387 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:45:51.137062 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:45:51.145975 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:45:51.146608 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:45:51.147168 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:45:51.147206 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:45:51.152722 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:45:51.157678 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:45:51.163882 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:45:51.170768 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:45:51.175851 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:45:51.176453 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:45:51.184833 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 00:45:51.189461 systemd-networkd[1457]: lo: Link UP Aug 13 00:45:51.189475 systemd-networkd[1457]: lo: Gained carrier Aug 13 00:45:51.190869 systemd-networkd[1457]: Enumeration completed Aug 13 00:45:51.196668 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Aug 13 00:45:51.207126 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:45:51.233411 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:45:51.239815 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:45:51.253121 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:45:51.255007 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:45:51.257935 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:45:51.260926 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:45:51.272141 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:45:51.273611 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:45:51.279558 jq[1491]: false Aug 13 00:45:51.276506 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:45:51.281566 systemd[1]: Reached target network.target - Network. Aug 13 00:45:51.293125 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Refreshing passwd entry cache Aug 13 00:45:51.290108 oslogin_cache_refresh[1493]: Refreshing passwd entry cache Aug 13 00:45:51.296191 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:45:51.307474 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Failure getting users, quitting Aug 13 00:45:51.307474 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:45:51.307474 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Refreshing group entry cache Aug 13 00:45:51.305852 oslogin_cache_refresh[1493]: Failure getting users, quitting Aug 13 00:45:51.305884 oslogin_cache_refresh[1493]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:45:51.305960 oslogin_cache_refresh[1493]: Refreshing group entry cache Aug 13 00:45:51.308728 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:45:51.315530 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Failure getting groups, quitting Aug 13 00:45:51.315530 google_oslogin_nss_cache[1493]: oslogin_cache_refresh[1493]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:45:51.313942 oslogin_cache_refresh[1493]: Failure getting groups, quitting Aug 13 00:45:51.313960 oslogin_cache_refresh[1493]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:45:51.319840 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:45:51.322336 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:45:51.322679 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:45:51.337574 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 00:45:51.337962 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 00:45:51.344117 extend-filesystems[1492]: Found /dev/vda6 Aug 13 00:45:51.345755 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Aug 13 00:45:51.348479 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:45:51.392978 jq[1503]: true Aug 13 00:45:51.404450 extend-filesystems[1492]: Found /dev/vda9 Aug 13 00:45:51.401982 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:45:51.402332 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:45:51.417640 extend-filesystems[1492]: Checking size of /dev/vda9 Aug 13 00:45:51.417340 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:45:51.417046 dbus-daemon[1488]: [system] SELinux support is enabled Aug 13 00:45:51.423157 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:45:51.423220 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:45:51.425380 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:45:51.425437 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:45:51.440945 update_engine[1502]: I20250813 00:45:51.438179 1502 main.cc:92] Flatcar Update Engine starting Aug 13 00:45:51.455549 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:45:51.461455 update_engine[1502]: I20250813 00:45:51.460652 1502 update_check_scheduler.cc:74] Next update check in 2m20s Aug 13 00:45:51.465577 coreos-metadata[1487]: Aug 13 00:45:51.464 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:45:51.466042 coreos-metadata[1487]: Aug 13 00:45:51.465 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Aug 13 00:45:51.471056 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:45:51.478740 tar[1516]: linux-amd64/helm Aug 13 00:45:51.481371 extend-filesystems[1492]: Resized partition /dev/vda9 Aug 13 00:45:51.482406 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:45:51.488665 extend-filesystems[1540]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 00:45:51.497504 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 13 00:45:51.497654 jq[1526]: true Aug 13 00:45:51.518360 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:45:51.588103 systemd-logind[1500]: New seat seat0. Aug 13 00:45:51.589065 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:45:51.626276 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 00:45:51.660172 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:45:51.660172 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 00:45:51.660172 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 00:45:51.665999 extend-filesystems[1492]: Resized filesystem in /dev/vda9 Aug 13 00:45:51.661016 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:45:51.661384 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
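For reference, the resize2fs messages above can be sanity-checked with a little arithmetic: the ext4 filesystem on /dev/vda9 grew from 553472 to 15121403 blocks at 4 KiB per block. A minimal Python sketch of that calculation (the block counts and block size are copied from the log; nothing else is assumed):

BLOCK_SIZE = 4096  # bytes; resize2fs reports "(4k) blocks"

old_blocks = 553_472      # size before the resize, per extend-filesystems
new_blocks = 15_121_403   # size after the on-line resize

old_gib = old_blocks * BLOCK_SIZE / 2**30
new_gib = new_blocks * BLOCK_SIZE / 2**30
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB")
# prints roughly: before: 2.11 GiB, after: 57.68 GiB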
Aug 13 00:45:51.730791 bash[1558]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:45:51.735730 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:45:51.746502 systemd[1]: Starting sshkeys.service... Aug 13 00:45:51.817534 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:45:51.886860 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:45:51.892796 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:45:52.024169 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Aug 13 00:45:52.034298 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 13 00:45:52.036582 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:45:52.084449 locksmithd[1534]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:45:52.126366 coreos-metadata[1566]: Aug 13 00:45:52.125 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:45:52.133514 coreos-metadata[1566]: Aug 13 00:45:52.129 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json) Aug 13 00:45:52.160455 kernel: ISO 9660 Extensions: RRIP_1991A Aug 13 00:45:52.167207 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 13 00:45:52.171363 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 13 00:45:52.204457 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:45:52.210801 systemd-networkd[1457]: eth1: Configuring with /run/systemd/network/10-0a:9e:df:af:4e:eb.network. Aug 13 00:45:52.229251 systemd-networkd[1457]: eth1: Link UP Aug 13 00:45:52.233175 systemd-networkd[1457]: eth1: Gained carrier Aug 13 00:45:52.246039 systemd-networkd[1457]: eth0: Configuring with /run/systemd/network/10-d6:96:85:0b:69:d4.network. Aug 13 00:45:52.264199 systemd-networkd[1457]: eth0: Link UP Aug 13 00:45:52.267772 systemd-networkd[1457]: eth0: Gained carrier Aug 13 00:45:52.268601 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Aug 13 00:45:52.280803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 00:45:52.285515 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Aug 13 00:45:52.290777 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:45:52.293584 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. 
Aug 13 00:45:52.301617 containerd[1536]: time="2025-08-13T00:45:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 00:45:52.317309 containerd[1536]: time="2025-08-13T00:45:52.317240885Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 00:45:52.368800 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.367357403Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.317µs" Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.369491688Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.369562386Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.369823777Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.369857232Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.369897182Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.369986558Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.370006221Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.370384415Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.371458084Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.371496576Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:45:52.371840 containerd[1536]: time="2025-08-13T00:45:52.371511029Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 00:45:52.372225 containerd[1536]: time="2025-08-13T00:45:52.371684018Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 00:45:52.372225 containerd[1536]: time="2025-08-13T00:45:52.371988675Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:45:52.372225 containerd[1536]: time="2025-08-13T00:45:52.372040417Z" level=info msg="skip loading plugin" 
error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:45:52.372225 containerd[1536]: time="2025-08-13T00:45:52.372058801Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 00:45:52.372225 containerd[1536]: time="2025-08-13T00:45:52.372101671Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 00:45:52.375139 containerd[1536]: time="2025-08-13T00:45:52.372492803Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 00:45:52.375139 containerd[1536]: time="2025-08-13T00:45:52.372649168Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:45:52.380308 containerd[1536]: time="2025-08-13T00:45:52.380240151Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380344256Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380384528Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380427279Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380448473Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380477550Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380506364Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380526070Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380542739Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380556979Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380571115Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380591349Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380825948Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380870846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 00:45:52.381594 containerd[1536]: time="2025-08-13T00:45:52.380910988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 
Aug 13 00:45:52.382080 containerd[1536]: time="2025-08-13T00:45:52.380930548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 00:45:52.382080 containerd[1536]: time="2025-08-13T00:45:52.380947723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 00:45:52.382080 containerd[1536]: time="2025-08-13T00:45:52.380963264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 00:45:52.382080 containerd[1536]: time="2025-08-13T00:45:52.380979771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 00:45:52.382080 containerd[1536]: time="2025-08-13T00:45:52.380994300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 00:45:52.382080 containerd[1536]: time="2025-08-13T00:45:52.381014651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 00:45:52.382080 containerd[1536]: time="2025-08-13T00:45:52.381030581Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 00:45:52.382080 containerd[1536]: time="2025-08-13T00:45:52.381049067Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 00:45:52.382080 containerd[1536]: time="2025-08-13T00:45:52.381142303Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 00:45:52.382080 containerd[1536]: time="2025-08-13T00:45:52.381164785Z" level=info msg="Start snapshots syncer" Aug 13 00:45:52.391443 containerd[1536]: time="2025-08-13T00:45:52.388994901Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 00:45:52.391443 containerd[1536]: time="2025-08-13T00:45:52.389286250Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 00:45:52.391706 containerd[1536]: time="2025-08-13T00:45:52.389661456Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 00:45:52.401104 sshd_keygen[1533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:45:52.410000 containerd[1536]: time="2025-08-13T00:45:52.409926709Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 00:45:52.410228 containerd[1536]: time="2025-08-13T00:45:52.410194413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 00:45:52.410284 containerd[1536]: time="2025-08-13T00:45:52.410256401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 00:45:52.410317 containerd[1536]: time="2025-08-13T00:45:52.410293119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 00:45:52.410357 containerd[1536]: time="2025-08-13T00:45:52.410311887Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 00:45:52.410357 containerd[1536]: time="2025-08-13T00:45:52.410339045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 00:45:52.410400 containerd[1536]: time="2025-08-13T00:45:52.410361294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 00:45:52.410400 containerd[1536]: time="2025-08-13T00:45:52.410381045Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 00:45:52.410476 containerd[1536]: time="2025-08-13T00:45:52.410457443Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 00:45:52.410520 containerd[1536]: time="2025-08-13T00:45:52.410497522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 00:45:52.410643 containerd[1536]: time="2025-08-13T00:45:52.410532568Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 00:45:52.411742 containerd[1536]: time="2025-08-13T00:45:52.411698380Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:45:52.411821 containerd[1536]: time="2025-08-13T00:45:52.411754592Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:45:52.411821 containerd[1536]: time="2025-08-13T00:45:52.411769413Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:45:52.411821 containerd[1536]: time="2025-08-13T00:45:52.411783707Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:45:52.411821 containerd[1536]: time="2025-08-13T00:45:52.411794824Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 00:45:52.411821 containerd[1536]: time="2025-08-13T00:45:52.411808222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 00:45:52.411940 containerd[1536]: time="2025-08-13T00:45:52.411824925Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 00:45:52.411940 containerd[1536]: time="2025-08-13T00:45:52.411851465Z" level=info msg="runtime interface created" Aug 13 00:45:52.411940 containerd[1536]: time="2025-08-13T00:45:52.411858721Z" level=info msg="created NRI interface" Aug 13 00:45:52.411940 containerd[1536]: time="2025-08-13T00:45:52.411870215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 00:45:52.411940 containerd[1536]: time="2025-08-13T00:45:52.411893552Z" level=info msg="Connect containerd service" Aug 13 00:45:52.412037 containerd[1536]: time="2025-08-13T00:45:52.411953536Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:45:52.413924 containerd[1536]: time="2025-08-13T00:45:52.413053171Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:45:52.467624 coreos-metadata[1487]: Aug 13 00:45:52.466 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Aug 13 00:45:52.482682 coreos-metadata[1487]: Aug 13 00:45:52.479 INFO Fetch successful Aug 13 00:45:52.483442 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 00:45:52.499356 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 13 00:45:52.504131 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:45:52.542451 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:45:52.542529 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 13 00:45:52.542577 kernel: virtio-pci 0000:00:02.0: vgaarb: 
deactivate vga console Aug 13 00:45:52.548604 kernel: Console: switching to colour dummy device 80x25 Aug 13 00:45:52.548688 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 13 00:45:52.548706 kernel: [drm] features: -context_init Aug 13 00:45:52.548720 kernel: [drm] number of scanouts: 1 Aug 13 00:45:52.548734 kernel: [drm] number of cap sets: 0 Aug 13 00:45:52.548748 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Aug 13 00:45:52.545538 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:45:52.551791 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:45:52.574889 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:45:52.576701 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:45:52.630885 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:45:52.631501 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:45:52.635729 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:45:52.724696 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:45:52.733096 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:45:52.740626 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:45:52.742042 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:45:52.768114 containerd[1536]: time="2025-08-13T00:45:52.767833045Z" level=info msg="Start subscribing containerd event" Aug 13 00:45:52.768114 containerd[1536]: time="2025-08-13T00:45:52.767918866Z" level=info msg="Start recovering state" Aug 13 00:45:52.768328 containerd[1536]: time="2025-08-13T00:45:52.768165972Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:45:52.768449 containerd[1536]: time="2025-08-13T00:45:52.768401845Z" level=info msg="Start event monitor" Aug 13 00:45:52.768565 containerd[1536]: time="2025-08-13T00:45:52.768546459Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:45:52.768666 containerd[1536]: time="2025-08-13T00:45:52.768512656Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:45:52.768722 containerd[1536]: time="2025-08-13T00:45:52.768629441Z" level=info msg="Start streaming server" Aug 13 00:45:52.768722 containerd[1536]: time="2025-08-13T00:45:52.768701730Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:45:52.768722 containerd[1536]: time="2025-08-13T00:45:52.768713668Z" level=info msg="runtime interface starting up..." Aug 13 00:45:52.768808 containerd[1536]: time="2025-08-13T00:45:52.768723086Z" level=info msg="starting plugins..." Aug 13 00:45:52.768808 containerd[1536]: time="2025-08-13T00:45:52.768752402Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 00:45:52.770916 containerd[1536]: time="2025-08-13T00:45:52.768970654Z" level=info msg="containerd successfully booted in 0.472094s" Aug 13 00:45:52.769618 systemd[1]: Started containerd.service - containerd container runtime. 
Aug 13 00:45:53.086485 systemd-logind[1500]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:45:53.140088 coreos-metadata[1566]: Aug 13 00:45:53.130 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Aug 13 00:45:53.155405 systemd-logind[1500]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 00:45:53.157500 coreos-metadata[1566]: Aug 13 00:45:53.157 INFO Fetch successful Aug 13 00:45:53.164662 unknown[1566]: wrote ssh authorized keys file for user: core Aug 13 00:45:53.198613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:45:53.215750 update-ssh-keys[1641]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:45:53.219641 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:45:53.228273 systemd[1]: Finished sshkeys.service. Aug 13 00:45:53.238435 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:45:53.289587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:45:53.289882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:45:53.297723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:45:53.420200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:45:53.441893 tar[1516]: linux-amd64/LICENSE Aug 13 00:45:53.442410 tar[1516]: linux-amd64/README.md Aug 13 00:45:53.467018 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:45:53.885815 systemd-networkd[1457]: eth1: Gained IPv6LL Aug 13 00:45:53.887286 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Aug 13 00:45:53.889323 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:45:53.890284 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:45:53.893203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:45:53.897855 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:45:53.939673 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:45:54.143767 systemd-networkd[1457]: eth0: Gained IPv6LL Aug 13 00:45:54.144710 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Aug 13 00:45:54.371136 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:45:54.375340 systemd[1]: Started sshd@0-24.144.89.98:22-139.178.68.195:49070.service - OpenSSH per-connection server daemon (139.178.68.195:49070). Aug 13 00:45:54.507423 sshd[1672]: Accepted publickey for core from 139.178.68.195 port 49070 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:45:54.511608 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:45:54.525608 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:45:54.529656 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:45:54.547286 systemd-logind[1500]: New session 1 of user core. Aug 13 00:45:54.572799 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:45:54.579196 systemd[1]: Starting user@500.service - User Manager for UID 500... 
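The coreos-metadata units above fetch the droplet's metadata document from the link-local address 169.254.169.254 and use it to populate /home/core/.ssh/authorized_keys. A hedged sketch of the same request in Python, runnable only from inside a droplet; the "public_keys" field name is an assumption about the DigitalOcean metadata document, not something shown in this log:

import json
import urllib.request

URL = "http://169.254.169.254/metadata/v1.json"  # endpoint fetched in the log above

with urllib.request.urlopen(URL, timeout=5) as resp:
    metadata = json.load(resp)

# Print whatever SSH keys the platform exposes, if that field is present.
for key in metadata.get("public_keys", []):
    print(key)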
Aug 13 00:45:54.603196 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:45:54.610383 systemd-logind[1500]: New session c1 of user core. Aug 13 00:45:54.852028 systemd[1676]: Queued start job for default target default.target. Aug 13 00:45:54.866518 systemd[1676]: Created slice app.slice - User Application Slice. Aug 13 00:45:54.866574 systemd[1676]: Reached target paths.target - Paths. Aug 13 00:45:54.866630 systemd[1676]: Reached target timers.target - Timers. Aug 13 00:45:54.872660 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:45:54.898749 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:45:54.901090 systemd[1676]: Reached target sockets.target - Sockets. Aug 13 00:45:54.901289 systemd[1676]: Reached target basic.target - Basic System. Aug 13 00:45:54.901576 systemd[1676]: Reached target default.target - Main User Target. Aug 13 00:45:54.901733 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:45:54.901737 systemd[1676]: Startup finished in 274ms. Aug 13 00:45:54.909843 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:45:54.999896 systemd[1]: Started sshd@1-24.144.89.98:22-139.178.68.195:49086.service - OpenSSH per-connection server daemon (139.178.68.195:49086). Aug 13 00:45:55.091399 sshd[1687]: Accepted publickey for core from 139.178.68.195 port 49086 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:45:55.094237 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:45:55.104252 systemd-logind[1500]: New session 2 of user core. Aug 13 00:45:55.110783 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:45:55.185587 sshd[1689]: Connection closed by 139.178.68.195 port 49086 Aug 13 00:45:55.186319 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Aug 13 00:45:55.212028 systemd[1]: sshd@1-24.144.89.98:22-139.178.68.195:49086.service: Deactivated successfully. Aug 13 00:45:55.218213 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:45:55.225576 systemd-logind[1500]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:45:55.230629 systemd[1]: Started sshd@2-24.144.89.98:22-139.178.68.195:49100.service - OpenSSH per-connection server daemon (139.178.68.195:49100). Aug 13 00:45:55.233982 systemd-logind[1500]: Removed session 2. Aug 13 00:45:55.308435 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 49100 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:45:55.311386 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:45:55.320895 systemd-logind[1500]: New session 3 of user core. Aug 13 00:45:55.333896 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:45:55.405012 sshd[1697]: Connection closed by 139.178.68.195 port 49100 Aug 13 00:45:55.406388 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Aug 13 00:45:55.415282 systemd[1]: sshd@2-24.144.89.98:22-139.178.68.195:49100.service: Deactivated successfully. Aug 13 00:45:55.415286 systemd-logind[1500]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:45:55.418892 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:45:55.430106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:45:55.435731 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:45:55.436266 systemd[1]: Startup finished in 3.724s (kernel) + 7.007s (initrd) + 7.853s (userspace) = 18.585s. Aug 13 00:45:55.436749 systemd-logind[1500]: Removed session 3. Aug 13 00:45:55.445731 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:45:56.184400 kubelet[1705]: E0813 00:45:56.184308 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:45:56.187978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:45:56.188136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:45:56.188655 systemd[1]: kubelet.service: Consumed 1.458s CPU time, 263.8M memory peak. Aug 13 00:46:05.451964 systemd[1]: Started sshd@3-24.144.89.98:22-139.178.68.195:55718.service - OpenSSH per-connection server daemon (139.178.68.195:55718). Aug 13 00:46:05.567997 sshd[1720]: Accepted publickey for core from 139.178.68.195 port 55718 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:05.570791 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:05.579676 systemd-logind[1500]: New session 4 of user core. Aug 13 00:46:05.587825 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:46:05.661889 sshd[1722]: Connection closed by 139.178.68.195 port 55718 Aug 13 00:46:05.663936 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:05.678995 systemd[1]: sshd@3-24.144.89.98:22-139.178.68.195:55718.service: Deactivated successfully. Aug 13 00:46:05.682898 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:46:05.684239 systemd-logind[1500]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:46:05.690996 systemd[1]: Started sshd@4-24.144.89.98:22-139.178.68.195:55720.service - OpenSSH per-connection server daemon (139.178.68.195:55720). Aug 13 00:46:05.692281 systemd-logind[1500]: Removed session 4. Aug 13 00:46:05.763672 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 55720 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:05.766583 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:05.775524 systemd-logind[1500]: New session 5 of user core. Aug 13 00:46:05.780880 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:46:05.841459 sshd[1730]: Connection closed by 139.178.68.195 port 55720 Aug 13 00:46:05.842217 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:05.856825 systemd[1]: sshd@4-24.144.89.98:22-139.178.68.195:55720.service: Deactivated successfully. Aug 13 00:46:05.860221 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:46:05.862038 systemd-logind[1500]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:46:05.868310 systemd[1]: Started sshd@5-24.144.89.98:22-139.178.68.195:55732.service - OpenSSH per-connection server daemon (139.178.68.195:55732). Aug 13 00:46:05.869985 systemd-logind[1500]: Removed session 5. 
Aug 13 00:46:05.942053 sshd[1736]: Accepted publickey for core from 139.178.68.195 port 55732 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:05.944753 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:05.954112 systemd-logind[1500]: New session 6 of user core. Aug 13 00:46:05.962823 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:46:06.029117 sshd[1738]: Connection closed by 139.178.68.195 port 55732 Aug 13 00:46:06.030052 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:06.043550 systemd[1]: sshd@5-24.144.89.98:22-139.178.68.195:55732.service: Deactivated successfully. Aug 13 00:46:06.046531 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:46:06.047530 systemd-logind[1500]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:46:06.053353 systemd[1]: Started sshd@6-24.144.89.98:22-139.178.68.195:55744.service - OpenSSH per-connection server daemon (139.178.68.195:55744). Aug 13 00:46:06.054751 systemd-logind[1500]: Removed session 6. Aug 13 00:46:06.116618 sshd[1744]: Accepted publickey for core from 139.178.68.195 port 55744 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:06.118860 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:06.126193 systemd-logind[1500]: New session 7 of user core. Aug 13 00:46:06.135742 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:46:06.207490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:46:06.209662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:46:06.215316 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:46:06.215777 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:46:06.230877 sudo[1747]: pam_unix(sudo:session): session closed for user root Aug 13 00:46:06.235344 sshd[1746]: Connection closed by 139.178.68.195 port 55744 Aug 13 00:46:06.235923 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:06.251088 systemd[1]: sshd@6-24.144.89.98:22-139.178.68.195:55744.service: Deactivated successfully. Aug 13 00:46:06.255112 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:46:06.258258 systemd-logind[1500]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:46:06.264032 systemd[1]: Started sshd@7-24.144.89.98:22-139.178.68.195:55746.service - OpenSSH per-connection server daemon (139.178.68.195:55746). Aug 13 00:46:06.265966 systemd-logind[1500]: Removed session 7. Aug 13 00:46:06.333563 sshd[1756]: Accepted publickey for core from 139.178.68.195 port 55746 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:06.335047 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:06.346135 systemd-logind[1500]: New session 8 of user core. Aug 13 00:46:06.351736 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 13 00:46:06.418437 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:46:06.419185 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:46:06.427930 sudo[1762]: pam_unix(sudo:session): session closed for user root Aug 13 00:46:06.438470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:46:06.439824 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:46:06.440920 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:46:06.453092 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:46:06.464288 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:46:06.532998 augenrules[1793]: No rules Aug 13 00:46:06.536920 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:46:06.537366 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:46:06.539448 sudo[1761]: pam_unix(sudo:session): session closed for user root Aug 13 00:46:06.543442 sshd[1758]: Connection closed by 139.178.68.195 port 55746 Aug 13 00:46:06.547282 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:06.548814 kubelet[1766]: E0813 00:46:06.548346 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:46:06.559795 systemd[1]: sshd@7-24.144.89.98:22-139.178.68.195:55746.service: Deactivated successfully. Aug 13 00:46:06.562648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:46:06.562959 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:46:06.563461 systemd[1]: kubelet.service: Consumed 247ms CPU time, 110.4M memory peak. Aug 13 00:46:06.564097 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:46:06.566331 systemd-logind[1500]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:46:06.571247 systemd[1]: Started sshd@8-24.144.89.98:22-139.178.68.195:55756.service - OpenSSH per-connection server daemon (139.178.68.195:55756). Aug 13 00:46:06.573718 systemd-logind[1500]: Removed session 8. Aug 13 00:46:06.648866 sshd[1803]: Accepted publickey for core from 139.178.68.195 port 55756 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:06.651050 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:06.657485 systemd-logind[1500]: New session 9 of user core. Aug 13 00:46:06.672764 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:46:06.735155 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:46:06.736236 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:46:07.254476 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Aug 13 00:46:07.285094 (dockerd)[1824]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:46:07.713436 dockerd[1824]: time="2025-08-13T00:46:07.713198674Z" level=info msg="Starting up" Aug 13 00:46:07.716264 dockerd[1824]: time="2025-08-13T00:46:07.716191246Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 00:46:07.767644 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4130743279-merged.mount: Deactivated successfully. Aug 13 00:46:07.816504 dockerd[1824]: time="2025-08-13T00:46:07.816075873Z" level=info msg="Loading containers: start." Aug 13 00:46:07.833541 kernel: Initializing XFRM netlink socket Aug 13 00:46:08.133732 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Aug 13 00:46:08.917900 systemd-timesyncd[1422]: Contacted time server 23.186.168.127:123 (2.flatcar.pool.ntp.org). Aug 13 00:46:08.918008 systemd-resolved[1407]: Clock change detected. Flushing caches. Aug 13 00:46:08.918011 systemd-timesyncd[1422]: Initial clock synchronization to Wed 2025-08-13 00:46:08.917118 UTC. Aug 13 00:46:08.965680 systemd-networkd[1457]: docker0: Link UP Aug 13 00:46:08.969815 dockerd[1824]: time="2025-08-13T00:46:08.969744835Z" level=info msg="Loading containers: done." Aug 13 00:46:08.988934 dockerd[1824]: time="2025-08-13T00:46:08.988563182Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:46:08.988934 dockerd[1824]: time="2025-08-13T00:46:08.988687247Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 00:46:08.989190 dockerd[1824]: time="2025-08-13T00:46:08.988949507Z" level=info msg="Initializing buildkit" Aug 13 00:46:09.026158 dockerd[1824]: time="2025-08-13T00:46:09.026066263Z" level=info msg="Completed buildkit initialization" Aug 13 00:46:09.038004 dockerd[1824]: time="2025-08-13T00:46:09.037882504Z" level=info msg="Daemon has completed initialization" Aug 13 00:46:09.038314 dockerd[1824]: time="2025-08-13T00:46:09.038142813Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:46:09.038588 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:46:10.030156 containerd[1536]: time="2025-08-13T00:46:10.030095786Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:46:10.585189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730931066.mount: Deactivated successfully. 
Aug 13 00:46:11.829066 containerd[1536]: time="2025-08-13T00:46:11.828988725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:11.830402 containerd[1536]: time="2025-08-13T00:46:11.830330016Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 00:46:11.830728 containerd[1536]: time="2025-08-13T00:46:11.830691587Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:11.833422 containerd[1536]: time="2025-08-13T00:46:11.833379339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:11.834641 containerd[1536]: time="2025-08-13T00:46:11.834603433Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.803720933s" Aug 13 00:46:11.835314 containerd[1536]: time="2025-08-13T00:46:11.834771806Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 00:46:11.835500 containerd[1536]: time="2025-08-13T00:46:11.835464654Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:46:13.301259 containerd[1536]: time="2025-08-13T00:46:13.301177636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:13.302662 containerd[1536]: time="2025-08-13T00:46:13.302607450Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 00:46:13.303608 containerd[1536]: time="2025-08-13T00:46:13.303237738Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:13.306679 containerd[1536]: time="2025-08-13T00:46:13.306587033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:13.308354 containerd[1536]: time="2025-08-13T00:46:13.308066812Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.472566196s" Aug 13 00:46:13.308354 containerd[1536]: time="2025-08-13T00:46:13.308120500Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 
00:46:13.309100 containerd[1536]: time="2025-08-13T00:46:13.309073996Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:46:14.594824 containerd[1536]: time="2025-08-13T00:46:14.594753726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:14.595824 containerd[1536]: time="2025-08-13T00:46:14.595784601Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 00:46:14.596373 containerd[1536]: time="2025-08-13T00:46:14.596268413Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:14.599391 containerd[1536]: time="2025-08-13T00:46:14.599351254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:14.600650 containerd[1536]: time="2025-08-13T00:46:14.600581612Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.291375758s" Aug 13 00:46:14.600650 containerd[1536]: time="2025-08-13T00:46:14.600620601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 00:46:14.601274 containerd[1536]: time="2025-08-13T00:46:14.601247495Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:46:15.068524 systemd-resolved[1407]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Aug 13 00:46:15.902994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734185098.mount: Deactivated successfully. 
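The containerd pull messages above report both the bytes read and the wall-clock time per image, so effective pull throughput can be estimated directly from the log. A small illustrative Python calculation using the figures for the first two pulls (byte counts and durations copied from the log lines above):

pulls = {
    "kube-apiserver:v1.31.11": (28_077_759, 1.803720933),
    "kube-controller-manager:v1.31.11": (24_713_245, 1.472566196),
}

for image, (size_bytes, seconds) in pulls.items():
    mib_per_s = size_bytes / 2**20 / seconds
    print(f"{image}: {mib_per_s:.1f} MiB/s")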
Aug 13 00:46:16.632346 containerd[1536]: time="2025-08-13T00:46:16.632276358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:16.633754 containerd[1536]: time="2025-08-13T00:46:16.633695080Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 00:46:16.634448 containerd[1536]: time="2025-08-13T00:46:16.634379275Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:16.637026 containerd[1536]: time="2025-08-13T00:46:16.636673337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:16.637699 containerd[1536]: time="2025-08-13T00:46:16.637652456Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 2.036369848s" Aug 13 00:46:16.637872 containerd[1536]: time="2025-08-13T00:46:16.637847193Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 00:46:16.638794 containerd[1536]: time="2025-08-13T00:46:16.638656048Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:46:17.211749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount757173370.mount: Deactivated successfully. Aug 13 00:46:17.462340 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:46:17.466577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:46:17.730258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:46:17.744186 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:46:17.831261 kubelet[2123]: E0813 00:46:17.831190 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:46:17.837962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:46:17.838223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:46:17.839815 systemd[1]: kubelet.service: Consumed 252ms CPU time, 110.6M memory peak. Aug 13 00:46:18.127604 systemd-resolved[1407]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Aug 13 00:46:18.364977 containerd[1536]: time="2025-08-13T00:46:18.364895761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:18.367398 containerd[1536]: time="2025-08-13T00:46:18.366975448Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 00:46:18.368168 containerd[1536]: time="2025-08-13T00:46:18.368119650Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:18.371560 containerd[1536]: time="2025-08-13T00:46:18.371502110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:18.373329 containerd[1536]: time="2025-08-13T00:46:18.373256833Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.734333118s" Aug 13 00:46:18.373329 containerd[1536]: time="2025-08-13T00:46:18.373326075Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:46:18.374402 containerd[1536]: time="2025-08-13T00:46:18.374080437Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:46:18.866454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339516481.mount: Deactivated successfully. 
Aug 13 00:46:18.871126 containerd[1536]: time="2025-08-13T00:46:18.871045731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:46:18.872102 containerd[1536]: time="2025-08-13T00:46:18.872050421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:46:18.873337 containerd[1536]: time="2025-08-13T00:46:18.872654239Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:46:18.875105 containerd[1536]: time="2025-08-13T00:46:18.875023768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:46:18.876323 containerd[1536]: time="2025-08-13T00:46:18.875931818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 501.81513ms" Aug 13 00:46:18.876323 containerd[1536]: time="2025-08-13T00:46:18.875974656Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:46:18.877253 containerd[1536]: time="2025-08-13T00:46:18.877213508Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:46:19.402766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579975233.mount: Deactivated successfully. 
Aug 13 00:46:21.412625 containerd[1536]: time="2025-08-13T00:46:21.412552528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:21.413823 containerd[1536]: time="2025-08-13T00:46:21.413762455Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 00:46:21.415545 containerd[1536]: time="2025-08-13T00:46:21.415484492Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:21.420177 containerd[1536]: time="2025-08-13T00:46:21.420109906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:21.422022 containerd[1536]: time="2025-08-13T00:46:21.421940135Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.544680623s" Aug 13 00:46:21.422022 containerd[1536]: time="2025-08-13T00:46:21.422010778Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:46:24.399463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:46:24.399629 systemd[1]: kubelet.service: Consumed 252ms CPU time, 110.6M memory peak. Aug 13 00:46:24.402359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:46:24.439419 systemd[1]: Reload requested from client PID 2256 ('systemctl') (unit session-9.scope)... Aug 13 00:46:24.439443 systemd[1]: Reloading... Aug 13 00:46:24.613324 zram_generator::config[2296]: No configuration found. Aug 13 00:46:24.738529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:46:24.894422 systemd[1]: Reloading finished in 454 ms. Aug 13 00:46:24.977992 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:46:24.978312 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:46:24.978726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:46:24.978787 systemd[1]: kubelet.service: Consumed 138ms CPU time, 98.2M memory peak. Aug 13 00:46:24.981657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:46:25.175484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:46:25.192944 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:46:25.264851 kubelet[2353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:46:25.265605 kubelet[2353]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 13 00:46:25.265682 kubelet[2353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:46:25.265889 kubelet[2353]: I0813 00:46:25.265842 2353 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:46:26.137530 kubelet[2353]: I0813 00:46:26.137462 2353 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:46:26.137820 kubelet[2353]: I0813 00:46:26.137801 2353 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:46:26.138510 kubelet[2353]: I0813 00:46:26.138461 2353 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:46:26.185239 kubelet[2353]: I0813 00:46:26.185178 2353 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:46:26.192235 kubelet[2353]: E0813 00:46:26.191541 2353 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://24.144.89.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.144.89.98:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:26.214081 kubelet[2353]: I0813 00:46:26.214032 2353 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:46:26.225106 kubelet[2353]: I0813 00:46:26.225059 2353 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:46:26.228012 kubelet[2353]: I0813 00:46:26.227916 2353 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:46:26.228361 kubelet[2353]: I0813 00:46:26.228263 2353 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:46:26.228724 kubelet[2353]: I0813 00:46:26.228360 2353 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-a-9a72d3155b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:46:26.228724 kubelet[2353]: I0813 00:46:26.228718 2353 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:46:26.228724 kubelet[2353]: I0813 00:46:26.228733 2353 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:46:26.229031 kubelet[2353]: I0813 00:46:26.228918 2353 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:46:26.232786 kubelet[2353]: I0813 00:46:26.232699 2353 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:46:26.232786 kubelet[2353]: I0813 00:46:26.232765 2353 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:46:26.233077 kubelet[2353]: I0813 00:46:26.232833 2353 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:46:26.233077 kubelet[2353]: I0813 00:46:26.232867 2353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:46:26.241733 kubelet[2353]: W0813 00:46:26.240054 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.144.89.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-a-9a72d3155b&limit=500&resourceVersion=0": dial tcp 24.144.89.98:6443: connect: connection refused Aug 13 00:46:26.241733 kubelet[2353]: E0813 00:46:26.240174 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://24.144.89.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-a-9a72d3155b&limit=500&resourceVersion=0\": dial tcp 24.144.89.98:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:26.243406 kubelet[2353]: W0813 00:46:26.243311 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.144.89.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.144.89.98:6443: connect: connection refused Aug 13 00:46:26.243605 kubelet[2353]: E0813 00:46:26.243419 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.144.89.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.144.89.98:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:26.243605 kubelet[2353]: I0813 00:46:26.243555 2353 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:46:26.248144 kubelet[2353]: I0813 00:46:26.248085 2353 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:46:26.248335 kubelet[2353]: W0813 00:46:26.248209 2353 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:46:26.249419 kubelet[2353]: I0813 00:46:26.249342 2353 server.go:1274] "Started kubelet" Aug 13 00:46:26.249817 kubelet[2353]: I0813 00:46:26.249736 2353 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:46:26.251555 kubelet[2353]: I0813 00:46:26.251516 2353 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:46:26.254867 kubelet[2353]: I0813 00:46:26.254113 2353 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:46:26.254867 kubelet[2353]: I0813 00:46:26.254583 2353 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:46:26.257130 kubelet[2353]: E0813 00:46:26.255087 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.144.89.98:6443/api/v1/namespaces/default/events\": dial tcp 24.144.89.98:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-a-9a72d3155b.185b2d069edd0085 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-a-9a72d3155b,UID:ci-4372.1.0-a-9a72d3155b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-a-9a72d3155b,},FirstTimestamp:2025-08-13 00:46:26.249277573 +0000 UTC m=+1.049623541,LastTimestamp:2025-08-13 00:46:26.249277573 +0000 UTC m=+1.049623541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-a-9a72d3155b,}" Aug 13 00:46:26.266921 kubelet[2353]: I0813 00:46:26.266766 2353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:46:26.272355 kubelet[2353]: I0813 00:46:26.271639 2353 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:46:26.272355 kubelet[2353]: I0813 00:46:26.267326 2353 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:46:26.272355 kubelet[2353]: I0813 00:46:26.272115 2353 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:46:26.272355 kubelet[2353]: I0813 00:46:26.272199 2353 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:46:26.274357 kubelet[2353]: W0813 00:46:26.274222 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.144.89.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.144.89.98:6443: connect: connection refused Aug 13 00:46:26.274765 kubelet[2353]: E0813 00:46:26.274717 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.144.89.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.144.89.98:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:26.278391 kubelet[2353]: E0813 00:46:26.278003 2353 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-a-9a72d3155b\" not found" Aug 13 00:46:26.279189 kubelet[2353]: E0813 00:46:26.279155 2353 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:46:26.280595 kubelet[2353]: I0813 00:46:26.280563 2353 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:46:26.281269 kubelet[2353]: I0813 00:46:26.281156 2353 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:46:26.283850 kubelet[2353]: E0813 00:46:26.283798 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.89.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-a-9a72d3155b?timeout=10s\": dial tcp 24.144.89.98:6443: connect: connection refused" interval="200ms" Aug 13 00:46:26.287748 kubelet[2353]: I0813 00:46:26.287665 2353 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:46:26.314338 kubelet[2353]: I0813 00:46:26.314143 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:46:26.316915 kubelet[2353]: I0813 00:46:26.316864 2353 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:46:26.316915 kubelet[2353]: I0813 00:46:26.316911 2353 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:46:26.317103 kubelet[2353]: I0813 00:46:26.316948 2353 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:46:26.317103 kubelet[2353]: E0813 00:46:26.317023 2353 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:46:26.329184 kubelet[2353]: W0813 00:46:26.329038 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.144.89.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.144.89.98:6443: connect: connection refused Aug 13 00:46:26.329184 kubelet[2353]: E0813 00:46:26.329119 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.144.89.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.144.89.98:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:26.334948 kubelet[2353]: I0813 00:46:26.334912 2353 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:46:26.335365 kubelet[2353]: I0813 00:46:26.335126 2353 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:46:26.335365 kubelet[2353]: I0813 00:46:26.335161 2353 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:46:26.337823 kubelet[2353]: I0813 00:46:26.337781 2353 policy_none.go:49] "None policy: Start" Aug 13 00:46:26.339540 kubelet[2353]: I0813 00:46:26.339506 2353 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:46:26.340370 kubelet[2353]: I0813 00:46:26.339878 2353 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:46:26.347862 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:46:26.365132 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:46:26.372083 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:46:26.378301 kubelet[2353]: E0813 00:46:26.378221 2353 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-a-9a72d3155b\" not found" Aug 13 00:46:26.389639 kubelet[2353]: I0813 00:46:26.388913 2353 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:46:26.389639 kubelet[2353]: I0813 00:46:26.389229 2353 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:46:26.389804 kubelet[2353]: I0813 00:46:26.389249 2353 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:46:26.391065 kubelet[2353]: I0813 00:46:26.391042 2353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:46:26.394365 kubelet[2353]: E0813 00:46:26.393947 2353 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-a-9a72d3155b\" not found" Aug 13 00:46:26.434121 systemd[1]: Created slice kubepods-burstable-pod67b3a3d04a61cbb24ca41e3559372502.slice - libcontainer container kubepods-burstable-pod67b3a3d04a61cbb24ca41e3559372502.slice. 
Aug 13 00:46:26.454725 systemd[1]: Created slice kubepods-burstable-pod6ee3cad89cb86e47b8fa6123a25b4e64.slice - libcontainer container kubepods-burstable-pod6ee3cad89cb86e47b8fa6123a25b4e64.slice. Aug 13 00:46:26.473655 kubelet[2353]: I0813 00:46:26.473139 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ee3cad89cb86e47b8fa6123a25b4e64-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-a-9a72d3155b\" (UID: \"6ee3cad89cb86e47b8fa6123a25b4e64\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.473655 kubelet[2353]: I0813 00:46:26.473215 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ee3cad89cb86e47b8fa6123a25b4e64-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-a-9a72d3155b\" (UID: \"6ee3cad89cb86e47b8fa6123a25b4e64\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.473655 kubelet[2353]: I0813 00:46:26.473250 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ee3cad89cb86e47b8fa6123a25b4e64-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-a-9a72d3155b\" (UID: \"6ee3cad89cb86e47b8fa6123a25b4e64\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.473655 kubelet[2353]: I0813 00:46:26.473323 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ee3cad89cb86e47b8fa6123a25b4e64-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-a-9a72d3155b\" (UID: \"6ee3cad89cb86e47b8fa6123a25b4e64\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.473655 kubelet[2353]: I0813 00:46:26.473368 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fb9f19c6f7fe2d8189b98ea6a1954de-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-a-9a72d3155b\" (UID: \"0fb9f19c6f7fe2d8189b98ea6a1954de\") " pod="kube-system/kube-scheduler-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.474014 kubelet[2353]: I0813 00:46:26.473396 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67b3a3d04a61cbb24ca41e3559372502-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-a-9a72d3155b\" (UID: \"67b3a3d04a61cbb24ca41e3559372502\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.474014 kubelet[2353]: I0813 00:46:26.473430 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67b3a3d04a61cbb24ca41e3559372502-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-a-9a72d3155b\" (UID: \"67b3a3d04a61cbb24ca41e3559372502\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.474014 kubelet[2353]: I0813 00:46:26.473479 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ee3cad89cb86e47b8fa6123a25b4e64-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-a-9a72d3155b\" (UID: 
\"6ee3cad89cb86e47b8fa6123a25b4e64\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.474014 kubelet[2353]: I0813 00:46:26.473501 2353 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67b3a3d04a61cbb24ca41e3559372502-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-a-9a72d3155b\" (UID: \"67b3a3d04a61cbb24ca41e3559372502\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.481689 systemd[1]: Created slice kubepods-burstable-pod0fb9f19c6f7fe2d8189b98ea6a1954de.slice - libcontainer container kubepods-burstable-pod0fb9f19c6f7fe2d8189b98ea6a1954de.slice. Aug 13 00:46:26.485794 kubelet[2353]: E0813 00:46:26.485751 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.89.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-a-9a72d3155b?timeout=10s\": dial tcp 24.144.89.98:6443: connect: connection refused" interval="400ms" Aug 13 00:46:26.491679 kubelet[2353]: I0813 00:46:26.491595 2353 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.492928 kubelet[2353]: E0813 00:46:26.492872 2353 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://24.144.89.98:6443/api/v1/nodes\": dial tcp 24.144.89.98:6443: connect: connection refused" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.694650 kubelet[2353]: I0813 00:46:26.694506 2353 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.695246 kubelet[2353]: E0813 00:46:26.695165 2353 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://24.144.89.98:6443/api/v1/nodes\": dial tcp 24.144.89.98:6443: connect: connection refused" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:26.749810 kubelet[2353]: E0813 00:46:26.749673 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:26.751005 containerd[1536]: time="2025-08-13T00:46:26.750920283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-a-9a72d3155b,Uid:67b3a3d04a61cbb24ca41e3559372502,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:26.777320 kubelet[2353]: E0813 00:46:26.777215 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:26.787162 kubelet[2353]: E0813 00:46:26.786692 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:26.793053 containerd[1536]: time="2025-08-13T00:46:26.792507107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-a-9a72d3155b,Uid:0fb9f19c6f7fe2d8189b98ea6a1954de,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:26.793053 containerd[1536]: time="2025-08-13T00:46:26.792797070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-a-9a72d3155b,Uid:6ee3cad89cb86e47b8fa6123a25b4e64,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:26.890383 kubelet[2353]: E0813 00:46:26.889624 2353 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://24.144.89.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-a-9a72d3155b?timeout=10s\": dial tcp 24.144.89.98:6443: connect: connection refused" interval="800ms" Aug 13 00:46:26.975818 containerd[1536]: time="2025-08-13T00:46:26.975743061Z" level=info msg="connecting to shim beaf57e3142c477738d1c263985889bf8462ac539a4dc39eb88dc56f9d88248d" address="unix:///run/containerd/s/db3d0fb597bd4bd24c59aa259f3043bd8330f78cbafb7e1de8079f47fa6e197b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:26.977703 containerd[1536]: time="2025-08-13T00:46:26.977448804Z" level=info msg="connecting to shim 30e083e1db9e28488db981c0499e95514c1635d3e9bd8ec743c56bd3c5185d2c" address="unix:///run/containerd/s/26b932ba5d00c6e0ce7403fb59e3e8331dc0e51c8c90869c429d62edef6d51bc" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:26.982408 containerd[1536]: time="2025-08-13T00:46:26.977554934Z" level=info msg="connecting to shim 80c4a990b24b8ee082f1fc296119ce2560cb1d0ed440b4ba4096a523d8982363" address="unix:///run/containerd/s/2bc273d4c76cf1ba5349937b33b51e791ba5a2aa8093da7510a8075dbfacfc43" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:27.089887 kubelet[2353]: W0813 00:46:27.089763 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.144.89.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-a-9a72d3155b&limit=500&resourceVersion=0": dial tcp 24.144.89.98:6443: connect: connection refused Aug 13 00:46:27.090173 kubelet[2353]: E0813 00:46:27.089901 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://24.144.89.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-a-9a72d3155b&limit=500&resourceVersion=0\": dial tcp 24.144.89.98:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:27.099975 kubelet[2353]: I0813 00:46:27.098394 2353 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:27.099975 kubelet[2353]: E0813 00:46:27.098807 2353 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://24.144.89.98:6443/api/v1/nodes\": dial tcp 24.144.89.98:6443: connect: connection refused" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:27.124766 systemd[1]: Started cri-containerd-30e083e1db9e28488db981c0499e95514c1635d3e9bd8ec743c56bd3c5185d2c.scope - libcontainer container 30e083e1db9e28488db981c0499e95514c1635d3e9bd8ec743c56bd3c5185d2c. Aug 13 00:46:27.126628 systemd[1]: Started cri-containerd-80c4a990b24b8ee082f1fc296119ce2560cb1d0ed440b4ba4096a523d8982363.scope - libcontainer container 80c4a990b24b8ee082f1fc296119ce2560cb1d0ed440b4ba4096a523d8982363. Aug 13 00:46:27.128524 systemd[1]: Started cri-containerd-beaf57e3142c477738d1c263985889bf8462ac539a4dc39eb88dc56f9d88248d.scope - libcontainer container beaf57e3142c477738d1c263985889bf8462ac539a4dc39eb88dc56f9d88248d. 
Aug 13 00:46:27.163857 kubelet[2353]: W0813 00:46:27.163757 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.144.89.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.144.89.98:6443: connect: connection refused Aug 13 00:46:27.163857 kubelet[2353]: E0813 00:46:27.163809 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.144.89.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.144.89.98:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:27.201724 kubelet[2353]: W0813 00:46:27.201568 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.144.89.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.144.89.98:6443: connect: connection refused Aug 13 00:46:27.201724 kubelet[2353]: E0813 00:46:27.201673 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.144.89.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.144.89.98:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:27.261252 containerd[1536]: time="2025-08-13T00:46:27.260323302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-a-9a72d3155b,Uid:67b3a3d04a61cbb24ca41e3559372502,Namespace:kube-system,Attempt:0,} returns sandbox id \"80c4a990b24b8ee082f1fc296119ce2560cb1d0ed440b4ba4096a523d8982363\"" Aug 13 00:46:27.262306 containerd[1536]: time="2025-08-13T00:46:27.260455653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-a-9a72d3155b,Uid:6ee3cad89cb86e47b8fa6123a25b4e64,Namespace:kube-system,Attempt:0,} returns sandbox id \"beaf57e3142c477738d1c263985889bf8462ac539a4dc39eb88dc56f9d88248d\"" Aug 13 00:46:27.263328 kubelet[2353]: E0813 00:46:27.263247 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:27.265873 kubelet[2353]: E0813 00:46:27.265644 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:27.269713 containerd[1536]: time="2025-08-13T00:46:27.269666508Z" level=info msg="CreateContainer within sandbox \"80c4a990b24b8ee082f1fc296119ce2560cb1d0ed440b4ba4096a523d8982363\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:46:27.270191 containerd[1536]: time="2025-08-13T00:46:27.269903575Z" level=info msg="CreateContainer within sandbox \"beaf57e3142c477738d1c263985889bf8462ac539a4dc39eb88dc56f9d88248d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:46:27.284870 containerd[1536]: time="2025-08-13T00:46:27.284799643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-a-9a72d3155b,Uid:0fb9f19c6f7fe2d8189b98ea6a1954de,Namespace:kube-system,Attempt:0,} returns sandbox id \"30e083e1db9e28488db981c0499e95514c1635d3e9bd8ec743c56bd3c5185d2c\"" Aug 13 00:46:27.286859 kubelet[2353]: E0813 00:46:27.286743 2353 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:27.288846 containerd[1536]: time="2025-08-13T00:46:27.288761599Z" level=info msg="Container 5d2e59ee1434b2cb373a4bdaef2848fc14adccdcc1f63c4a11bec0d57efd0d4f: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:27.293500 containerd[1536]: time="2025-08-13T00:46:27.291896887Z" level=info msg="Container 9647d0a915e67e11b97364db9627f76714c13fbc516d72eaa7f275f5b2688704: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:27.294278 containerd[1536]: time="2025-08-13T00:46:27.294229412Z" level=info msg="CreateContainer within sandbox \"30e083e1db9e28488db981c0499e95514c1635d3e9bd8ec743c56bd3c5185d2c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:46:27.301812 containerd[1536]: time="2025-08-13T00:46:27.301738474Z" level=info msg="CreateContainer within sandbox \"80c4a990b24b8ee082f1fc296119ce2560cb1d0ed440b4ba4096a523d8982363\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5d2e59ee1434b2cb373a4bdaef2848fc14adccdcc1f63c4a11bec0d57efd0d4f\"" Aug 13 00:46:27.304879 containerd[1536]: time="2025-08-13T00:46:27.304819944Z" level=info msg="CreateContainer within sandbox \"beaf57e3142c477738d1c263985889bf8462ac539a4dc39eb88dc56f9d88248d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9647d0a915e67e11b97364db9627f76714c13fbc516d72eaa7f275f5b2688704\"" Aug 13 00:46:27.306708 containerd[1536]: time="2025-08-13T00:46:27.306482684Z" level=info msg="StartContainer for \"9647d0a915e67e11b97364db9627f76714c13fbc516d72eaa7f275f5b2688704\"" Aug 13 00:46:27.307647 containerd[1536]: time="2025-08-13T00:46:27.307606225Z" level=info msg="StartContainer for \"5d2e59ee1434b2cb373a4bdaef2848fc14adccdcc1f63c4a11bec0d57efd0d4f\"" Aug 13 00:46:27.309273 containerd[1536]: time="2025-08-13T00:46:27.309222600Z" level=info msg="connecting to shim 5d2e59ee1434b2cb373a4bdaef2848fc14adccdcc1f63c4a11bec0d57efd0d4f" address="unix:///run/containerd/s/2bc273d4c76cf1ba5349937b33b51e791ba5a2aa8093da7510a8075dbfacfc43" protocol=ttrpc version=3 Aug 13 00:46:27.310604 containerd[1536]: time="2025-08-13T00:46:27.310549514Z" level=info msg="connecting to shim 9647d0a915e67e11b97364db9627f76714c13fbc516d72eaa7f275f5b2688704" address="unix:///run/containerd/s/db3d0fb597bd4bd24c59aa259f3043bd8330f78cbafb7e1de8079f47fa6e197b" protocol=ttrpc version=3 Aug 13 00:46:27.316586 containerd[1536]: time="2025-08-13T00:46:27.316506520Z" level=info msg="Container 70a48185bedc0611e62031760d05e4295ce74cf9b233e4d666cc69429f6146be: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:27.342400 containerd[1536]: time="2025-08-13T00:46:27.342340101Z" level=info msg="CreateContainer within sandbox \"30e083e1db9e28488db981c0499e95514c1635d3e9bd8ec743c56bd3c5185d2c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70a48185bedc0611e62031760d05e4295ce74cf9b233e4d666cc69429f6146be\"" Aug 13 00:46:27.347874 containerd[1536]: time="2025-08-13T00:46:27.347824279Z" level=info msg="StartContainer for \"70a48185bedc0611e62031760d05e4295ce74cf9b233e4d666cc69429f6146be\"" Aug 13 00:46:27.350436 containerd[1536]: time="2025-08-13T00:46:27.348958550Z" level=info msg="connecting to shim 70a48185bedc0611e62031760d05e4295ce74cf9b233e4d666cc69429f6146be" 
address="unix:///run/containerd/s/26b932ba5d00c6e0ce7403fb59e3e8331dc0e51c8c90869c429d62edef6d51bc" protocol=ttrpc version=3 Aug 13 00:46:27.359962 systemd[1]: Started cri-containerd-5d2e59ee1434b2cb373a4bdaef2848fc14adccdcc1f63c4a11bec0d57efd0d4f.scope - libcontainer container 5d2e59ee1434b2cb373a4bdaef2848fc14adccdcc1f63c4a11bec0d57efd0d4f. Aug 13 00:46:27.387060 systemd[1]: Started cri-containerd-9647d0a915e67e11b97364db9627f76714c13fbc516d72eaa7f275f5b2688704.scope - libcontainer container 9647d0a915e67e11b97364db9627f76714c13fbc516d72eaa7f275f5b2688704. Aug 13 00:46:27.415333 systemd[1]: Started cri-containerd-70a48185bedc0611e62031760d05e4295ce74cf9b233e4d666cc69429f6146be.scope - libcontainer container 70a48185bedc0611e62031760d05e4295ce74cf9b233e4d666cc69429f6146be. Aug 13 00:46:27.499438 containerd[1536]: time="2025-08-13T00:46:27.499272076Z" level=info msg="StartContainer for \"5d2e59ee1434b2cb373a4bdaef2848fc14adccdcc1f63c4a11bec0d57efd0d4f\" returns successfully" Aug 13 00:46:27.523274 containerd[1536]: time="2025-08-13T00:46:27.523111050Z" level=info msg="StartContainer for \"9647d0a915e67e11b97364db9627f76714c13fbc516d72eaa7f275f5b2688704\" returns successfully" Aug 13 00:46:27.558341 containerd[1536]: time="2025-08-13T00:46:27.558258902Z" level=info msg="StartContainer for \"70a48185bedc0611e62031760d05e4295ce74cf9b233e4d666cc69429f6146be\" returns successfully" Aug 13 00:46:27.615127 kubelet[2353]: W0813 00:46:27.615019 2353 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.144.89.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.144.89.98:6443: connect: connection refused Aug 13 00:46:27.615127 kubelet[2353]: E0813 00:46:27.615126 2353 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.144.89.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.144.89.98:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:27.900941 kubelet[2353]: I0813 00:46:27.900774 2353 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:28.364180 kubelet[2353]: E0813 00:46:28.364119 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:28.371346 kubelet[2353]: E0813 00:46:28.371261 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:28.379170 kubelet[2353]: E0813 00:46:28.378503 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:29.386343 kubelet[2353]: E0813 00:46:29.384528 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:29.386343 kubelet[2353]: E0813 00:46:29.384686 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:29.386343 kubelet[2353]: E0813 00:46:29.385153 
2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:29.905678 kubelet[2353]: E0813 00:46:29.905626 2353 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.1.0-a-9a72d3155b\" not found" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:29.932917 kubelet[2353]: I0813 00:46:29.932857 2353 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:29.932917 kubelet[2353]: E0813 00:46:29.932924 2353 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4372.1.0-a-9a72d3155b\": node \"ci-4372.1.0-a-9a72d3155b\" not found" Aug 13 00:46:29.967375 kubelet[2353]: E0813 00:46:29.967269 2353 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-a-9a72d3155b\" not found" Aug 13 00:46:30.068200 kubelet[2353]: E0813 00:46:30.068128 2353 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-a-9a72d3155b\" not found" Aug 13 00:46:30.169566 kubelet[2353]: E0813 00:46:30.169330 2353 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-a-9a72d3155b\" not found" Aug 13 00:46:30.245922 kubelet[2353]: I0813 00:46:30.245853 2353 apiserver.go:52] "Watching apiserver" Aug 13 00:46:30.272617 kubelet[2353]: I0813 00:46:30.272579 2353 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:46:30.393184 kubelet[2353]: E0813 00:46:30.393118 2353 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.1.0-a-9a72d3155b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:30.393694 kubelet[2353]: E0813 00:46:30.393479 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:32.582324 kubelet[2353]: W0813 00:46:32.582219 2353 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:46:32.584469 kubelet[2353]: E0813 00:46:32.584105 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:32.656793 systemd[1]: Reload requested from client PID 2631 ('systemctl') (unit session-9.scope)... Aug 13 00:46:32.656813 systemd[1]: Reloading... Aug 13 00:46:32.866358 zram_generator::config[2674]: No configuration found. Aug 13 00:46:33.046825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:46:33.225217 systemd[1]: Reloading finished in 567 ms. Aug 13 00:46:33.261216 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:46:33.279322 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:46:33.279744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:46:33.279838 systemd[1]: kubelet.service: Consumed 1.615s CPU time, 123.3M memory peak. Aug 13 00:46:33.284747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:46:33.529153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:46:33.544585 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:46:33.649323 kubelet[2725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:46:33.649323 kubelet[2725]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:46:33.649323 kubelet[2725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:46:33.649323 kubelet[2725]: I0813 00:46:33.647537 2725 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:46:33.666796 kubelet[2725]: I0813 00:46:33.666733 2725 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:46:33.666796 kubelet[2725]: I0813 00:46:33.666786 2725 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:46:33.667419 kubelet[2725]: I0813 00:46:33.667271 2725 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:46:33.670479 kubelet[2725]: I0813 00:46:33.670165 2725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:46:33.675546 kubelet[2725]: I0813 00:46:33.674672 2725 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:46:33.687942 kubelet[2725]: I0813 00:46:33.687888 2725 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:46:33.693550 kubelet[2725]: I0813 00:46:33.693417 2725 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:46:33.693856 kubelet[2725]: I0813 00:46:33.693814 2725 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:46:33.694299 kubelet[2725]: I0813 00:46:33.694124 2725 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:46:33.694511 kubelet[2725]: I0813 00:46:33.694166 2725 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-a-9a72d3155b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:46:33.694684 kubelet[2725]: I0813 00:46:33.694667 2725 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:46:33.694753 kubelet[2725]: I0813 00:46:33.694745 2725 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:46:33.694853 kubelet[2725]: I0813 00:46:33.694845 2725 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:46:33.695104 kubelet[2725]: I0813 00:46:33.695063 2725 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:46:33.695104 kubelet[2725]: I0813 00:46:33.695081 2725 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:46:33.696121 kubelet[2725]: I0813 00:46:33.695986 2725 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:46:33.696121 kubelet[2725]: I0813 00:46:33.696016 2725 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:46:33.698873 kubelet[2725]: I0813 00:46:33.698756 2725 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:46:33.702211 kubelet[2725]: I0813 00:46:33.701096 2725 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:46:33.702211 kubelet[2725]: I0813 00:46:33.701871 2725 server.go:1274] "Started kubelet" Aug 13 00:46:33.706525 sudo[2739]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:46:33.707150 
sudo[2739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:46:33.722407 kubelet[2725]: I0813 00:46:33.722158 2725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:46:33.727752 kubelet[2725]: I0813 00:46:33.727673 2725 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:46:33.734035 kubelet[2725]: I0813 00:46:33.733038 2725 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:46:33.734035 kubelet[2725]: I0813 00:46:33.734009 2725 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:46:33.752318 kubelet[2725]: E0813 00:46:33.750224 2725 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-a-9a72d3155b\" not found" Aug 13 00:46:33.756492 kubelet[2725]: I0813 00:46:33.756207 2725 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:46:33.759327 kubelet[2725]: I0813 00:46:33.757937 2725 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:46:33.759327 kubelet[2725]: I0813 00:46:33.758083 2725 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:46:33.761368 kubelet[2725]: I0813 00:46:33.760090 2725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:46:33.761892 kubelet[2725]: I0813 00:46:33.761860 2725 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:46:33.796926 kubelet[2725]: I0813 00:46:33.796796 2725 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:46:33.798334 kubelet[2725]: I0813 00:46:33.798308 2725 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:46:33.798612 kubelet[2725]: I0813 00:46:33.798584 2725 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:46:33.804015 kubelet[2725]: E0813 00:46:33.803938 2725 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:46:33.806577 kubelet[2725]: I0813 00:46:33.806512 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:46:33.811333 kubelet[2725]: I0813 00:46:33.809313 2725 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:46:33.811333 kubelet[2725]: I0813 00:46:33.810148 2725 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:46:33.811333 kubelet[2725]: I0813 00:46:33.810192 2725 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:46:33.811333 kubelet[2725]: E0813 00:46:33.810265 2725 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:46:33.898163 kubelet[2725]: I0813 00:46:33.898131 2725 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:46:33.898365 kubelet[2725]: I0813 00:46:33.898350 2725 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:46:33.898460 kubelet[2725]: I0813 00:46:33.898451 2725 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:46:33.898800 kubelet[2725]: I0813 00:46:33.898778 2725 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:46:33.898895 kubelet[2725]: I0813 00:46:33.898870 2725 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:46:33.898957 kubelet[2725]: I0813 00:46:33.898949 2725 policy_none.go:49] "None policy: Start" Aug 13 00:46:33.900254 kubelet[2725]: I0813 00:46:33.900229 2725 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:46:33.900764 kubelet[2725]: I0813 00:46:33.900449 2725 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:46:33.900764 kubelet[2725]: I0813 00:46:33.900663 2725 state_mem.go:75] "Updated machine memory state" Aug 13 00:46:33.911753 kubelet[2725]: E0813 00:46:33.910637 2725 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:46:33.913069 kubelet[2725]: I0813 00:46:33.912602 2725 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:46:33.913069 kubelet[2725]: I0813 00:46:33.912827 2725 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:46:33.913069 kubelet[2725]: I0813 00:46:33.912841 2725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:46:33.915124 kubelet[2725]: I0813 00:46:33.915098 2725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:46:34.033309 kubelet[2725]: I0813 00:46:34.033216 2725 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.054506 kubelet[2725]: I0813 00:46:34.054340 2725 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.054506 kubelet[2725]: I0813 00:46:34.054456 2725 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.128506 kubelet[2725]: W0813 00:46:34.128049 2725 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:46:34.134011 kubelet[2725]: W0813 00:46:34.133727 2725 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:46:34.134011 kubelet[2725]: W0813 00:46:34.133769 2725 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:46:34.134011 kubelet[2725]: E0813 
00:46:34.133890 2725 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4372.1.0-a-9a72d3155b\" already exists" pod="kube-system/kube-scheduler-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.260026 kubelet[2725]: I0813 00:46:34.259976 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ee3cad89cb86e47b8fa6123a25b4e64-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-a-9a72d3155b\" (UID: \"6ee3cad89cb86e47b8fa6123a25b4e64\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.260217 kubelet[2725]: I0813 00:46:34.260054 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67b3a3d04a61cbb24ca41e3559372502-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-a-9a72d3155b\" (UID: \"67b3a3d04a61cbb24ca41e3559372502\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.260217 kubelet[2725]: I0813 00:46:34.260096 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ee3cad89cb86e47b8fa6123a25b4e64-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-a-9a72d3155b\" (UID: \"6ee3cad89cb86e47b8fa6123a25b4e64\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.260217 kubelet[2725]: I0813 00:46:34.260154 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ee3cad89cb86e47b8fa6123a25b4e64-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-a-9a72d3155b\" (UID: \"6ee3cad89cb86e47b8fa6123a25b4e64\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.260217 kubelet[2725]: I0813 00:46:34.260208 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ee3cad89cb86e47b8fa6123a25b4e64-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-a-9a72d3155b\" (UID: \"6ee3cad89cb86e47b8fa6123a25b4e64\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.261414 kubelet[2725]: I0813 00:46:34.260235 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ee3cad89cb86e47b8fa6123a25b4e64-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-a-9a72d3155b\" (UID: \"6ee3cad89cb86e47b8fa6123a25b4e64\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.261414 kubelet[2725]: I0813 00:46:34.260882 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fb9f19c6f7fe2d8189b98ea6a1954de-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-a-9a72d3155b\" (UID: \"0fb9f19c6f7fe2d8189b98ea6a1954de\") " pod="kube-system/kube-scheduler-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.261414 kubelet[2725]: I0813 00:46:34.261046 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67b3a3d04a61cbb24ca41e3559372502-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-a-9a72d3155b\" (UID: 
\"67b3a3d04a61cbb24ca41e3559372502\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.261414 kubelet[2725]: I0813 00:46:34.261074 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67b3a3d04a61cbb24ca41e3559372502-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-a-9a72d3155b\" (UID: \"67b3a3d04a61cbb24ca41e3559372502\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.430482 kubelet[2725]: E0813 00:46:34.430337 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:34.435134 kubelet[2725]: E0813 00:46:34.435085 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:34.435670 kubelet[2725]: E0813 00:46:34.435640 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:34.576377 sudo[2739]: pam_unix(sudo:session): session closed for user root Aug 13 00:46:34.698976 kubelet[2725]: I0813 00:46:34.698493 2725 apiserver.go:52] "Watching apiserver" Aug 13 00:46:34.759643 kubelet[2725]: I0813 00:46:34.758762 2725 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:46:34.860117 kubelet[2725]: E0813 00:46:34.859170 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:34.861775 kubelet[2725]: E0813 00:46:34.860970 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:34.876061 kubelet[2725]: W0813 00:46:34.875871 2725 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:46:34.876625 kubelet[2725]: E0813 00:46:34.876547 2725 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.1.0-a-9a72d3155b\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-a-9a72d3155b" Aug 13 00:46:34.877562 kubelet[2725]: E0813 00:46:34.877406 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:34.906326 kubelet[2725]: I0813 00:46:34.905753 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-a-9a72d3155b" podStartSLOduration=0.905723644 podStartE2EDuration="905.723644ms" podCreationTimestamp="2025-08-13 00:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:46:34.904901697 +0000 UTC m=+1.347537074" watchObservedRunningTime="2025-08-13 00:46:34.905723644 +0000 UTC m=+1.348359007" Aug 13 00:46:34.945778 kubelet[2725]: I0813 00:46:34.945616 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4372.1.0-a-9a72d3155b" podStartSLOduration=2.945569038 podStartE2EDuration="2.945569038s" podCreationTimestamp="2025-08-13 00:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:46:34.932032293 +0000 UTC m=+1.374667675" watchObservedRunningTime="2025-08-13 00:46:34.945569038 +0000 UTC m=+1.388204408" Aug 13 00:46:35.865560 kubelet[2725]: E0813 00:46:35.864986 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:36.229993 sudo[1806]: pam_unix(sudo:session): session closed for user root Aug 13 00:46:36.234059 sshd[1805]: Connection closed by 139.178.68.195 port 55756 Aug 13 00:46:36.235616 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:36.241824 systemd[1]: sshd@8-24.144.89.98:22-139.178.68.195:55756.service: Deactivated successfully. Aug 13 00:46:36.247726 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:46:36.248540 systemd[1]: session-9.scope: Consumed 5.442s CPU time, 221.4M memory peak. Aug 13 00:46:36.251472 systemd-logind[1500]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:46:36.254378 systemd-logind[1500]: Removed session 9. Aug 13 00:46:36.870388 kubelet[2725]: E0813 00:46:36.868767 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:37.605455 update_engine[1502]: I20250813 00:46:37.605162 1502 update_attempter.cc:509] Updating boot flags... Aug 13 00:46:37.939134 kubelet[2725]: E0813 00:46:37.937486 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:38.015446 kubelet[2725]: I0813 00:46:38.013927 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-a-9a72d3155b" podStartSLOduration=4.013906462 podStartE2EDuration="4.013906462s" podCreationTimestamp="2025-08-13 00:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:46:34.94709469 +0000 UTC m=+1.389730076" watchObservedRunningTime="2025-08-13 00:46:38.013906462 +0000 UTC m=+4.456541841" Aug 13 00:46:38.874350 kubelet[2725]: E0813 00:46:38.874238 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:38.913523 kubelet[2725]: I0813 00:46:38.913450 2725 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:46:38.914142 containerd[1536]: time="2025-08-13T00:46:38.914100627Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:46:38.915454 kubelet[2725]: I0813 00:46:38.914986 2725 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:46:39.221629 kubelet[2725]: E0813 00:46:39.221580 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:39.651661 systemd[1]: Created slice kubepods-besteffort-podf396edca_aea7_472c_ae91_112432d1d83b.slice - libcontainer container kubepods-besteffort-podf396edca_aea7_472c_ae91_112432d1d83b.slice. Aug 13 00:46:39.671708 systemd[1]: Created slice kubepods-burstable-pod3b65b404_6d4f_41e4_9eae_a52e111be624.slice - libcontainer container kubepods-burstable-pod3b65b404_6d4f_41e4_9eae_a52e111be624.slice. Aug 13 00:46:39.804371 kubelet[2725]: I0813 00:46:39.803043 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-hostproc\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.804371 kubelet[2725]: I0813 00:46:39.803093 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b65b404-6d4f-41e4-9eae-a52e111be624-hubble-tls\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.804371 kubelet[2725]: I0813 00:46:39.803115 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cni-path\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.804371 kubelet[2725]: I0813 00:46:39.803132 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-lib-modules\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.804371 kubelet[2725]: I0813 00:46:39.803200 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-config-path\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.804371 kubelet[2725]: I0813 00:46:39.803221 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkd77\" (UniqueName: \"kubernetes.io/projected/f396edca-aea7-472c-ae91-112432d1d83b-kube-api-access-zkd77\") pod \"kube-proxy-g97xv\" (UID: \"f396edca-aea7-472c-ae91-112432d1d83b\") " pod="kube-system/kube-proxy-g97xv" Aug 13 00:46:39.804729 kubelet[2725]: I0813 00:46:39.803281 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-bpf-maps\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.804729 kubelet[2725]: I0813 00:46:39.803467 2725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-cgroup\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.804729 kubelet[2725]: I0813 00:46:39.803487 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b65b404-6d4f-41e4-9eae-a52e111be624-clustermesh-secrets\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.804729 kubelet[2725]: I0813 00:46:39.803508 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-host-proc-sys-net\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.804729 kubelet[2725]: I0813 00:46:39.803529 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f396edca-aea7-472c-ae91-112432d1d83b-xtables-lock\") pod \"kube-proxy-g97xv\" (UID: \"f396edca-aea7-472c-ae91-112432d1d83b\") " pod="kube-system/kube-proxy-g97xv" Aug 13 00:46:39.804729 kubelet[2725]: I0813 00:46:39.803570 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f396edca-aea7-472c-ae91-112432d1d83b-kube-proxy\") pod \"kube-proxy-g97xv\" (UID: \"f396edca-aea7-472c-ae91-112432d1d83b\") " pod="kube-system/kube-proxy-g97xv" Aug 13 00:46:39.807096 kubelet[2725]: I0813 00:46:39.803587 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-xtables-lock\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.807096 kubelet[2725]: I0813 00:46:39.803625 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hkc\" (UniqueName: \"kubernetes.io/projected/3b65b404-6d4f-41e4-9eae-a52e111be624-kube-api-access-v7hkc\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.807096 kubelet[2725]: I0813 00:46:39.803657 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-run\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.807096 kubelet[2725]: I0813 00:46:39.803683 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-etc-cni-netd\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.807096 kubelet[2725]: I0813 00:46:39.803710 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-host-proc-sys-kernel\") pod \"cilium-h2l55\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " pod="kube-system/cilium-h2l55" Aug 13 00:46:39.807096 kubelet[2725]: I0813 00:46:39.803736 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f396edca-aea7-472c-ae91-112432d1d83b-lib-modules\") pod \"kube-proxy-g97xv\" (UID: \"f396edca-aea7-472c-ae91-112432d1d83b\") " pod="kube-system/kube-proxy-g97xv" Aug 13 00:46:39.877690 kubelet[2725]: E0813 00:46:39.877597 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:39.984811 kubelet[2725]: E0813 00:46:39.984731 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:39.989837 containerd[1536]: time="2025-08-13T00:46:39.985569590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h2l55,Uid:3b65b404-6d4f-41e4-9eae-a52e111be624,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:40.005335 kubelet[2725]: I0813 00:46:40.005255 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qs9z\" (UniqueName: \"kubernetes.io/projected/33134c88-db86-41a5-80f3-f6590ae0e405-kube-api-access-8qs9z\") pod \"cilium-operator-5d85765b45-fswsm\" (UID: \"33134c88-db86-41a5-80f3-f6590ae0e405\") " pod="kube-system/cilium-operator-5d85765b45-fswsm" Aug 13 00:46:40.005647 kubelet[2725]: I0813 00:46:40.005611 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33134c88-db86-41a5-80f3-f6590ae0e405-cilium-config-path\") pod \"cilium-operator-5d85765b45-fswsm\" (UID: \"33134c88-db86-41a5-80f3-f6590ae0e405\") " pod="kube-system/cilium-operator-5d85765b45-fswsm" Aug 13 00:46:40.006378 systemd[1]: Created slice kubepods-besteffort-pod33134c88_db86_41a5_80f3_f6590ae0e405.slice - libcontainer container kubepods-besteffort-pod33134c88_db86_41a5_80f3_f6590ae0e405.slice. Aug 13 00:46:40.029497 containerd[1536]: time="2025-08-13T00:46:40.029271876Z" level=info msg="connecting to shim 38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55" address="unix:///run/containerd/s/4880e09caa2ccd87e33d556fc4e6f351e0bebc03ba5bfea3119da0fdcf62daf5" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:40.063630 systemd[1]: Started cri-containerd-38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55.scope - libcontainer container 38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55. 
Aug 13 00:46:40.114756 containerd[1536]: time="2025-08-13T00:46:40.114690132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h2l55,Uid:3b65b404-6d4f-41e4-9eae-a52e111be624,Namespace:kube-system,Attempt:0,} returns sandbox id \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\"" Aug 13 00:46:40.117522 kubelet[2725]: E0813 00:46:40.117484 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:40.121247 containerd[1536]: time="2025-08-13T00:46:40.121147791Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:46:40.124921 systemd-resolved[1407]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Aug 13 00:46:40.265260 kubelet[2725]: E0813 00:46:40.264543 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:40.266347 containerd[1536]: time="2025-08-13T00:46:40.266017541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g97xv,Uid:f396edca-aea7-472c-ae91-112432d1d83b,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:40.291135 containerd[1536]: time="2025-08-13T00:46:40.290706286Z" level=info msg="connecting to shim 900c6b345863d0a1f4cb2a17d4622e8ec60f84154173929759e20adb524212dd" address="unix:///run/containerd/s/888170847bed227603b0bdf48b6c2cc5defb7c9ca28f1629ad36e2f004212e9b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:40.314786 kubelet[2725]: E0813 00:46:40.314752 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:40.317762 containerd[1536]: time="2025-08-13T00:46:40.317718830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fswsm,Uid:33134c88-db86-41a5-80f3-f6590ae0e405,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:40.321644 systemd[1]: Started cri-containerd-900c6b345863d0a1f4cb2a17d4622e8ec60f84154173929759e20adb524212dd.scope - libcontainer container 900c6b345863d0a1f4cb2a17d4622e8ec60f84154173929759e20adb524212dd. 
Aug 13 00:46:40.352812 containerd[1536]: time="2025-08-13T00:46:40.350706744Z" level=info msg="connecting to shim ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b" address="unix:///run/containerd/s/0f8ac4ffcc2c27eb818e16ec86a31bec7d67372563c602c2232ea6e3c84a5aca" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:40.383070 containerd[1536]: time="2025-08-13T00:46:40.382358336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g97xv,Uid:f396edca-aea7-472c-ae91-112432d1d83b,Namespace:kube-system,Attempt:0,} returns sandbox id \"900c6b345863d0a1f4cb2a17d4622e8ec60f84154173929759e20adb524212dd\"" Aug 13 00:46:40.385813 kubelet[2725]: E0813 00:46:40.385740 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:40.393101 containerd[1536]: time="2025-08-13T00:46:40.393036632Z" level=info msg="CreateContainer within sandbox \"900c6b345863d0a1f4cb2a17d4622e8ec60f84154173929759e20adb524212dd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:46:40.415511 containerd[1536]: time="2025-08-13T00:46:40.415456751Z" level=info msg="Container c7f6906512355932eb8cf66b89aaa1ff010437a2186d2162abc9d6ddd1271e46: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:40.415589 systemd[1]: Started cri-containerd-ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b.scope - libcontainer container ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b. Aug 13 00:46:40.431026 containerd[1536]: time="2025-08-13T00:46:40.430968717Z" level=info msg="CreateContainer within sandbox \"900c6b345863d0a1f4cb2a17d4622e8ec60f84154173929759e20adb524212dd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7f6906512355932eb8cf66b89aaa1ff010437a2186d2162abc9d6ddd1271e46\"" Aug 13 00:46:40.432093 containerd[1536]: time="2025-08-13T00:46:40.432048758Z" level=info msg="StartContainer for \"c7f6906512355932eb8cf66b89aaa1ff010437a2186d2162abc9d6ddd1271e46\"" Aug 13 00:46:40.439636 containerd[1536]: time="2025-08-13T00:46:40.439580470Z" level=info msg="connecting to shim c7f6906512355932eb8cf66b89aaa1ff010437a2186d2162abc9d6ddd1271e46" address="unix:///run/containerd/s/888170847bed227603b0bdf48b6c2cc5defb7c9ca28f1629ad36e2f004212e9b" protocol=ttrpc version=3 Aug 13 00:46:40.479190 systemd[1]: Started cri-containerd-c7f6906512355932eb8cf66b89aaa1ff010437a2186d2162abc9d6ddd1271e46.scope - libcontainer container c7f6906512355932eb8cf66b89aaa1ff010437a2186d2162abc9d6ddd1271e46. 
Aug 13 00:46:40.524258 containerd[1536]: time="2025-08-13T00:46:40.524122541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fswsm,Uid:33134c88-db86-41a5-80f3-f6590ae0e405,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b\"" Aug 13 00:46:40.527780 kubelet[2725]: E0813 00:46:40.527734 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:40.571782 containerd[1536]: time="2025-08-13T00:46:40.571636841Z" level=info msg="StartContainer for \"c7f6906512355932eb8cf66b89aaa1ff010437a2186d2162abc9d6ddd1271e46\" returns successfully" Aug 13 00:46:40.883855 kubelet[2725]: E0813 00:46:40.883692 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:45.608424 kubelet[2725]: E0813 00:46:45.608349 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:45.634984 kubelet[2725]: I0813 00:46:45.634829 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g97xv" podStartSLOduration=6.63478727 podStartE2EDuration="6.63478727s" podCreationTimestamp="2025-08-13 00:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:46:40.907655124 +0000 UTC m=+7.350290488" watchObservedRunningTime="2025-08-13 00:46:45.63478727 +0000 UTC m=+12.077422643" Aug 13 00:46:45.907475 kubelet[2725]: E0813 00:46:45.907347 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:50.994020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount920174103.mount: Deactivated successfully. 
Aug 13 00:46:53.888348 containerd[1536]: time="2025-08-13T00:46:53.866840748Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:53.889275 containerd[1536]: time="2025-08-13T00:46:53.867330001Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 00:46:53.889275 containerd[1536]: time="2025-08-13T00:46:53.871718893Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.750520882s" Aug 13 00:46:53.889275 containerd[1536]: time="2025-08-13T00:46:53.889134479Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:46:53.889951 containerd[1536]: time="2025-08-13T00:46:53.889916214Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:53.893512 containerd[1536]: time="2025-08-13T00:46:53.892950834Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:46:53.895003 containerd[1536]: time="2025-08-13T00:46:53.894745359Z" level=info msg="CreateContainer within sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:46:53.952329 containerd[1536]: time="2025-08-13T00:46:53.950182504Z" level=info msg="Container 1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:54.037221 containerd[1536]: time="2025-08-13T00:46:54.037065329Z" level=info msg="CreateContainer within sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14\"" Aug 13 00:46:54.037931 containerd[1536]: time="2025-08-13T00:46:54.037899909Z" level=info msg="StartContainer for \"1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14\"" Aug 13 00:46:54.039556 containerd[1536]: time="2025-08-13T00:46:54.039494552Z" level=info msg="connecting to shim 1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14" address="unix:///run/containerd/s/4880e09caa2ccd87e33d556fc4e6f351e0bebc03ba5bfea3119da0fdcf62daf5" protocol=ttrpc version=3 Aug 13 00:46:54.076079 systemd[1]: Started cri-containerd-1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14.scope - libcontainer container 1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14. 
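
For a rough sense of scale, the two figures containerd reports for the cilium image pull above (166730503 bytes read, pulled in 13.750520882s) work out to roughly 11-12 MiB/s on this droplet; a back-of-envelope check, using only the numbers copied from those messages:

#!/usr/bin/env python3
"""Rough effective throughput of the cilium image pull reported above."""
BYTES_READ = 166_730_503          # "bytes read=166730503"
PULL_SECONDS = 13.750520882       # "... in 13.750520882s"

mib_per_s = BYTES_READ / PULL_SECONDS / (1024 * 1024)
print(f"~{mib_per_s:.1f} MiB/s average over the pull")   # ~11.6 MiB/s
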
Aug 13 00:46:54.168698 containerd[1536]: time="2025-08-13T00:46:54.168504576Z" level=info msg="StartContainer for \"1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14\" returns successfully" Aug 13 00:46:54.171112 systemd[1]: cri-containerd-1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14.scope: Deactivated successfully. Aug 13 00:46:54.241214 containerd[1536]: time="2025-08-13T00:46:54.241131476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14\" id:\"1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14\" pid:3153 exited_at:{seconds:1755046014 nanos:174541175}" Aug 13 00:46:54.272185 containerd[1536]: time="2025-08-13T00:46:54.272101235Z" level=info msg="received exit event container_id:\"1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14\" id:\"1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14\" pid:3153 exited_at:{seconds:1755046014 nanos:174541175}" Aug 13 00:46:54.312150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14-rootfs.mount: Deactivated successfully. Aug 13 00:46:54.941503 kubelet[2725]: E0813 00:46:54.941451 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:54.951974 containerd[1536]: time="2025-08-13T00:46:54.951912621Z" level=info msg="CreateContainer within sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:46:54.975476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596800084.mount: Deactivated successfully. Aug 13 00:46:54.981721 containerd[1536]: time="2025-08-13T00:46:54.980727169Z" level=info msg="Container bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:54.996653 containerd[1536]: time="2025-08-13T00:46:54.996580279Z" level=info msg="CreateContainer within sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11\"" Aug 13 00:46:54.998624 containerd[1536]: time="2025-08-13T00:46:54.998531199Z" level=info msg="StartContainer for \"bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11\"" Aug 13 00:46:55.002533 containerd[1536]: time="2025-08-13T00:46:55.002476908Z" level=info msg="connecting to shim bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11" address="unix:///run/containerd/s/4880e09caa2ccd87e33d556fc4e6f351e0bebc03ba5bfea3119da0fdcf62daf5" protocol=ttrpc version=3 Aug 13 00:46:55.035579 systemd[1]: Started cri-containerd-bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11.scope - libcontainer container bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11. Aug 13 00:46:55.107452 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:46:55.108174 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:46:55.109148 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:46:55.112058 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
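
containerd's TaskExit events above give the container exit time as {seconds, nanos} since the Unix epoch, while the journal prefixes wall-clock timestamps; converting one of them shows the two line up:

#!/usr/bin/env python3
"""Convert the exited_at epoch from the TaskExit events above to a UTC time."""
from datetime import datetime, timezone

EXITED_AT_SECONDS = 1755046014   # from exited_at:{seconds:1755046014 ...}
EXITED_AT_NANOS = 174541175

t = datetime.fromtimestamp(EXITED_AT_SECONDS + EXITED_AT_NANOS / 1e9, tz=timezone.utc)
print(t.isoformat())   # 2025-08-13T00:46:54.174541+00:00, matching the Aug 13 00:46:54 entries
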
Aug 13 00:46:55.116810 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:46:55.123342 systemd[1]: cri-containerd-bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11.scope: Deactivated successfully. Aug 13 00:46:55.164197 containerd[1536]: time="2025-08-13T00:46:55.164133682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11\" id:\"bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11\" pid:3198 exited_at:{seconds:1755046015 nanos:124663474}" Aug 13 00:46:55.169164 containerd[1536]: time="2025-08-13T00:46:55.169087856Z" level=info msg="received exit event container_id:\"bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11\" id:\"bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11\" pid:3198 exited_at:{seconds:1755046015 nanos:124663474}" Aug 13 00:46:55.171583 containerd[1536]: time="2025-08-13T00:46:55.171447830Z" level=info msg="StartContainer for \"bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11\" returns successfully" Aug 13 00:46:55.173845 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:46:55.968444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11-rootfs.mount: Deactivated successfully. Aug 13 00:46:55.981923 kubelet[2725]: E0813 00:46:55.981853 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:55.987706 containerd[1536]: time="2025-08-13T00:46:55.987651703Z" level=info msg="CreateContainer within sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:46:56.047051 containerd[1536]: time="2025-08-13T00:46:56.044256674Z" level=info msg="Container 19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:56.066851 containerd[1536]: time="2025-08-13T00:46:56.066781960Z" level=info msg="CreateContainer within sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51\"" Aug 13 00:46:56.072340 containerd[1536]: time="2025-08-13T00:46:56.070840878Z" level=info msg="StartContainer for \"19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51\"" Aug 13 00:46:56.081918 containerd[1536]: time="2025-08-13T00:46:56.081855641Z" level=info msg="connecting to shim 19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51" address="unix:///run/containerd/s/4880e09caa2ccd87e33d556fc4e6f351e0bebc03ba5bfea3119da0fdcf62daf5" protocol=ttrpc version=3 Aug 13 00:46:56.150106 systemd[1]: Started cri-containerd-19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51.scope - libcontainer container 19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51. Aug 13 00:46:56.249275 containerd[1536]: time="2025-08-13T00:46:56.247785580Z" level=info msg="StartContainer for \"19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51\" returns successfully" Aug 13 00:46:56.248146 systemd[1]: cri-containerd-19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51.scope: Deactivated successfully. 
Aug 13 00:46:56.257218 containerd[1536]: time="2025-08-13T00:46:56.257174304Z" level=info msg="received exit event container_id:\"19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51\" id:\"19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51\" pid:3257 exited_at:{seconds:1755046016 nanos:256677628}" Aug 13 00:46:56.258515 containerd[1536]: time="2025-08-13T00:46:56.258463895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51\" id:\"19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51\" pid:3257 exited_at:{seconds:1755046016 nanos:256677628}" Aug 13 00:46:56.307527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51-rootfs.mount: Deactivated successfully. Aug 13 00:46:56.534183 containerd[1536]: time="2025-08-13T00:46:56.533924957Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:56.535466 containerd[1536]: time="2025-08-13T00:46:56.535419418Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 00:46:56.536135 containerd[1536]: time="2025-08-13T00:46:56.536095877Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:56.538757 containerd[1536]: time="2025-08-13T00:46:56.538502334Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.645404939s" Aug 13 00:46:56.538757 containerd[1536]: time="2025-08-13T00:46:56.538557022Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:46:56.541207 containerd[1536]: time="2025-08-13T00:46:56.541126947Z" level=info msg="CreateContainer within sandbox \"ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:46:56.548459 containerd[1536]: time="2025-08-13T00:46:56.548406221Z" level=info msg="Container 76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:56.554877 containerd[1536]: time="2025-08-13T00:46:56.554784150Z" level=info msg="CreateContainer within sandbox \"ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\"" Aug 13 00:46:56.555612 containerd[1536]: time="2025-08-13T00:46:56.555580245Z" level=info msg="StartContainer for \"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\"" Aug 13 00:46:56.557519 containerd[1536]: time="2025-08-13T00:46:56.557474618Z" 
level=info msg="connecting to shim 76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86" address="unix:///run/containerd/s/0f8ac4ffcc2c27eb818e16ec86a31bec7d67372563c602c2232ea6e3c84a5aca" protocol=ttrpc version=3 Aug 13 00:46:56.583630 systemd[1]: Started cri-containerd-76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86.scope - libcontainer container 76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86. Aug 13 00:46:56.634880 containerd[1536]: time="2025-08-13T00:46:56.634805042Z" level=info msg="StartContainer for \"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\" returns successfully" Aug 13 00:46:56.994631 kubelet[2725]: E0813 00:46:56.994559 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:57.010031 kubelet[2725]: E0813 00:46:57.009980 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:57.018258 containerd[1536]: time="2025-08-13T00:46:57.016828509Z" level=info msg="CreateContainer within sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:46:57.034592 containerd[1536]: time="2025-08-13T00:46:57.034472713Z" level=info msg="Container 232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:57.045530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1823735393.mount: Deactivated successfully. Aug 13 00:46:57.049461 containerd[1536]: time="2025-08-13T00:46:57.049403112Z" level=info msg="CreateContainer within sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5\"" Aug 13 00:46:57.051310 containerd[1536]: time="2025-08-13T00:46:57.050943620Z" level=info msg="StartContainer for \"232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5\"" Aug 13 00:46:57.054282 containerd[1536]: time="2025-08-13T00:46:57.054231538Z" level=info msg="connecting to shim 232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5" address="unix:///run/containerd/s/4880e09caa2ccd87e33d556fc4e6f351e0bebc03ba5bfea3119da0fdcf62daf5" protocol=ttrpc version=3 Aug 13 00:46:57.113463 systemd[1]: Started cri-containerd-232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5.scope - libcontainer container 232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5. 
Aug 13 00:46:57.172148 kubelet[2725]: I0813 00:46:57.171820 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-fswsm" podStartSLOduration=2.16108172 podStartE2EDuration="18.171418935s" podCreationTimestamp="2025-08-13 00:46:39 +0000 UTC" firstStartedPulling="2025-08-13 00:46:40.529278112 +0000 UTC m=+6.971913461" lastFinishedPulling="2025-08-13 00:46:56.539615319 +0000 UTC m=+22.982250676" observedRunningTime="2025-08-13 00:46:57.062267475 +0000 UTC m=+23.504902847" watchObservedRunningTime="2025-08-13 00:46:57.171418935 +0000 UTC m=+23.614054311" Aug 13 00:46:57.224903 systemd[1]: cri-containerd-232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5.scope: Deactivated successfully. Aug 13 00:46:57.228800 containerd[1536]: time="2025-08-13T00:46:57.228495732Z" level=info msg="received exit event container_id:\"232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5\" id:\"232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5\" pid:3336 exited_at:{seconds:1755046017 nanos:226752133}" Aug 13 00:46:57.231397 containerd[1536]: time="2025-08-13T00:46:57.231205420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5\" id:\"232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5\" pid:3336 exited_at:{seconds:1755046017 nanos:226752133}" Aug 13 00:46:57.232735 containerd[1536]: time="2025-08-13T00:46:57.232439023Z" level=info msg="StartContainer for \"232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5\" returns successfully" Aug 13 00:46:57.282890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5-rootfs.mount: Deactivated successfully. 
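
The pod_startup_latency_tracker entry above reports both an SLO duration and an end-to-end duration for cilium-operator: the SLO figure is the end-to-end startup time with the image-pull window subtracted, which the timestamps in the same entry bear out. A short check using only the numbers copied from that log line:

#!/usr/bin/env python3
"""Reproduce cilium-operator's podStartSLOduration from the figures above."""
# Both pull timestamps fall in minute 00:46, so seconds-within-minute suffice here.
first_started_pulling_s = 40.529278112    # 2025-08-13 00:46:40.529278112
last_finished_pulling_s = 56.539615319    # 2025-08-13 00:46:56.539615319
e2e_seconds = 18.171418935                # podStartE2EDuration

pull_window = last_finished_pulling_s - first_started_pulling_s
print(f"pull window  ~= {pull_window:.6f}s")                  # ~16.010337s
print(f"SLO duration ~= {e2e_seconds - pull_window:.6f}s")    # ~2.161082s, vs logged 2.16108172
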
Aug 13 00:46:58.016966 kubelet[2725]: E0813 00:46:58.016899 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:58.019268 kubelet[2725]: E0813 00:46:58.018386 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:58.023712 containerd[1536]: time="2025-08-13T00:46:58.023622297Z" level=info msg="CreateContainer within sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:46:58.048569 containerd[1536]: time="2025-08-13T00:46:58.045140970Z" level=info msg="Container 99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:58.059096 containerd[1536]: time="2025-08-13T00:46:58.059044103Z" level=info msg="CreateContainer within sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\"" Aug 13 00:46:58.063201 containerd[1536]: time="2025-08-13T00:46:58.062689999Z" level=info msg="StartContainer for \"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\"" Aug 13 00:46:58.066270 containerd[1536]: time="2025-08-13T00:46:58.066157288Z" level=info msg="connecting to shim 99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e" address="unix:///run/containerd/s/4880e09caa2ccd87e33d556fc4e6f351e0bebc03ba5bfea3119da0fdcf62daf5" protocol=ttrpc version=3 Aug 13 00:46:58.104610 systemd[1]: Started cri-containerd-99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e.scope - libcontainer container 99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e. Aug 13 00:46:58.172747 containerd[1536]: time="2025-08-13T00:46:58.172663784Z" level=info msg="StartContainer for \"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" returns successfully" Aug 13 00:46:58.337221 containerd[1536]: time="2025-08-13T00:46:58.335186936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" id:\"6bd36438654b645519d60093f0d3b61a1b71c25b69035de315123806e15032c1\" pid:3404 exited_at:{seconds:1755046018 nanos:334235380}" Aug 13 00:46:58.391103 kubelet[2725]: I0813 00:46:58.388746 2725 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:46:58.480599 systemd[1]: Created slice kubepods-burstable-pod0e056e7f_4dd7_4eb2_9e1e_07586251d83f.slice - libcontainer container kubepods-burstable-pod0e056e7f_4dd7_4eb2_9e1e_07586251d83f.slice. Aug 13 00:46:58.519316 systemd[1]: Created slice kubepods-burstable-podb465a816_7ff4_41a7_b20e_5acea5d9da5f.slice - libcontainer container kubepods-burstable-podb465a816_7ff4_41a7_b20e_5acea5d9da5f.slice. 
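
The kubepods-*.slice names systemd reports above are derived mechanically from the pod's QoS class and UID. A one-liner reproducing the pattern seen in these "Created slice" messages; the scheme is inferred from the burstable and besteffort entries in this journal, and other QoS classes are assumed to follow it:

#!/usr/bin/env python3
"""Reproduce the kubepods slice names from the 'Created slice' messages above.

Inferred pattern: "kubepods-<qos>-pod<UID with dashes replaced by underscores>.slice".
"""
def pod_slice(uid: str, qos: str) -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("0e056e7f-4dd7-4eb2-9e1e-07586251d83f", "burstable"))
# kubepods-burstable-pod0e056e7f_4dd7_4eb2_9e1e_07586251d83f.slice  (coredns-7c65d6cfc9-rwdzk)
print(pod_slice("f396edca-aea7-472c-ae91-112432d1d83b", "besteffort"))
# kubepods-besteffort-podf396edca_aea7_472c_ae91_112432d1d83b.slice (kube-proxy-g97xv)
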
Aug 13 00:46:58.571085 kubelet[2725]: I0813 00:46:58.570973 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b465a816-7ff4-41a7-b20e-5acea5d9da5f-config-volume\") pod \"coredns-7c65d6cfc9-fqdcf\" (UID: \"b465a816-7ff4-41a7-b20e-5acea5d9da5f\") " pod="kube-system/coredns-7c65d6cfc9-fqdcf" Aug 13 00:46:58.571085 kubelet[2725]: I0813 00:46:58.571035 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmgkz\" (UniqueName: \"kubernetes.io/projected/b465a816-7ff4-41a7-b20e-5acea5d9da5f-kube-api-access-dmgkz\") pod \"coredns-7c65d6cfc9-fqdcf\" (UID: \"b465a816-7ff4-41a7-b20e-5acea5d9da5f\") " pod="kube-system/coredns-7c65d6cfc9-fqdcf" Aug 13 00:46:58.571571 kubelet[2725]: I0813 00:46:58.571126 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thptl\" (UniqueName: \"kubernetes.io/projected/0e056e7f-4dd7-4eb2-9e1e-07586251d83f-kube-api-access-thptl\") pod \"coredns-7c65d6cfc9-rwdzk\" (UID: \"0e056e7f-4dd7-4eb2-9e1e-07586251d83f\") " pod="kube-system/coredns-7c65d6cfc9-rwdzk" Aug 13 00:46:58.571571 kubelet[2725]: I0813 00:46:58.571158 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e056e7f-4dd7-4eb2-9e1e-07586251d83f-config-volume\") pod \"coredns-7c65d6cfc9-rwdzk\" (UID: \"0e056e7f-4dd7-4eb2-9e1e-07586251d83f\") " pod="kube-system/coredns-7c65d6cfc9-rwdzk" Aug 13 00:46:58.804053 kubelet[2725]: E0813 00:46:58.803701 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:58.805311 containerd[1536]: time="2025-08-13T00:46:58.804808298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rwdzk,Uid:0e056e7f-4dd7-4eb2-9e1e-07586251d83f,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:58.833863 kubelet[2725]: E0813 00:46:58.833804 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:58.835103 containerd[1536]: time="2025-08-13T00:46:58.835039913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqdcf,Uid:b465a816-7ff4-41a7-b20e-5acea5d9da5f,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:59.028906 kubelet[2725]: E0813 00:46:59.028869 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:59.098405 kubelet[2725]: I0813 00:46:59.098240 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h2l55" podStartSLOduration=6.325810051 podStartE2EDuration="20.098212978s" podCreationTimestamp="2025-08-13 00:46:39 +0000 UTC" firstStartedPulling="2025-08-13 00:46:40.119166362 +0000 UTC m=+6.561801713" lastFinishedPulling="2025-08-13 00:46:53.891569287 +0000 UTC m=+20.334204640" observedRunningTime="2025-08-13 00:46:59.097651591 +0000 UTC m=+25.540286961" watchObservedRunningTime="2025-08-13 00:46:59.098212978 +0000 UTC m=+25.540848350" Aug 13 00:47:00.031746 kubelet[2725]: E0813 00:47:00.031612 2725 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:00.949616 systemd-networkd[1457]: cilium_host: Link UP Aug 13 00:47:00.951721 systemd-networkd[1457]: cilium_net: Link UP Aug 13 00:47:00.952439 systemd-networkd[1457]: cilium_net: Gained carrier Aug 13 00:47:00.953026 systemd-networkd[1457]: cilium_host: Gained carrier Aug 13 00:47:01.035828 kubelet[2725]: E0813 00:47:01.035791 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:01.140374 systemd-networkd[1457]: cilium_vxlan: Link UP Aug 13 00:47:01.140384 systemd-networkd[1457]: cilium_vxlan: Gained carrier Aug 13 00:47:01.431675 systemd-networkd[1457]: cilium_net: Gained IPv6LL Aug 13 00:47:01.633329 kernel: NET: Registered PF_ALG protocol family Aug 13 00:47:01.839707 systemd-networkd[1457]: cilium_host: Gained IPv6LL Aug 13 00:47:02.737540 systemd-networkd[1457]: cilium_vxlan: Gained IPv6LL Aug 13 00:47:02.784028 systemd-networkd[1457]: lxc_health: Link UP Aug 13 00:47:02.792687 systemd-networkd[1457]: lxc_health: Gained carrier Aug 13 00:47:03.392216 kernel: eth0: renamed from tmp66fdb Aug 13 00:47:03.394641 systemd-networkd[1457]: lxcfb92e460a231: Link UP Aug 13 00:47:03.395071 systemd-networkd[1457]: lxcfb92e460a231: Gained carrier Aug 13 00:47:03.452346 kernel: eth0: renamed from tmpa4861 Aug 13 00:47:03.457478 systemd-networkd[1457]: lxc15bac958272a: Link UP Aug 13 00:47:03.458012 systemd-networkd[1457]: lxc15bac958272a: Gained carrier Aug 13 00:47:03.990334 kubelet[2725]: E0813 00:47:03.989226 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:04.020383 systemd-networkd[1457]: lxc_health: Gained IPv6LL Aug 13 00:47:04.237214 kubelet[2725]: I0813 00:47:04.237155 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:47:04.239487 kubelet[2725]: E0813 00:47:04.237966 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:05.051239 kubelet[2725]: E0813 00:47:05.050875 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:05.103575 systemd-networkd[1457]: lxcfb92e460a231: Gained IPv6LL Aug 13 00:47:05.167621 systemd-networkd[1457]: lxc15bac958272a: Gained IPv6LL Aug 13 00:47:09.384320 containerd[1536]: time="2025-08-13T00:47:09.384222689Z" level=info msg="connecting to shim 66fdb887ecc9c1aaf3abe586d8b69ffd32dc8a510981950f9dc079c9a636726a" address="unix:///run/containerd/s/26e30471f5366d6e2f750be37a17d730e13deacead2786eb01ea4a9d13e6fc84" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:09.398336 containerd[1536]: time="2025-08-13T00:47:09.395539780Z" level=info msg="connecting to shim a4861a063c39b0ac1f7bfadfa36a55e13918c5f815b99fa2e754a65c4bad3b48" address="unix:///run/containerd/s/a3cedb32913c4f4b7e09339be0e5a3b676f728315ed677ab919298904c7bc3ce" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:09.469719 systemd[1]: Started 
cri-containerd-66fdb887ecc9c1aaf3abe586d8b69ffd32dc8a510981950f9dc079c9a636726a.scope - libcontainer container 66fdb887ecc9c1aaf3abe586d8b69ffd32dc8a510981950f9dc079c9a636726a. Aug 13 00:47:09.498770 systemd[1]: Started cri-containerd-a4861a063c39b0ac1f7bfadfa36a55e13918c5f815b99fa2e754a65c4bad3b48.scope - libcontainer container a4861a063c39b0ac1f7bfadfa36a55e13918c5f815b99fa2e754a65c4bad3b48. Aug 13 00:47:09.660813 containerd[1536]: time="2025-08-13T00:47:09.658328010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rwdzk,Uid:0e056e7f-4dd7-4eb2-9e1e-07586251d83f,Namespace:kube-system,Attempt:0,} returns sandbox id \"66fdb887ecc9c1aaf3abe586d8b69ffd32dc8a510981950f9dc079c9a636726a\"" Aug 13 00:47:09.662680 kubelet[2725]: E0813 00:47:09.662628 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:09.669920 containerd[1536]: time="2025-08-13T00:47:09.669858326Z" level=info msg="CreateContainer within sandbox \"66fdb887ecc9c1aaf3abe586d8b69ffd32dc8a510981950f9dc079c9a636726a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:47:09.684269 containerd[1536]: time="2025-08-13T00:47:09.684198144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqdcf,Uid:b465a816-7ff4-41a7-b20e-5acea5d9da5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4861a063c39b0ac1f7bfadfa36a55e13918c5f815b99fa2e754a65c4bad3b48\"" Aug 13 00:47:09.691310 kubelet[2725]: E0813 00:47:09.690840 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:09.704385 containerd[1536]: time="2025-08-13T00:47:09.703857082Z" level=info msg="Container 19b04105de7b68770647f895d93fcfd7ca739e59d18869796d926e15216f7d9b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:09.714823 containerd[1536]: time="2025-08-13T00:47:09.714726255Z" level=info msg="CreateContainer within sandbox \"a4861a063c39b0ac1f7bfadfa36a55e13918c5f815b99fa2e754a65c4bad3b48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:47:09.730338 containerd[1536]: time="2025-08-13T00:47:09.730251966Z" level=info msg="CreateContainer within sandbox \"66fdb887ecc9c1aaf3abe586d8b69ffd32dc8a510981950f9dc079c9a636726a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19b04105de7b68770647f895d93fcfd7ca739e59d18869796d926e15216f7d9b\"" Aug 13 00:47:09.731727 containerd[1536]: time="2025-08-13T00:47:09.731675309Z" level=info msg="StartContainer for \"19b04105de7b68770647f895d93fcfd7ca739e59d18869796d926e15216f7d9b\"" Aug 13 00:47:09.733600 containerd[1536]: time="2025-08-13T00:47:09.733547715Z" level=info msg="connecting to shim 19b04105de7b68770647f895d93fcfd7ca739e59d18869796d926e15216f7d9b" address="unix:///run/containerd/s/26e30471f5366d6e2f750be37a17d730e13deacead2786eb01ea4a9d13e6fc84" protocol=ttrpc version=3 Aug 13 00:47:09.735566 containerd[1536]: time="2025-08-13T00:47:09.735499768Z" level=info msg="Container 7e35ce2b5363b4b8c92b6b697b049acfe27cd361340c2dfe06b85bc251454b2e: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:09.749402 containerd[1536]: time="2025-08-13T00:47:09.749166606Z" level=info msg="CreateContainer within sandbox \"a4861a063c39b0ac1f7bfadfa36a55e13918c5f815b99fa2e754a65c4bad3b48\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"7e35ce2b5363b4b8c92b6b697b049acfe27cd361340c2dfe06b85bc251454b2e\"" Aug 13 00:47:09.750531 containerd[1536]: time="2025-08-13T00:47:09.750007656Z" level=info msg="StartContainer for \"7e35ce2b5363b4b8c92b6b697b049acfe27cd361340c2dfe06b85bc251454b2e\"" Aug 13 00:47:09.756436 containerd[1536]: time="2025-08-13T00:47:09.756373115Z" level=info msg="connecting to shim 7e35ce2b5363b4b8c92b6b697b049acfe27cd361340c2dfe06b85bc251454b2e" address="unix:///run/containerd/s/a3cedb32913c4f4b7e09339be0e5a3b676f728315ed677ab919298904c7bc3ce" protocol=ttrpc version=3 Aug 13 00:47:09.778649 systemd[1]: Started cri-containerd-19b04105de7b68770647f895d93fcfd7ca739e59d18869796d926e15216f7d9b.scope - libcontainer container 19b04105de7b68770647f895d93fcfd7ca739e59d18869796d926e15216f7d9b. Aug 13 00:47:09.799970 systemd[1]: Started cri-containerd-7e35ce2b5363b4b8c92b6b697b049acfe27cd361340c2dfe06b85bc251454b2e.scope - libcontainer container 7e35ce2b5363b4b8c92b6b697b049acfe27cd361340c2dfe06b85bc251454b2e. Aug 13 00:47:09.877737 containerd[1536]: time="2025-08-13T00:47:09.877591019Z" level=info msg="StartContainer for \"19b04105de7b68770647f895d93fcfd7ca739e59d18869796d926e15216f7d9b\" returns successfully" Aug 13 00:47:09.896799 containerd[1536]: time="2025-08-13T00:47:09.896661288Z" level=info msg="StartContainer for \"7e35ce2b5363b4b8c92b6b697b049acfe27cd361340c2dfe06b85bc251454b2e\" returns successfully" Aug 13 00:47:10.098563 kubelet[2725]: E0813 00:47:10.097562 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:10.104603 kubelet[2725]: E0813 00:47:10.104563 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:10.145218 kubelet[2725]: I0813 00:47:10.145120 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fqdcf" podStartSLOduration=31.145096634 podStartE2EDuration="31.145096634s" podCreationTimestamp="2025-08-13 00:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:10.144661246 +0000 UTC m=+36.587296619" watchObservedRunningTime="2025-08-13 00:47:10.145096634 +0000 UTC m=+36.587732004" Aug 13 00:47:10.178694 kubelet[2725]: I0813 00:47:10.178582 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rwdzk" podStartSLOduration=31.178453207 podStartE2EDuration="31.178453207s" podCreationTimestamp="2025-08-13 00:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:10.176466396 +0000 UTC m=+36.619101766" watchObservedRunningTime="2025-08-13 00:47:10.178453207 +0000 UTC m=+36.621088582" Aug 13 00:47:10.359394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987798197.mount: Deactivated successfully. 
Aug 13 00:47:11.107337 kubelet[2725]: E0813 00:47:11.107182 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:11.109446 kubelet[2725]: E0813 00:47:11.108263 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:12.109320 kubelet[2725]: E0813 00:47:12.109253 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:12.111451 kubelet[2725]: E0813 00:47:12.109881 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:16.283962 systemd[1]: Started sshd@9-24.144.89.98:22-139.178.68.195:42042.service - OpenSSH per-connection server daemon (139.178.68.195:42042). Aug 13 00:47:16.386180 sshd[4065]: Accepted publickey for core from 139.178.68.195 port 42042 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:16.389526 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:16.402110 systemd-logind[1500]: New session 10 of user core. Aug 13 00:47:16.409716 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:47:17.222260 sshd[4067]: Connection closed by 139.178.68.195 port 42042 Aug 13 00:47:17.224164 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:17.231526 systemd-logind[1500]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:47:17.232483 systemd[1]: sshd@9-24.144.89.98:22-139.178.68.195:42042.service: Deactivated successfully. Aug 13 00:47:17.238033 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:47:17.245873 systemd-logind[1500]: Removed session 10. Aug 13 00:47:22.242377 systemd[1]: Started sshd@10-24.144.89.98:22-139.178.68.195:42922.service - OpenSSH per-connection server daemon (139.178.68.195:42922). Aug 13 00:47:22.351036 sshd[4080]: Accepted publickey for core from 139.178.68.195 port 42922 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:22.353558 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:22.360826 systemd-logind[1500]: New session 11 of user core. Aug 13 00:47:22.371762 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:47:22.542020 sshd[4083]: Connection closed by 139.178.68.195 port 42922 Aug 13 00:47:22.544586 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:22.549961 systemd[1]: sshd@10-24.144.89.98:22-139.178.68.195:42922.service: Deactivated successfully. Aug 13 00:47:22.553546 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:47:22.559249 systemd-logind[1500]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:47:22.560746 systemd-logind[1500]: Removed session 11. Aug 13 00:47:27.569964 systemd[1]: Started sshd@11-24.144.89.98:22-139.178.68.195:42924.service - OpenSSH per-connection server daemon (139.178.68.195:42924). 
Aug 13 00:47:27.629877 sshd[4097]: Accepted publickey for core from 139.178.68.195 port 42924 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:27.632149 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:27.640863 systemd-logind[1500]: New session 12 of user core. Aug 13 00:47:27.643505 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:47:27.789148 sshd[4099]: Connection closed by 139.178.68.195 port 42924 Aug 13 00:47:27.789013 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:27.793852 systemd-logind[1500]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:47:27.794001 systemd[1]: sshd@11-24.144.89.98:22-139.178.68.195:42924.service: Deactivated successfully. Aug 13 00:47:27.797137 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:47:27.802740 systemd-logind[1500]: Removed session 12. Aug 13 00:47:32.809742 systemd[1]: Started sshd@12-24.144.89.98:22-139.178.68.195:36762.service - OpenSSH per-connection server daemon (139.178.68.195:36762). Aug 13 00:47:32.889560 sshd[4111]: Accepted publickey for core from 139.178.68.195 port 36762 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:32.891970 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:32.899797 systemd-logind[1500]: New session 13 of user core. Aug 13 00:47:32.908643 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:47:33.097624 sshd[4113]: Connection closed by 139.178.68.195 port 36762 Aug 13 00:47:33.098656 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:33.117542 systemd[1]: sshd@12-24.144.89.98:22-139.178.68.195:36762.service: Deactivated successfully. Aug 13 00:47:33.122921 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:47:33.124425 systemd-logind[1500]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:47:33.131470 systemd[1]: Started sshd@13-24.144.89.98:22-139.178.68.195:36764.service - OpenSSH per-connection server daemon (139.178.68.195:36764). Aug 13 00:47:33.133034 systemd-logind[1500]: Removed session 13. Aug 13 00:47:33.208040 sshd[4125]: Accepted publickey for core from 139.178.68.195 port 36764 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:33.211052 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:33.218454 systemd-logind[1500]: New session 14 of user core. Aug 13 00:47:33.225655 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:47:33.522936 sshd[4127]: Connection closed by 139.178.68.195 port 36764 Aug 13 00:47:33.523471 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:33.542834 systemd[1]: sshd@13-24.144.89.98:22-139.178.68.195:36764.service: Deactivated successfully. Aug 13 00:47:33.549139 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:47:33.553161 systemd-logind[1500]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:47:33.561045 systemd[1]: Started sshd@14-24.144.89.98:22-139.178.68.195:36766.service - OpenSSH per-connection server daemon (139.178.68.195:36766). Aug 13 00:47:33.563474 systemd-logind[1500]: Removed session 14. 
Aug 13 00:47:33.674390 sshd[4137]: Accepted publickey for core from 139.178.68.195 port 36766 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:33.677567 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:33.690741 systemd-logind[1500]: New session 15 of user core. Aug 13 00:47:33.695771 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:47:33.989341 sshd[4139]: Connection closed by 139.178.68.195 port 36766 Aug 13 00:47:33.989797 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:33.997718 systemd[1]: sshd@14-24.144.89.98:22-139.178.68.195:36766.service: Deactivated successfully. Aug 13 00:47:34.003987 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:47:34.019101 systemd-logind[1500]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:47:34.021091 systemd-logind[1500]: Removed session 15. Aug 13 00:47:39.008210 systemd[1]: Started sshd@15-24.144.89.98:22-139.178.68.195:36774.service - OpenSSH per-connection server daemon (139.178.68.195:36774). Aug 13 00:47:39.083866 sshd[4154]: Accepted publickey for core from 139.178.68.195 port 36774 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:39.087382 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:39.096700 systemd-logind[1500]: New session 16 of user core. Aug 13 00:47:39.108964 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:47:39.290999 sshd[4156]: Connection closed by 139.178.68.195 port 36774 Aug 13 00:47:39.290733 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:39.296652 systemd-logind[1500]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:47:39.297616 systemd[1]: sshd@15-24.144.89.98:22-139.178.68.195:36774.service: Deactivated successfully. Aug 13 00:47:39.302568 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:47:39.308683 systemd-logind[1500]: Removed session 16. Aug 13 00:47:44.309027 systemd[1]: Started sshd@16-24.144.89.98:22-139.178.68.195:41580.service - OpenSSH per-connection server daemon (139.178.68.195:41580). Aug 13 00:47:44.377899 sshd[4170]: Accepted publickey for core from 139.178.68.195 port 41580 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:44.379951 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:44.387352 systemd-logind[1500]: New session 17 of user core. Aug 13 00:47:44.396589 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:47:44.560178 sshd[4172]: Connection closed by 139.178.68.195 port 41580 Aug 13 00:47:44.560893 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:44.576738 systemd[1]: sshd@16-24.144.89.98:22-139.178.68.195:41580.service: Deactivated successfully. Aug 13 00:47:44.580315 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:47:44.583606 systemd-logind[1500]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:47:44.587174 systemd[1]: Started sshd@17-24.144.89.98:22-139.178.68.195:41588.service - OpenSSH per-connection server daemon (139.178.68.195:41588). Aug 13 00:47:44.589135 systemd-logind[1500]: Removed session 17. 
Aug 13 00:47:44.657225 sshd[4184]: Accepted publickey for core from 139.178.68.195 port 41588 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:44.659108 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:44.666689 systemd-logind[1500]: New session 18 of user core. Aug 13 00:47:44.675680 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:47:45.045352 sshd[4186]: Connection closed by 139.178.68.195 port 41588 Aug 13 00:47:45.047447 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:45.059234 systemd[1]: sshd@17-24.144.89.98:22-139.178.68.195:41588.service: Deactivated successfully. Aug 13 00:47:45.062195 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:47:45.063653 systemd-logind[1500]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:47:45.070256 systemd[1]: Started sshd@18-24.144.89.98:22-139.178.68.195:41604.service - OpenSSH per-connection server daemon (139.178.68.195:41604). Aug 13 00:47:45.071977 systemd-logind[1500]: Removed session 18. Aug 13 00:47:45.177721 sshd[4196]: Accepted publickey for core from 139.178.68.195 port 41604 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:45.180094 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:45.188216 systemd-logind[1500]: New session 19 of user core. Aug 13 00:47:45.203657 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:47:47.221672 sshd[4198]: Connection closed by 139.178.68.195 port 41604 Aug 13 00:47:47.222264 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:47.242688 systemd[1]: sshd@18-24.144.89.98:22-139.178.68.195:41604.service: Deactivated successfully. Aug 13 00:47:47.250724 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:47:47.251434 systemd[1]: session-19.scope: Consumed 751ms CPU time, 66.3M memory peak. Aug 13 00:47:47.253415 systemd-logind[1500]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:47:47.261042 systemd-logind[1500]: Removed session 19. Aug 13 00:47:47.269463 systemd[1]: Started sshd@19-24.144.89.98:22-139.178.68.195:41612.service - OpenSSH per-connection server daemon (139.178.68.195:41612). Aug 13 00:47:47.365645 sshd[4215]: Accepted publickey for core from 139.178.68.195 port 41612 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:47.371412 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:47.383568 systemd-logind[1500]: New session 20 of user core. Aug 13 00:47:47.399079 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:47:47.847951 sshd[4217]: Connection closed by 139.178.68.195 port 41612 Aug 13 00:47:47.849342 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:47.862323 systemd[1]: sshd@19-24.144.89.98:22-139.178.68.195:41612.service: Deactivated successfully. Aug 13 00:47:47.866503 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:47:47.872738 systemd-logind[1500]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:47:47.875789 systemd-logind[1500]: Removed session 20. Aug 13 00:47:47.879125 systemd[1]: Started sshd@20-24.144.89.98:22-139.178.68.195:41624.service - OpenSSH per-connection server daemon (139.178.68.195:41624). 
Aug 13 00:47:47.948428 sshd[4227]: Accepted publickey for core from 139.178.68.195 port 41624 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:47.950440 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:47.960240 systemd-logind[1500]: New session 21 of user core. Aug 13 00:47:47.968611 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:47:48.119048 sshd[4229]: Connection closed by 139.178.68.195 port 41624 Aug 13 00:47:48.119593 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:48.127925 systemd[1]: sshd@20-24.144.89.98:22-139.178.68.195:41624.service: Deactivated successfully. Aug 13 00:47:48.132729 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:47:48.134836 systemd-logind[1500]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:47:48.137848 systemd-logind[1500]: Removed session 21. Aug 13 00:47:49.812789 kubelet[2725]: E0813 00:47:49.811811 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:53.139683 systemd[1]: Started sshd@21-24.144.89.98:22-139.178.68.195:40628.service - OpenSSH per-connection server daemon (139.178.68.195:40628). Aug 13 00:47:53.220642 sshd[4244]: Accepted publickey for core from 139.178.68.195 port 40628 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:53.223481 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:53.235444 systemd-logind[1500]: New session 22 of user core. Aug 13 00:47:53.242760 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:47:53.412665 sshd[4246]: Connection closed by 139.178.68.195 port 40628 Aug 13 00:47:53.415365 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:53.421885 systemd[1]: sshd@21-24.144.89.98:22-139.178.68.195:40628.service: Deactivated successfully. Aug 13 00:47:53.427030 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:47:53.432437 systemd-logind[1500]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:47:53.436168 systemd-logind[1500]: Removed session 22. Aug 13 00:47:56.811012 kubelet[2725]: E0813 00:47:56.810964 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:58.436584 systemd[1]: Started sshd@22-24.144.89.98:22-139.178.68.195:40640.service - OpenSSH per-connection server daemon (139.178.68.195:40640). Aug 13 00:47:58.511906 sshd[4259]: Accepted publickey for core from 139.178.68.195 port 40640 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:58.515560 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:58.524196 systemd-logind[1500]: New session 23 of user core. Aug 13 00:47:58.543620 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:47:58.687942 sshd[4261]: Connection closed by 139.178.68.195 port 40640 Aug 13 00:47:58.689190 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:58.697161 systemd[1]: sshd@22-24.144.89.98:22-139.178.68.195:40640.service: Deactivated successfully. 
Aug 13 00:47:58.700443 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:47:58.702369 systemd-logind[1500]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:47:58.704596 systemd-logind[1500]: Removed session 23. Aug 13 00:48:03.706952 systemd[1]: Started sshd@23-24.144.89.98:22-139.178.68.195:47384.service - OpenSSH per-connection server daemon (139.178.68.195:47384). Aug 13 00:48:03.781237 sshd[4273]: Accepted publickey for core from 139.178.68.195 port 47384 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:48:03.785393 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:48:03.793381 systemd-logind[1500]: New session 24 of user core. Aug 13 00:48:03.800684 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:48:03.995433 sshd[4275]: Connection closed by 139.178.68.195 port 47384 Aug 13 00:48:03.996246 sshd-session[4273]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:04.002137 systemd[1]: sshd@23-24.144.89.98:22-139.178.68.195:47384.service: Deactivated successfully. Aug 13 00:48:04.007119 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:48:04.010417 systemd-logind[1500]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:48:04.014504 systemd-logind[1500]: Removed session 24. Aug 13 00:48:05.813139 kubelet[2725]: E0813 00:48:05.812059 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:06.811814 kubelet[2725]: E0813 00:48:06.811542 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:06.811814 kubelet[2725]: E0813 00:48:06.811652 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:09.016357 systemd[1]: Started sshd@24-24.144.89.98:22-139.178.68.195:47392.service - OpenSSH per-connection server daemon (139.178.68.195:47392). Aug 13 00:48:09.144004 sshd[4287]: Accepted publickey for core from 139.178.68.195 port 47392 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:48:09.147135 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:48:09.157412 systemd-logind[1500]: New session 25 of user core. Aug 13 00:48:09.163960 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 00:48:09.339139 sshd[4289]: Connection closed by 139.178.68.195 port 47392 Aug 13 00:48:09.340732 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:09.354837 systemd[1]: sshd@24-24.144.89.98:22-139.178.68.195:47392.service: Deactivated successfully. Aug 13 00:48:09.360615 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:48:09.364693 systemd-logind[1500]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:48:09.375481 systemd[1]: Started sshd@25-24.144.89.98:22-139.178.68.195:47398.service - OpenSSH per-connection server daemon (139.178.68.195:47398). Aug 13 00:48:09.377046 systemd-logind[1500]: Removed session 25. 
Aug 13 00:48:09.485515 sshd[4301]: Accepted publickey for core from 139.178.68.195 port 47398 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:48:09.488913 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:48:09.504392 systemd-logind[1500]: New session 26 of user core. Aug 13 00:48:09.516053 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:48:11.125547 containerd[1536]: time="2025-08-13T00:48:11.125416588Z" level=info msg="StopContainer for \"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\" with timeout 30 (s)" Aug 13 00:48:11.131376 containerd[1536]: time="2025-08-13T00:48:11.131330058Z" level=info msg="Stop container \"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\" with signal terminated" Aug 13 00:48:11.152303 systemd[1]: cri-containerd-76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86.scope: Deactivated successfully. Aug 13 00:48:11.161388 containerd[1536]: time="2025-08-13T00:48:11.161124718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\" id:\"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\" pid:3303 exited_at:{seconds:1755046091 nanos:160122899}" Aug 13 00:48:11.161388 containerd[1536]: time="2025-08-13T00:48:11.161171616Z" level=info msg="received exit event container_id:\"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\" id:\"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\" pid:3303 exited_at:{seconds:1755046091 nanos:160122899}" Aug 13 00:48:11.175474 containerd[1536]: time="2025-08-13T00:48:11.175398167Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:48:11.182361 containerd[1536]: time="2025-08-13T00:48:11.182263859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" id:\"5de5dc8f4839de3fc6596509c5536f69af73ff77977edccc0cfc10aa49f29fc4\" pid:4331 exited_at:{seconds:1755046091 nanos:181692536}" Aug 13 00:48:11.186236 containerd[1536]: time="2025-08-13T00:48:11.186043374Z" level=info msg="StopContainer for \"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" with timeout 2 (s)" Aug 13 00:48:11.186640 containerd[1536]: time="2025-08-13T00:48:11.186609280Z" level=info msg="Stop container \"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" with signal terminated" Aug 13 00:48:11.208094 systemd-networkd[1457]: lxc_health: Link DOWN Aug 13 00:48:11.208104 systemd-networkd[1457]: lxc_health: Lost carrier Aug 13 00:48:11.208683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86-rootfs.mount: Deactivated successfully. 
Aug 13 00:48:11.225904 containerd[1536]: time="2025-08-13T00:48:11.225835311Z" level=info msg="StopContainer for \"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\" returns successfully" Aug 13 00:48:11.230502 containerd[1536]: time="2025-08-13T00:48:11.230327908Z" level=info msg="StopPodSandbox for \"ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b\"" Aug 13 00:48:11.234971 systemd[1]: cri-containerd-99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e.scope: Deactivated successfully. Aug 13 00:48:11.235367 systemd[1]: cri-containerd-99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e.scope: Consumed 10.211s CPU time, 193.3M memory peak, 69.4M read from disk, 13.3M written to disk. Aug 13 00:48:11.238672 containerd[1536]: time="2025-08-13T00:48:11.238524588Z" level=info msg="Container to stop \"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:48:11.240649 containerd[1536]: time="2025-08-13T00:48:11.240565103Z" level=info msg="received exit event container_id:\"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" id:\"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" pid:3373 exited_at:{seconds:1755046091 nanos:237564627}" Aug 13 00:48:11.241399 containerd[1536]: time="2025-08-13T00:48:11.241343248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" id:\"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" pid:3373 exited_at:{seconds:1755046091 nanos:237564627}" Aug 13 00:48:11.256809 systemd[1]: cri-containerd-ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b.scope: Deactivated successfully. Aug 13 00:48:11.259657 containerd[1536]: time="2025-08-13T00:48:11.259510438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b\" id:\"ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b\" pid:2940 exit_status:137 exited_at:{seconds:1755046091 nanos:259005395}" Aug 13 00:48:11.289388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e-rootfs.mount: Deactivated successfully. 
Aug 13 00:48:11.299579 containerd[1536]: time="2025-08-13T00:48:11.299323729Z" level=info msg="StopContainer for \"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" returns successfully" Aug 13 00:48:11.304922 containerd[1536]: time="2025-08-13T00:48:11.304844504Z" level=info msg="StopPodSandbox for \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\"" Aug 13 00:48:11.305088 containerd[1536]: time="2025-08-13T00:48:11.304945848Z" level=info msg="Container to stop \"19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:48:11.305088 containerd[1536]: time="2025-08-13T00:48:11.304966517Z" level=info msg="Container to stop \"232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:48:11.305088 containerd[1536]: time="2025-08-13T00:48:11.304978410Z" level=info msg="Container to stop \"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:48:11.305088 containerd[1536]: time="2025-08-13T00:48:11.304989996Z" level=info msg="Container to stop \"1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:48:11.305088 containerd[1536]: time="2025-08-13T00:48:11.305000997Z" level=info msg="Container to stop \"bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:48:11.333817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b-rootfs.mount: Deactivated successfully. Aug 13 00:48:11.339519 containerd[1536]: time="2025-08-13T00:48:11.339479738Z" level=info msg="shim disconnected" id=ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b namespace=k8s.io Aug 13 00:48:11.339880 containerd[1536]: time="2025-08-13T00:48:11.339859780Z" level=warning msg="cleaning up after shim disconnected" id=ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b namespace=k8s.io Aug 13 00:48:11.342723 containerd[1536]: time="2025-08-13T00:48:11.339927789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:48:11.342769 systemd[1]: cri-containerd-38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55.scope: Deactivated successfully. Aug 13 00:48:11.370319 containerd[1536]: time="2025-08-13T00:48:11.370254513Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" id:\"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" pid:2849 exit_status:137 exited_at:{seconds:1755046091 nanos:345033532}" Aug 13 00:48:11.374557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b-shm.mount: Deactivated successfully. 
Aug 13 00:48:11.375510 containerd[1536]: time="2025-08-13T00:48:11.374754832Z" level=info msg="received exit event sandbox_id:\"ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b\" exit_status:137 exited_at:{seconds:1755046091 nanos:259005395}" Aug 13 00:48:11.385615 containerd[1536]: time="2025-08-13T00:48:11.385251599Z" level=info msg="TearDown network for sandbox \"ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b\" successfully" Aug 13 00:48:11.385615 containerd[1536]: time="2025-08-13T00:48:11.385322287Z" level=info msg="StopPodSandbox for \"ca670a7329b1c669685092df69ef71fb1c2e3598cd0ea0da57ec082808af154b\" returns successfully" Aug 13 00:48:11.399539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55-rootfs.mount: Deactivated successfully. Aug 13 00:48:11.403690 containerd[1536]: time="2025-08-13T00:48:11.403417291Z" level=info msg="shim disconnected" id=38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55 namespace=k8s.io Aug 13 00:48:11.403690 containerd[1536]: time="2025-08-13T00:48:11.403468952Z" level=warning msg="cleaning up after shim disconnected" id=38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55 namespace=k8s.io Aug 13 00:48:11.403690 containerd[1536]: time="2025-08-13T00:48:11.403484348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:48:11.404792 containerd[1536]: time="2025-08-13T00:48:11.404696345Z" level=info msg="received exit event sandbox_id:\"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" exit_status:137 exited_at:{seconds:1755046091 nanos:345033532}" Aug 13 00:48:11.409170 containerd[1536]: time="2025-08-13T00:48:11.408988537Z" level=info msg="TearDown network for sandbox \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" successfully" Aug 13 00:48:11.409170 containerd[1536]: time="2025-08-13T00:48:11.409030908Z" level=info msg="StopPodSandbox for \"38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55\" returns successfully" Aug 13 00:48:11.470310 kubelet[2725]: I0813 00:48:11.469935 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33134c88-db86-41a5-80f3-f6590ae0e405-cilium-config-path\") pod \"33134c88-db86-41a5-80f3-f6590ae0e405\" (UID: \"33134c88-db86-41a5-80f3-f6590ae0e405\") " Aug 13 00:48:11.470310 kubelet[2725]: I0813 00:48:11.469983 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-host-proc-sys-kernel\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.470310 kubelet[2725]: I0813 00:48:11.470049 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-host-proc-sys-net\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.470310 kubelet[2725]: I0813 00:48:11.470073 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-xtables-lock\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.470310 kubelet[2725]: 
I0813 00:48:11.470098 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7hkc\" (UniqueName: \"kubernetes.io/projected/3b65b404-6d4f-41e4-9eae-a52e111be624-kube-api-access-v7hkc\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.470310 kubelet[2725]: I0813 00:48:11.470114 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qs9z\" (UniqueName: \"kubernetes.io/projected/33134c88-db86-41a5-80f3-f6590ae0e405-kube-api-access-8qs9z\") pod \"33134c88-db86-41a5-80f3-f6590ae0e405\" (UID: \"33134c88-db86-41a5-80f3-f6590ae0e405\") " Aug 13 00:48:11.470931 kubelet[2725]: I0813 00:48:11.470129 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-cgroup\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.470931 kubelet[2725]: I0813 00:48:11.470143 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-lib-modules\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.470931 kubelet[2725]: I0813 00:48:11.470158 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-hostproc\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.470931 kubelet[2725]: I0813 00:48:11.470175 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b65b404-6d4f-41e4-9eae-a52e111be624-hubble-tls\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.470931 kubelet[2725]: I0813 00:48:11.470193 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-config-path\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.470931 kubelet[2725]: I0813 00:48:11.470208 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-etc-cni-netd\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.471937 kubelet[2725]: I0813 00:48:11.470222 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-run\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.471937 kubelet[2725]: I0813 00:48:11.470243 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b65b404-6d4f-41e4-9eae-a52e111be624-clustermesh-secrets\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.471937 kubelet[2725]: I0813 00:48:11.470257 2725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-bpf-maps\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.471937 kubelet[2725]: I0813 00:48:11.470273 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cni-path\") pod \"3b65b404-6d4f-41e4-9eae-a52e111be624\" (UID: \"3b65b404-6d4f-41e4-9eae-a52e111be624\") " Aug 13 00:48:11.471937 kubelet[2725]: I0813 00:48:11.471098 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:48:11.471937 kubelet[2725]: I0813 00:48:11.471125 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cni-path" (OuterVolumeSpecName: "cni-path") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:48:11.472168 kubelet[2725]: I0813 00:48:11.471178 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-hostproc" (OuterVolumeSpecName: "hostproc") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:48:11.475312 kubelet[2725]: I0813 00:48:11.474232 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33134c88-db86-41a5-80f3-f6590ae0e405-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "33134c88-db86-41a5-80f3-f6590ae0e405" (UID: "33134c88-db86-41a5-80f3-f6590ae0e405"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:48:11.475312 kubelet[2725]: I0813 00:48:11.474330 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:48:11.475312 kubelet[2725]: I0813 00:48:11.474351 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:48:11.475312 kubelet[2725]: I0813 00:48:11.474365 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:48:11.476703 kubelet[2725]: I0813 00:48:11.476654 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:48:11.480199 kubelet[2725]: I0813 00:48:11.477148 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:48:11.484241 kubelet[2725]: I0813 00:48:11.479021 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:48:11.484641 kubelet[2725]: I0813 00:48:11.480142 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:48:11.484732 kubelet[2725]: I0813 00:48:11.482213 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:48:11.487956 kubelet[2725]: I0813 00:48:11.487897 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b65b404-6d4f-41e4-9eae-a52e111be624-kube-api-access-v7hkc" (OuterVolumeSpecName: "kube-api-access-v7hkc") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "kube-api-access-v7hkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:48:11.488250 kubelet[2725]: I0813 00:48:11.488223 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b65b404-6d4f-41e4-9eae-a52e111be624-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:48:11.488435 kubelet[2725]: I0813 00:48:11.488410 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b65b404-6d4f-41e4-9eae-a52e111be624-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3b65b404-6d4f-41e4-9eae-a52e111be624" (UID: "3b65b404-6d4f-41e4-9eae-a52e111be624"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:48:11.488588 kubelet[2725]: I0813 00:48:11.488390 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33134c88-db86-41a5-80f3-f6590ae0e405-kube-api-access-8qs9z" (OuterVolumeSpecName: "kube-api-access-8qs9z") pod "33134c88-db86-41a5-80f3-f6590ae0e405" (UID: "33134c88-db86-41a5-80f3-f6590ae0e405"). InnerVolumeSpecName "kube-api-access-8qs9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:48:11.572190 kubelet[2725]: I0813 00:48:11.571909 2725 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-cgroup\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572190 kubelet[2725]: I0813 00:48:11.571965 2725 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-hostproc\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572190 kubelet[2725]: I0813 00:48:11.571982 2725 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b65b404-6d4f-41e4-9eae-a52e111be624-hubble-tls\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572190 kubelet[2725]: I0813 00:48:11.571993 2725 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-lib-modules\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572190 kubelet[2725]: I0813 00:48:11.572008 2725 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-config-path\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572190 kubelet[2725]: I0813 00:48:11.572021 2725 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cilium-run\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572190 kubelet[2725]: I0813 00:48:11.572035 2725 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b65b404-6d4f-41e4-9eae-a52e111be624-clustermesh-secrets\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572190 kubelet[2725]: I0813 00:48:11.572047 2725 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-bpf-maps\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572770 kubelet[2725]: I0813 00:48:11.572059 2725 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-etc-cni-netd\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572770 kubelet[2725]: I0813 00:48:11.572073 2725 
reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-cni-path\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572770 kubelet[2725]: I0813 00:48:11.572086 2725 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33134c88-db86-41a5-80f3-f6590ae0e405-cilium-config-path\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572770 kubelet[2725]: I0813 00:48:11.572098 2725 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-host-proc-sys-kernel\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572770 kubelet[2725]: I0813 00:48:11.572110 2725 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-xtables-lock\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572770 kubelet[2725]: I0813 00:48:11.572123 2725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7hkc\" (UniqueName: \"kubernetes.io/projected/3b65b404-6d4f-41e4-9eae-a52e111be624-kube-api-access-v7hkc\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572770 kubelet[2725]: I0813 00:48:11.572137 2725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qs9z\" (UniqueName: \"kubernetes.io/projected/33134c88-db86-41a5-80f3-f6590ae0e405-kube-api-access-8qs9z\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.572770 kubelet[2725]: I0813 00:48:11.572151 2725 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b65b404-6d4f-41e4-9eae-a52e111be624-host-proc-sys-net\") on node \"ci-4372.1.0-a-9a72d3155b\" DevicePath \"\"" Aug 13 00:48:11.823748 systemd[1]: Removed slice kubepods-burstable-pod3b65b404_6d4f_41e4_9eae_a52e111be624.slice - libcontainer container kubepods-burstable-pod3b65b404_6d4f_41e4_9eae_a52e111be624.slice. Aug 13 00:48:11.824042 systemd[1]: kubepods-burstable-pod3b65b404_6d4f_41e4_9eae_a52e111be624.slice: Consumed 10.350s CPU time, 193.6M memory peak, 69.4M read from disk, 13.3M written to disk. Aug 13 00:48:11.827014 systemd[1]: Removed slice kubepods-besteffort-pod33134c88_db86_41a5_80f3_f6590ae0e405.slice - libcontainer container kubepods-besteffort-pod33134c88_db86_41a5_80f3_f6590ae0e405.slice. Aug 13 00:48:12.205275 systemd[1]: var-lib-kubelet-pods-33134c88\x2ddb86\x2d41a5\x2d80f3\x2df6590ae0e405-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8qs9z.mount: Deactivated successfully. Aug 13 00:48:12.205529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38e29172c7ee42917efed9b821dd1026f4aa33c0c4895cca955653b1b5adbd55-shm.mount: Deactivated successfully. Aug 13 00:48:12.205673 systemd[1]: var-lib-kubelet-pods-3b65b404\x2d6d4f\x2d41e4\x2d9eae\x2da52e111be624-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv7hkc.mount: Deactivated successfully. Aug 13 00:48:12.205777 systemd[1]: var-lib-kubelet-pods-3b65b404\x2d6d4f\x2d41e4\x2d9eae\x2da52e111be624-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Aug 13 00:48:12.206328 systemd[1]: var-lib-kubelet-pods-3b65b404\x2d6d4f\x2d41e4\x2d9eae\x2da52e111be624-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:48:12.353452 kubelet[2725]: I0813 00:48:12.350410 2725 scope.go:117] "RemoveContainer" containerID="99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e" Aug 13 00:48:12.359233 containerd[1536]: time="2025-08-13T00:48:12.359138043Z" level=info msg="RemoveContainer for \"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\"" Aug 13 00:48:12.375648 containerd[1536]: time="2025-08-13T00:48:12.375352744Z" level=info msg="RemoveContainer for \"99a1bcc5c6e3e430f10d7c25e6d522d2ef6a3410f1270c2d249b4149841ca87e\" returns successfully" Aug 13 00:48:12.376934 kubelet[2725]: I0813 00:48:12.376830 2725 scope.go:117] "RemoveContainer" containerID="232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5" Aug 13 00:48:12.381660 containerd[1536]: time="2025-08-13T00:48:12.381542816Z" level=info msg="RemoveContainer for \"232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5\"" Aug 13 00:48:12.389638 containerd[1536]: time="2025-08-13T00:48:12.389380395Z" level=info msg="RemoveContainer for \"232b983ead635714e740c9ff0ef5a380e3fe5feb2a3a6c48160d11abe06132f5\" returns successfully" Aug 13 00:48:12.390323 kubelet[2725]: I0813 00:48:12.390139 2725 scope.go:117] "RemoveContainer" containerID="19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51" Aug 13 00:48:12.396606 containerd[1536]: time="2025-08-13T00:48:12.396038234Z" level=info msg="RemoveContainer for \"19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51\"" Aug 13 00:48:12.402308 containerd[1536]: time="2025-08-13T00:48:12.402225135Z" level=info msg="RemoveContainer for \"19cbe6876ac3b34bbf2af521fc62536762d6172a2edcca753d7ce5a3c044ec51\" returns successfully" Aug 13 00:48:12.402766 kubelet[2725]: I0813 00:48:12.402711 2725 scope.go:117] "RemoveContainer" containerID="bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11" Aug 13 00:48:12.407922 containerd[1536]: time="2025-08-13T00:48:12.407827894Z" level=info msg="RemoveContainer for \"bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11\"" Aug 13 00:48:12.411692 containerd[1536]: time="2025-08-13T00:48:12.411600627Z" level=info msg="RemoveContainer for \"bec16fb793ba83bcd8c7d73d77af8e3b4cb03ef60291d35752d131f85087ce11\" returns successfully" Aug 13 00:48:12.412510 kubelet[2725]: I0813 00:48:12.412094 2725 scope.go:117] "RemoveContainer" containerID="1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14" Aug 13 00:48:12.419355 containerd[1536]: time="2025-08-13T00:48:12.419274430Z" level=info msg="RemoveContainer for \"1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14\"" Aug 13 00:48:12.422931 containerd[1536]: time="2025-08-13T00:48:12.422870031Z" level=info msg="RemoveContainer for \"1bdcdb1e9644586b4a5dc45c1211766845050c1e38f911a6a0bcd733873b2b14\" returns successfully" Aug 13 00:48:12.423576 kubelet[2725]: I0813 00:48:12.423507 2725 scope.go:117] "RemoveContainer" containerID="76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86" Aug 13 00:48:12.429067 containerd[1536]: time="2025-08-13T00:48:12.429016131Z" level=info msg="RemoveContainer for \"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\"" Aug 13 00:48:12.433675 containerd[1536]: time="2025-08-13T00:48:12.433575818Z" level=info msg="RemoveContainer for 
\"76cf96ed889a63afdbe8eed1e5ede8f4c96d799e956168849f3584abf573da86\" returns successfully" Aug 13 00:48:12.653933 update_engine[1502]: I20250813 00:48:12.653618 1502 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 00:48:12.653933 update_engine[1502]: I20250813 00:48:12.653707 1502 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 00:48:12.657436 update_engine[1502]: I20250813 00:48:12.656728 1502 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 00:48:12.658154 update_engine[1502]: I20250813 00:48:12.658109 1502 omaha_request_params.cc:62] Current group set to beta Aug 13 00:48:12.658600 update_engine[1502]: I20250813 00:48:12.658563 1502 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 00:48:12.658720 update_engine[1502]: I20250813 00:48:12.658700 1502 update_attempter.cc:643] Scheduling an action processor start. Aug 13 00:48:12.658926 update_engine[1502]: I20250813 00:48:12.658880 1502 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 00:48:12.660745 update_engine[1502]: I20250813 00:48:12.659083 1502 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 00:48:12.660745 update_engine[1502]: I20250813 00:48:12.659199 1502 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 00:48:12.660745 update_engine[1502]: I20250813 00:48:12.659214 1502 omaha_request_action.cc:272] Request: Aug 13 00:48:12.660745 update_engine[1502]: Aug 13 00:48:12.660745 update_engine[1502]: Aug 13 00:48:12.660745 update_engine[1502]: Aug 13 00:48:12.660745 update_engine[1502]: Aug 13 00:48:12.660745 update_engine[1502]: Aug 13 00:48:12.660745 update_engine[1502]: Aug 13 00:48:12.660745 update_engine[1502]: Aug 13 00:48:12.660745 update_engine[1502]: Aug 13 00:48:12.660745 update_engine[1502]: I20250813 00:48:12.659222 1502 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:48:12.678762 update_engine[1502]: I20250813 00:48:12.678705 1502 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:48:12.679724 update_engine[1502]: I20250813 00:48:12.679621 1502 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:48:12.683570 update_engine[1502]: E20250813 00:48:12.682989 1502 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:48:12.685215 update_engine[1502]: I20250813 00:48:12.685158 1502 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 00:48:12.685581 locksmithd[1534]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 00:48:13.058078 sshd[4303]: Connection closed by 139.178.68.195 port 47398 Aug 13 00:48:13.059060 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:13.073624 systemd[1]: sshd@25-24.144.89.98:22-139.178.68.195:47398.service: Deactivated successfully. Aug 13 00:48:13.077984 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:48:13.080649 systemd-logind[1500]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:48:13.086575 systemd-logind[1500]: Removed session 26. Aug 13 00:48:13.089573 systemd[1]: Started sshd@26-24.144.89.98:22-139.178.68.195:44352.service - OpenSSH per-connection server daemon (139.178.68.195:44352). 
Aug 13 00:48:13.190308 sshd[4456]: Accepted publickey for core from 139.178.68.195 port 44352 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:48:13.192467 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:48:13.202361 systemd-logind[1500]: New session 27 of user core. Aug 13 00:48:13.212674 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:48:13.824394 kubelet[2725]: I0813 00:48:13.824321 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33134c88-db86-41a5-80f3-f6590ae0e405" path="/var/lib/kubelet/pods/33134c88-db86-41a5-80f3-f6590ae0e405/volumes" Aug 13 00:48:13.825020 kubelet[2725]: I0813 00:48:13.824989 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b65b404-6d4f-41e4-9eae-a52e111be624" path="/var/lib/kubelet/pods/3b65b404-6d4f-41e4-9eae-a52e111be624/volumes" Aug 13 00:48:13.956781 kubelet[2725]: E0813 00:48:13.956685 2725 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:48:14.336348 sshd[4458]: Connection closed by 139.178.68.195 port 44352 Aug 13 00:48:14.336888 sshd-session[4456]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:14.355977 systemd[1]: sshd@26-24.144.89.98:22-139.178.68.195:44352.service: Deactivated successfully. Aug 13 00:48:14.363176 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:48:14.364692 systemd-logind[1500]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:48:14.376059 systemd[1]: Started sshd@27-24.144.89.98:22-139.178.68.195:44356.service - OpenSSH per-connection server daemon (139.178.68.195:44356). Aug 13 00:48:14.379056 systemd-logind[1500]: Removed session 27. 
Aug 13 00:48:14.433956 kubelet[2725]: E0813 00:48:14.433873 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b65b404-6d4f-41e4-9eae-a52e111be624" containerName="apply-sysctl-overwrites" Aug 13 00:48:14.433956 kubelet[2725]: E0813 00:48:14.433941 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b65b404-6d4f-41e4-9eae-a52e111be624" containerName="mount-bpf-fs" Aug 13 00:48:14.433956 kubelet[2725]: E0813 00:48:14.433953 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="33134c88-db86-41a5-80f3-f6590ae0e405" containerName="cilium-operator" Aug 13 00:48:14.433956 kubelet[2725]: E0813 00:48:14.433964 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b65b404-6d4f-41e4-9eae-a52e111be624" containerName="clean-cilium-state" Aug 13 00:48:14.434371 kubelet[2725]: E0813 00:48:14.433976 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b65b404-6d4f-41e4-9eae-a52e111be624" containerName="cilium-agent" Aug 13 00:48:14.434371 kubelet[2725]: E0813 00:48:14.434016 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b65b404-6d4f-41e4-9eae-a52e111be624" containerName="mount-cgroup" Aug 13 00:48:14.434371 kubelet[2725]: I0813 00:48:14.434071 2725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b65b404-6d4f-41e4-9eae-a52e111be624" containerName="cilium-agent" Aug 13 00:48:14.434371 kubelet[2725]: I0813 00:48:14.434095 2725 memory_manager.go:354] "RemoveStaleState removing state" podUID="33134c88-db86-41a5-80f3-f6590ae0e405" containerName="cilium-operator" Aug 13 00:48:14.454923 kubelet[2725]: W0813 00:48:14.453116 2725 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4372.1.0-a-9a72d3155b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4372.1.0-a-9a72d3155b' and this object Aug 13 00:48:14.454923 kubelet[2725]: E0813 00:48:14.453174 2725 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4372.1.0-a-9a72d3155b\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372.1.0-a-9a72d3155b' and this object" logger="UnhandledError" Aug 13 00:48:14.458111 systemd[1]: Created slice kubepods-burstable-pod49c764c6_0436_424c_94b6_9eeda6d8dec8.slice - libcontainer container kubepods-burstable-pod49c764c6_0436_424c_94b6_9eeda6d8dec8.slice. Aug 13 00:48:14.490589 sshd[4469]: Accepted publickey for core from 139.178.68.195 port 44356 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:48:14.496707 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:48:14.504985 systemd-logind[1500]: New session 28 of user core. Aug 13 00:48:14.512148 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 00:48:14.573014 sshd[4471]: Connection closed by 139.178.68.195 port 44356 Aug 13 00:48:14.573847 sshd-session[4469]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:14.587959 systemd[1]: sshd@27-24.144.89.98:22-139.178.68.195:44356.service: Deactivated successfully. Aug 13 00:48:14.592173 systemd[1]: session-28.scope: Deactivated successfully. 
Aug 13 00:48:14.592684 kubelet[2725]: I0813 00:48:14.592605 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49c764c6-0436-424c-94b6-9eeda6d8dec8-cilium-cgroup\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.592781 kubelet[2725]: I0813 00:48:14.592693 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49c764c6-0436-424c-94b6-9eeda6d8dec8-clustermesh-secrets\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.592781 kubelet[2725]: I0813 00:48:14.592723 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49c764c6-0436-424c-94b6-9eeda6d8dec8-host-proc-sys-kernel\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.592907 kubelet[2725]: I0813 00:48:14.592838 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/49c764c6-0436-424c-94b6-9eeda6d8dec8-cilium-ipsec-secrets\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.592907 kubelet[2725]: I0813 00:48:14.592869 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgw2q\" (UniqueName: \"kubernetes.io/projected/49c764c6-0436-424c-94b6-9eeda6d8dec8-kube-api-access-qgw2q\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.592999 kubelet[2725]: I0813 00:48:14.592914 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49c764c6-0436-424c-94b6-9eeda6d8dec8-cilium-run\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.592999 kubelet[2725]: I0813 00:48:14.592934 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49c764c6-0436-424c-94b6-9eeda6d8dec8-cni-path\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.592999 kubelet[2725]: I0813 00:48:14.592948 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49c764c6-0436-424c-94b6-9eeda6d8dec8-etc-cni-netd\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.592999 kubelet[2725]: I0813 00:48:14.592965 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49c764c6-0436-424c-94b6-9eeda6d8dec8-host-proc-sys-net\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.593181 kubelet[2725]: I0813 00:48:14.593017 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49c764c6-0436-424c-94b6-9eeda6d8dec8-hostproc\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.593181 kubelet[2725]: I0813 00:48:14.593031 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49c764c6-0436-424c-94b6-9eeda6d8dec8-lib-modules\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.593181 kubelet[2725]: I0813 00:48:14.593066 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49c764c6-0436-424c-94b6-9eeda6d8dec8-hubble-tls\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.593181 kubelet[2725]: I0813 00:48:14.593102 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49c764c6-0436-424c-94b6-9eeda6d8dec8-bpf-maps\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.593181 kubelet[2725]: I0813 00:48:14.593119 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49c764c6-0436-424c-94b6-9eeda6d8dec8-xtables-lock\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.593181 kubelet[2725]: I0813 00:48:14.593152 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49c764c6-0436-424c-94b6-9eeda6d8dec8-cilium-config-path\") pod \"cilium-9xg4q\" (UID: \"49c764c6-0436-424c-94b6-9eeda6d8dec8\") " pod="kube-system/cilium-9xg4q" Aug 13 00:48:14.595024 systemd-logind[1500]: Session 28 logged out. Waiting for processes to exit. Aug 13 00:48:14.601628 systemd[1]: Started sshd@28-24.144.89.98:22-139.178.68.195:44368.service - OpenSSH per-connection server daemon (139.178.68.195:44368). Aug 13 00:48:14.603992 systemd-logind[1500]: Removed session 28. Aug 13 00:48:14.669340 sshd[4478]: Accepted publickey for core from 139.178.68.195 port 44368 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:48:14.671237 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:48:14.677036 systemd-logind[1500]: New session 29 of user core. Aug 13 00:48:14.687707 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 00:48:15.701122 kubelet[2725]: E0813 00:48:15.701049 2725 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Aug 13 00:48:15.701732 kubelet[2725]: E0813 00:48:15.701188 2725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49c764c6-0436-424c-94b6-9eeda6d8dec8-cilium-ipsec-secrets podName:49c764c6-0436-424c-94b6-9eeda6d8dec8 nodeName:}" failed. No retries permitted until 2025-08-13 00:48:16.201156569 +0000 UTC m=+102.643791918 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/49c764c6-0436-424c-94b6-9eeda6d8dec8-cilium-ipsec-secrets") pod "cilium-9xg4q" (UID: "49c764c6-0436-424c-94b6-9eeda6d8dec8") : failed to sync secret cache: timed out waiting for the condition Aug 13 00:48:16.267214 kubelet[2725]: E0813 00:48:16.266728 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:16.267537 containerd[1536]: time="2025-08-13T00:48:16.267469908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9xg4q,Uid:49c764c6-0436-424c-94b6-9eeda6d8dec8,Namespace:kube-system,Attempt:0,}" Aug 13 00:48:16.290099 containerd[1536]: time="2025-08-13T00:48:16.290026627Z" level=info msg="connecting to shim 2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82" address="unix:///run/containerd/s/044bf9f523a886038e2b666a267ca77d82f719533e65b0681fe6116ad5b3bf4d" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:48:16.333596 systemd[1]: Started cri-containerd-2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82.scope - libcontainer container 2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82. Aug 13 00:48:16.388145 containerd[1536]: time="2025-08-13T00:48:16.388086649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9xg4q,Uid:49c764c6-0436-424c-94b6-9eeda6d8dec8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\"" Aug 13 00:48:16.389806 kubelet[2725]: E0813 00:48:16.389760 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:16.396376 containerd[1536]: time="2025-08-13T00:48:16.395444587Z" level=info msg="CreateContainer within sandbox \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:48:16.424533 containerd[1536]: time="2025-08-13T00:48:16.422355592Z" level=info msg="Container 230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:16.451889 containerd[1536]: time="2025-08-13T00:48:16.451836216Z" level=info msg="CreateContainer within sandbox \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba\"" Aug 13 00:48:16.453858 containerd[1536]: time="2025-08-13T00:48:16.453130082Z" level=info msg="StartContainer for \"230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba\"" Aug 13 00:48:16.455526 containerd[1536]: time="2025-08-13T00:48:16.455482521Z" level=info msg="connecting to shim 230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba" address="unix:///run/containerd/s/044bf9f523a886038e2b666a267ca77d82f719533e65b0681fe6116ad5b3bf4d" protocol=ttrpc version=3 Aug 13 00:48:16.480638 systemd[1]: Started cri-containerd-230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba.scope - libcontainer container 230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba. 
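The cilium-ipsec-secrets mount above failed because the kubelet's secret cache had not synced (its earlier list of that secret was forbidden until the node-to-pod relationship was established), and the kubelet scheduled the retry 500 ms later instead of looping immediately; by 00:48:16.267 the retry had succeeded and the cilium-9xg4q sandbox was being created. A minimal sketch of that retry-with-growing-delay pattern, assuming a doubling factor and an illustrative cap (only the 500 ms starting delay comes from the log):

```python
import time

def retry_with_backoff(operation, initial_delay=0.5, factor=2.0, max_delay=120.0, attempts=6):
    """Retry `operation`, doubling the wait between attempts.

    Mirrors the 'No retries permitted until ... (durationBeforeRetry 500ms)'
    behaviour above; the doubling factor and the cap are illustrative assumptions."""
    delay = initial_delay
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception as err:
            print(f"attempt {attempt} failed: {err}; next retry in {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * factor, max_delay)
    raise TimeoutError("operation did not succeed within the retry budget")

# Illustrative use: a mount that fails once (secret cache not yet synced),
# then succeeds on the retry, like the cilium-ipsec-secrets volume above.
state = {"synced": False}

def mount_ipsec_secret():
    if not state["synced"]:
        state["synced"] = True
        raise RuntimeError("failed to sync secret cache: timed out waiting for the condition")
    return "mounted"

print(retry_with_backoff(mount_ipsec_secret))
```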
Aug 13 00:48:16.529796 containerd[1536]: time="2025-08-13T00:48:16.529521365Z" level=info msg="StartContainer for \"230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba\" returns successfully" Aug 13 00:48:16.550093 systemd[1]: cri-containerd-230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba.scope: Deactivated successfully. Aug 13 00:48:16.550877 systemd[1]: cri-containerd-230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba.scope: Consumed 36ms CPU time, 9.6M memory peak, 3.2M read from disk. Aug 13 00:48:16.557785 containerd[1536]: time="2025-08-13T00:48:16.557664125Z" level=info msg="received exit event container_id:\"230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba\" id:\"230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba\" pid:4550 exited_at:{seconds:1755046096 nanos:555733577}" Aug 13 00:48:16.558140 containerd[1536]: time="2025-08-13T00:48:16.557765640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba\" id:\"230648342220a8a16b8c9a5f22b04f1197bd3498bb6e71cf24905a931c46a6ba\" pid:4550 exited_at:{seconds:1755046096 nanos:555733577}" Aug 13 00:48:16.580333 kubelet[2725]: I0813 00:48:16.579675 2725 setters.go:600] "Node became not ready" node="ci-4372.1.0-a-9a72d3155b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:48:16Z","lastTransitionTime":"2025-08-13T00:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:48:17.391811 kubelet[2725]: E0813 00:48:17.391449 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:17.400980 containerd[1536]: time="2025-08-13T00:48:17.400861855Z" level=info msg="CreateContainer within sandbox \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:48:17.423370 containerd[1536]: time="2025-08-13T00:48:17.416962579Z" level=info msg="Container 24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:17.425037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805768547.mount: Deactivated successfully. 
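For reference, the fifteen volumes the kubelet attached to cilium-9xg4q a few entries earlier, grouped by the plugin named in each reconciler entry; a small illustrative snippet that tallies them (the mapping is transcribed from the log, the pod spec itself is not part of this journal):

```python
from collections import Counter

# Volume name -> plugin type for pod kube-system/cilium-9xg4q, transcribed from
# the reconciler_common entries above.
CILIUM_9XG4Q_VOLUMES = {
    "cilium-cgroup": "host-path",
    "clustermesh-secrets": "secret",
    "host-proc-sys-kernel": "host-path",
    "cilium-ipsec-secrets": "secret",
    "kube-api-access-qgw2q": "projected",
    "cilium-run": "host-path",
    "cni-path": "host-path",
    "etc-cni-netd": "host-path",
    "host-proc-sys-net": "host-path",
    "hostproc": "host-path",
    "lib-modules": "host-path",
    "hubble-tls": "projected",
    "bpf-maps": "host-path",
    "xtables-lock": "host-path",
    "cilium-config-path": "configmap",
}

print(Counter(CILIUM_9XG4Q_VOLUMES.values()))
# host-path: 10, secret: 2, projected: 2, configmap: 1
```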
Aug 13 00:48:17.444122 containerd[1536]: time="2025-08-13T00:48:17.444052473Z" level=info msg="CreateContainer within sandbox \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de\"" Aug 13 00:48:17.446993 containerd[1536]: time="2025-08-13T00:48:17.446866151Z" level=info msg="StartContainer for \"24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de\"" Aug 13 00:48:17.449883 containerd[1536]: time="2025-08-13T00:48:17.449753868Z" level=info msg="connecting to shim 24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de" address="unix:///run/containerd/s/044bf9f523a886038e2b666a267ca77d82f719533e65b0681fe6116ad5b3bf4d" protocol=ttrpc version=3 Aug 13 00:48:17.494589 systemd[1]: Started cri-containerd-24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de.scope - libcontainer container 24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de. Aug 13 00:48:17.536041 containerd[1536]: time="2025-08-13T00:48:17.535914243Z" level=info msg="StartContainer for \"24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de\" returns successfully" Aug 13 00:48:17.548713 systemd[1]: cri-containerd-24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de.scope: Deactivated successfully. Aug 13 00:48:17.549649 systemd[1]: cri-containerd-24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de.scope: Consumed 25ms CPU time, 7.6M memory peak, 2.2M read from disk. Aug 13 00:48:17.550884 containerd[1536]: time="2025-08-13T00:48:17.550774650Z" level=info msg="received exit event container_id:\"24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de\" id:\"24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de\" pid:4598 exited_at:{seconds:1755046097 nanos:549770601}" Aug 13 00:48:17.551947 containerd[1536]: time="2025-08-13T00:48:17.551906515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de\" id:\"24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de\" pid:4598 exited_at:{seconds:1755046097 nanos:549770601}" Aug 13 00:48:18.214384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24af12cd07d674d2738e1e73759d8313cf2b80424f113891a34e78bda07c61de-rootfs.mount: Deactivated successfully. Aug 13 00:48:18.396745 kubelet[2725]: E0813 00:48:18.396652 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:18.407253 containerd[1536]: time="2025-08-13T00:48:18.407205205Z" level=info msg="CreateContainer within sandbox \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:48:18.450499 containerd[1536]: time="2025-08-13T00:48:18.450371107Z" level=info msg="Container bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:18.458829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4113430552.mount: Deactivated successfully. 
Aug 13 00:48:18.472530 containerd[1536]: time="2025-08-13T00:48:18.471911091Z" level=info msg="CreateContainer within sandbox \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388\"" Aug 13 00:48:18.475447 containerd[1536]: time="2025-08-13T00:48:18.474770445Z" level=info msg="StartContainer for \"bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388\"" Aug 13 00:48:18.479460 containerd[1536]: time="2025-08-13T00:48:18.479213082Z" level=info msg="connecting to shim bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388" address="unix:///run/containerd/s/044bf9f523a886038e2b666a267ca77d82f719533e65b0681fe6116ad5b3bf4d" protocol=ttrpc version=3 Aug 13 00:48:18.511569 systemd[1]: Started cri-containerd-bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388.scope - libcontainer container bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388. Aug 13 00:48:18.580181 containerd[1536]: time="2025-08-13T00:48:18.580017886Z" level=info msg="StartContainer for \"bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388\" returns successfully" Aug 13 00:48:18.588208 systemd[1]: cri-containerd-bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388.scope: Deactivated successfully. Aug 13 00:48:18.591229 containerd[1536]: time="2025-08-13T00:48:18.591127825Z" level=info msg="received exit event container_id:\"bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388\" id:\"bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388\" pid:4642 exited_at:{seconds:1755046098 nanos:590700983}" Aug 13 00:48:18.591675 containerd[1536]: time="2025-08-13T00:48:18.591522898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388\" id:\"bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388\" pid:4642 exited_at:{seconds:1755046098 nanos:590700983}" Aug 13 00:48:18.623417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd3ecc485b39b6c476d403b2150de1c0a5a796cc57385b8f0291940a78f55388-rootfs.mount: Deactivated successfully. Aug 13 00:48:18.959280 kubelet[2725]: E0813 00:48:18.958401 2725 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:48:19.409149 kubelet[2725]: E0813 00:48:19.409098 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:19.414760 containerd[1536]: time="2025-08-13T00:48:19.414685437Z" level=info msg="CreateContainer within sandbox \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:48:19.428026 containerd[1536]: time="2025-08-13T00:48:19.426664764Z" level=info msg="Container a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:19.445453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630513154.mount: Deactivated successfully. 
Aug 13 00:48:19.466669 containerd[1536]: time="2025-08-13T00:48:19.466537822Z" level=info msg="CreateContainer within sandbox \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb\"" Aug 13 00:48:19.468589 containerd[1536]: time="2025-08-13T00:48:19.468486308Z" level=info msg="StartContainer for \"a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb\"" Aug 13 00:48:19.475269 containerd[1536]: time="2025-08-13T00:48:19.475114360Z" level=info msg="connecting to shim a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb" address="unix:///run/containerd/s/044bf9f523a886038e2b666a267ca77d82f719533e65b0681fe6116ad5b3bf4d" protocol=ttrpc version=3 Aug 13 00:48:19.524880 systemd[1]: Started cri-containerd-a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb.scope - libcontainer container a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb. Aug 13 00:48:19.576816 systemd[1]: cri-containerd-a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb.scope: Deactivated successfully. Aug 13 00:48:19.579911 containerd[1536]: time="2025-08-13T00:48:19.579594062Z" level=info msg="received exit event container_id:\"a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb\" id:\"a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb\" pid:4681 exited_at:{seconds:1755046099 nanos:579312087}" Aug 13 00:48:19.579911 containerd[1536]: time="2025-08-13T00:48:19.579869415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb\" id:\"a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb\" pid:4681 exited_at:{seconds:1755046099 nanos:579312087}" Aug 13 00:48:19.592333 containerd[1536]: time="2025-08-13T00:48:19.592248150Z" level=info msg="StartContainer for \"a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb\" returns successfully" Aug 13 00:48:19.617535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1fbef245e9385bb58e4df70739cbf09e71bb620b4cab820bf906ae9c3154edb-rootfs.mount: Deactivated successfully. 
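The four init steps of cilium-9xg4q (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) each exit roughly one second apart, and the containerd exit events carry wall-clock time as {seconds, nanos}. A short sketch converting those values, copied from the exit events above, back to UTC and printing the spacing between steps:

```python
from datetime import datetime, timezone

# exited_at values from the containerd TaskExit events above: (seconds, nanos).
exits = {
    "mount-cgroup":            (1755046096, 555733577),
    "apply-sysctl-overwrites": (1755046097, 549770601),
    "mount-bpf-fs":            (1755046098, 590700983),
    "clean-cilium-state":      (1755046099, 579312087),
}

prev = None
for step, (secs, nanos) in exits.items():
    t = secs + nanos / 1e9
    stamp = datetime.fromtimestamp(t, tz=timezone.utc).isoformat()
    gap = "" if prev is None else f"  (+{t - prev:.3f}s after the previous step)"
    print(f"{step:24s} exited at {stamp}{gap}")
    prev = t
```

The converted times line up with the journal timestamps (00:48:16.55 through 00:48:19.58), with each step finishing about a second after the one before it.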
Aug 13 00:48:20.418892 kubelet[2725]: E0813 00:48:20.418728 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:20.423102 containerd[1536]: time="2025-08-13T00:48:20.423035804Z" level=info msg="CreateContainer within sandbox \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:48:20.446326 containerd[1536]: time="2025-08-13T00:48:20.446145290Z" level=info msg="Container ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:20.463326 containerd[1536]: time="2025-08-13T00:48:20.463059248Z" level=info msg="CreateContainer within sandbox \"2fa1d0e4066f80015732796e6f5c6b255ff9f3dc98463ee1656c04cdda48ae82\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37\"" Aug 13 00:48:20.469572 containerd[1536]: time="2025-08-13T00:48:20.469494282Z" level=info msg="StartContainer for \"ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37\"" Aug 13 00:48:20.471308 containerd[1536]: time="2025-08-13T00:48:20.471143868Z" level=info msg="connecting to shim ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37" address="unix:///run/containerd/s/044bf9f523a886038e2b666a267ca77d82f719533e65b0681fe6116ad5b3bf4d" protocol=ttrpc version=3 Aug 13 00:48:20.501632 systemd[1]: Started cri-containerd-ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37.scope - libcontainer container ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37. Aug 13 00:48:20.552684 containerd[1536]: time="2025-08-13T00:48:20.552539607Z" level=info msg="StartContainer for \"ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37\" returns successfully" Aug 13 00:48:20.676859 containerd[1536]: time="2025-08-13T00:48:20.676724183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37\" id:\"ac82442e0d89c59e62659d829e7e409e2930ac8a431e36f78bfb5aefd93bac4a\" pid:4748 exited_at:{seconds:1755046100 nanos:673911388}" Aug 13 00:48:21.220340 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Aug 13 00:48:21.432789 kubelet[2725]: E0813 00:48:21.431600 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:21.457384 kubelet[2725]: I0813 00:48:21.457279 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9xg4q" podStartSLOduration=7.457255209 podStartE2EDuration="7.457255209s" podCreationTimestamp="2025-08-13 00:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:48:21.455152319 +0000 UTC m=+107.897787717" watchObservedRunningTime="2025-08-13 00:48:21.457255209 +0000 UTC m=+107.899890579" Aug 13 00:48:22.435242 kubelet[2725]: E0813 00:48:22.435154 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:22.606584 update_engine[1502]: I20250813 00:48:22.606477 1502 
libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:48:22.607990 update_engine[1502]: I20250813 00:48:22.607564 1502 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:48:22.607990 update_engine[1502]: I20250813 00:48:22.607928 1502 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:48:22.610711 update_engine[1502]: E20250813 00:48:22.610644 1502 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:48:22.611010 update_engine[1502]: I20250813 00:48:22.610982 1502 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 00:48:23.625702 containerd[1536]: time="2025-08-13T00:48:23.625252542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37\" id:\"c7de99081d3d8a17b35cb470cbbaf752fb9ec265aea7112e914a5c05a9c895ee\" pid:4909 exit_status:1 exited_at:{seconds:1755046103 nanos:624139796}" Aug 13 00:48:25.215100 systemd-networkd[1457]: lxc_health: Link UP Aug 13 00:48:25.216600 systemd-networkd[1457]: lxc_health: Gained carrier Aug 13 00:48:25.899633 containerd[1536]: time="2025-08-13T00:48:25.898193328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37\" id:\"a14022b08903717518e40950a645e9ba96878b8aec60e00cc9285ffc95f86c20\" pid:5273 exited_at:{seconds:1755046105 nanos:897754102}" Aug 13 00:48:26.272495 kubelet[2725]: E0813 00:48:26.272437 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:26.448782 kubelet[2725]: E0813 00:48:26.448730 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:26.511548 systemd-networkd[1457]: lxc_health: Gained IPv6LL Aug 13 00:48:27.451242 kubelet[2725]: E0813 00:48:27.451027 2725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:48:28.087976 containerd[1536]: time="2025-08-13T00:48:28.087875772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37\" id:\"04e72bce2f4d10a95e107c9e1537723c94bfe71dc596dc4f0d5f957cb0171a6b\" pid:5308 exited_at:{seconds:1755046108 nanos:87539020}" Aug 13 00:48:30.279326 containerd[1536]: time="2025-08-13T00:48:30.279235537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccd7d978b325f6dc329f9f81bd018baec21cc7d5238bc632d922e2ea524fea37\" id:\"de50ab9678194235a8700e62a48cc1daf556080df2dedfe5a16077c837251c3a\" pid:5335 exited_at:{seconds:1755046110 nanos:278672778}" Aug 13 00:48:30.296412 sshd[4480]: Connection closed by 139.178.68.195 port 44368 Aug 13 00:48:30.297613 sshd-session[4478]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:30.320594 systemd[1]: sshd@28-24.144.89.98:22-139.178.68.195:44368.service: Deactivated successfully. Aug 13 00:48:30.327194 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 00:48:30.334662 systemd-logind[1500]: Session 29 logged out. Waiting for processes to exit. Aug 13 00:48:30.336804 systemd-logind[1500]: Removed session 29. 
Aug 13 00:48:32.612578 update_engine[1502]: I20250813 00:48:32.611927 1502 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:48:32.615926 update_engine[1502]: I20250813 00:48:32.615792 1502 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:48:32.616263 update_engine[1502]: I20250813 00:48:32.616170 1502 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:48:32.617324 update_engine[1502]: E20250813 00:48:32.616685 1502 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:48:32.617324 update_engine[1502]: I20250813 00:48:32.616742 1502 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
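The failed Omaha probes are re-attempted on a steady cadence: the "Starting/Resuming transfer" entries for the three attempts begin at 00:48:12.659, 00:48:22.606 and 00:48:32.611, roughly ten seconds apart. A tiny sketch that makes the spacing explicit from those timestamps:

```python
from datetime import datetime

# "Starting/Resuming transfer" times for the three attempts, copied from the log.
starts = ["00:48:12.659222", "00:48:22.606477", "00:48:32.611927"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in starts]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # roughly ten seconds between consecutive attempts
```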