Apr 30 03:23:53.947652 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:23:53.947687 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:23:53.947705 kernel: BIOS-provided physical RAM map:
Apr 30 03:23:53.947716 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 03:23:53.947726 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 03:23:53.947736 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 03:23:53.947749 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Apr 30 03:23:53.947761 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Apr 30 03:23:53.947771 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 03:23:53.947785 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 03:23:53.947797 kernel: NX (Execute Disable) protection: active
Apr 30 03:23:53.947807 kernel: APIC: Static calls initialized
Apr 30 03:23:53.947824 kernel: SMBIOS 2.8 present.
Apr 30 03:23:53.947835 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Apr 30 03:23:53.947849 kernel: Hypervisor detected: KVM
Apr 30 03:23:53.947864 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:23:53.947881 kernel: kvm-clock: using sched offset of 3021143393 cycles
Apr 30 03:23:53.947894 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:23:53.947906 kernel: tsc: Detected 2494.138 MHz processor
Apr 30 03:23:53.949067 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:23:53.949085 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:23:53.949098 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Apr 30 03:23:53.949111 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 03:23:53.949124 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:23:53.949143 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:23:53.949156 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Apr 30 03:23:53.949169 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:53.949181 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:53.949194 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:53.949206 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 30 03:23:53.949218 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:53.949230 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:53.949243 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:53.949259 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:53.949271 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Apr 30 03:23:53.949283 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Apr 30 03:23:53.949296 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 30 03:23:53.949308 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Apr 30 03:23:53.949321 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Apr 30 03:23:53.949334 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Apr 30 03:23:53.949355 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Apr 30 03:23:53.949369 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:23:53.949382 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:23:53.949396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 30 03:23:53.949409 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Apr 30 03:23:53.949432 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Apr 30 03:23:53.949446 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Apr 30 03:23:53.949463 kernel: Zone ranges:
Apr 30 03:23:53.949478 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:23:53.949491 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Apr 30 03:23:53.949504 kernel: Normal empty
Apr 30 03:23:53.949517 kernel: Movable zone start for each node
Apr 30 03:23:53.949530 kernel: Early memory node ranges
Apr 30 03:23:53.949543 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 03:23:53.949557 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Apr 30 03:23:53.949570 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Apr 30 03:23:53.949587 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:23:53.949601 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 03:23:53.949617 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Apr 30 03:23:53.949630 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:23:53.949643 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:23:53.949657 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:23:53.949670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:23:53.949684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:23:53.949697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:23:53.949715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:23:53.949728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:23:53.949741 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:23:53.949754 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:23:53.949767 kernel: TSC deadline timer available
Apr 30 03:23:53.949780 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:23:53.949793 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:23:53.949806 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Apr 30 03:23:53.949824 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:23:53.949838 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:23:53.949855 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:23:53.949867 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:23:53.949881 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:23:53.949894 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:23:53.949907 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 03:23:53.949936 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:23:53.949950 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:23:53.949963 kernel: random: crng init done
Apr 30 03:23:53.949980 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:23:53.949994 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:23:53.950008 kernel: Fallback order for Node 0: 0
Apr 30 03:23:53.950021 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Apr 30 03:23:53.950035 kernel: Policy zone: DMA32
Apr 30 03:23:53.950049 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:23:53.950063 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 125152K reserved, 0K cma-reserved)
Apr 30 03:23:53.950077 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:23:53.950094 kernel: Kernel/User page tables isolation: enabled
Apr 30 03:23:53.950108 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:23:53.950122 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:23:53.950136 kernel: Dynamic Preempt: voluntary
Apr 30 03:23:53.950149 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:23:53.950170 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:23:53.950184 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:23:53.950198 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:23:53.950212 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:23:53.950226 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:23:53.950243 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:23:53.950257 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:23:53.950270 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:23:53.950284 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:23:53.950301 kernel: Console: colour VGA+ 80x25
Apr 30 03:23:53.950315 kernel: printk: console [tty0] enabled
Apr 30 03:23:53.950328 kernel: printk: console [ttyS0] enabled
Apr 30 03:23:53.950342 kernel: ACPI: Core revision 20230628
Apr 30 03:23:53.950355 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:23:53.950373 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:23:53.950387 kernel: x2apic enabled
Apr 30 03:23:53.950400 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:23:53.950413 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:23:53.950427 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Apr 30 03:23:53.950441 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Apr 30 03:23:53.950454 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 30 03:23:53.950468 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 30 03:23:53.950498 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:23:53.950513 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:23:53.950527 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:23:53.950545 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:23:53.950559 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 30 03:23:53.950573 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:23:53.950587 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:23:53.950602 kernel: MDS: Mitigation: Clear CPU buffers
Apr 30 03:23:53.950616 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:23:53.950637 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:23:53.950652 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:23:53.950667 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:23:53.950681 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:23:53.950696 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 30 03:23:53.950710 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:23:53.950724 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:23:53.950739 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:23:53.950756 kernel: landlock: Up and running.
Apr 30 03:23:53.950765 kernel: SELinux: Initializing.
Apr 30 03:23:53.950774 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:23:53.950783 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:23:53.950792 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Apr 30 03:23:53.950801 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:23:53.950810 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:23:53.950819 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:23:53.950828 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Apr 30 03:23:53.950840 kernel: signal: max sigframe size: 1776
Apr 30 03:23:53.950869 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:23:53.950882 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:23:53.950894 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:23:53.950907 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:23:53.953973 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:23:53.954007 kernel: .... node #0, CPUs: #1
Apr 30 03:23:53.954017 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:23:53.954037 kernel: smpboot: Max logical packages: 1
Apr 30 03:23:53.954053 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Apr 30 03:23:53.954067 kernel: devtmpfs: initialized
Apr 30 03:23:53.954082 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:23:53.954095 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:23:53.954107 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:23:53.954120 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:23:53.954135 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:23:53.954161 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:23:53.954173 kernel: audit: type=2000 audit(1745983433.193:1): state=initialized audit_enabled=0 res=1
Apr 30 03:23:53.954190 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:23:53.954203 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:23:53.954215 kernel: cpuidle: using governor menu
Apr 30 03:23:53.954227 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:23:53.954239 kernel: dca service started, version 1.12.1
Apr 30 03:23:53.954252 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:23:53.954264 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:23:53.954277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:23:53.954291 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:23:53.954310 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:23:53.954325 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:23:53.954338 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:23:53.954351 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:23:53.954365 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:23:53.954378 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:23:53.954391 kernel: ACPI: Interpreter enabled
Apr 30 03:23:53.954404 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:23:53.954417 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:23:53.954430 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:23:53.954440 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:23:53.954449 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 03:23:53.954458 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:23:53.954719 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:23:53.954866 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 03:23:53.955046 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 03:23:53.955072 kernel: acpiphp: Slot [3] registered
Apr 30 03:23:53.955087 kernel: acpiphp: Slot [4] registered
Apr 30 03:23:53.955100 kernel: acpiphp: Slot [5] registered
Apr 30 03:23:53.955110 kernel: acpiphp: Slot [6] registered
Apr 30 03:23:53.955119 kernel: acpiphp: Slot [7] registered
Apr 30 03:23:53.955128 kernel: acpiphp: Slot [8] registered
Apr 30 03:23:53.955137 kernel: acpiphp: Slot [9] registered
Apr 30 03:23:53.955147 kernel: acpiphp: Slot [10] registered
Apr 30 03:23:53.955156 kernel: acpiphp: Slot [11] registered
Apr 30 03:23:53.955165 kernel: acpiphp: Slot [12] registered
Apr 30 03:23:53.955178 kernel: acpiphp: Slot [13] registered
Apr 30 03:23:53.955187 kernel: acpiphp: Slot [14] registered
Apr 30 03:23:53.955196 kernel: acpiphp: Slot [15] registered
Apr 30 03:23:53.955205 kernel: acpiphp: Slot [16] registered
Apr 30 03:23:53.955214 kernel: acpiphp: Slot [17] registered
Apr 30 03:23:53.955223 kernel: acpiphp: Slot [18] registered
Apr 30 03:23:53.955231 kernel: acpiphp: Slot [19] registered
Apr 30 03:23:53.955240 kernel: acpiphp: Slot [20] registered
Apr 30 03:23:53.955249 kernel: acpiphp: Slot [21] registered
Apr 30 03:23:53.955261 kernel: acpiphp: Slot [22] registered
Apr 30 03:23:53.955270 kernel: acpiphp: Slot [23] registered
Apr 30 03:23:53.955279 kernel: acpiphp: Slot [24] registered
Apr 30 03:23:53.955288 kernel: acpiphp: Slot [25] registered
Apr 30 03:23:53.955296 kernel: acpiphp: Slot [26] registered
Apr 30 03:23:53.955305 kernel: acpiphp: Slot [27] registered
Apr 30 03:23:53.955314 kernel: acpiphp: Slot [28] registered
Apr 30 03:23:53.955322 kernel: acpiphp: Slot [29] registered
Apr 30 03:23:53.955331 kernel: acpiphp: Slot [30] registered
Apr 30 03:23:53.955340 kernel: acpiphp: Slot [31] registered
Apr 30 03:23:53.955352 kernel: PCI host bridge to bus 0000:00
Apr 30 03:23:53.955483 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:23:53.955578 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:23:53.955704 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:23:53.955836 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 03:23:53.958063 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Apr 30 03:23:53.958184 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:23:53.958351 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 03:23:53.958465 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 03:23:53.958589 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Apr 30 03:23:53.958689 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Apr 30 03:23:53.958791 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Apr 30 03:23:53.958939 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Apr 30 03:23:53.959099 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Apr 30 03:23:53.959227 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Apr 30 03:23:53.959371 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Apr 30 03:23:53.959480 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Apr 30 03:23:53.959611 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 30 03:23:53.959730 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Apr 30 03:23:53.959911 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Apr 30 03:23:53.961226 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Apr 30 03:23:53.961379 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Apr 30 03:23:53.961489 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Apr 30 03:23:53.961594 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Apr 30 03:23:53.961694 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Apr 30 03:23:53.961791 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:23:53.961917 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:23:53.963131 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Apr 30 03:23:53.963264 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Apr 30 03:23:53.963364 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Apr 30 03:23:53.963481 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:23:53.963597 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Apr 30 03:23:53.963699 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Apr 30 03:23:53.963805 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Apr 30 03:23:53.965993 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Apr 30 03:23:53.966154 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Apr 30 03:23:53.966254 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Apr 30 03:23:53.966352 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Apr 30 03:23:53.966471 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:23:53.966585 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Apr 30 03:23:53.966709 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Apr 30 03:23:53.966808 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Apr 30 03:23:53.967003 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:23:53.967128 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Apr 30 03:23:53.967246 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Apr 30 03:23:53.967373 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Apr 30 03:23:53.967536 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Apr 30 03:23:53.967698 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Apr 30 03:23:53.967844 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Apr 30 03:23:53.967863 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:23:53.967879 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:23:53.967893 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:23:53.967908 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:23:53.970868 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 03:23:53.970934 kernel: iommu: Default domain type: Translated
Apr 30 03:23:53.970949 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:23:53.970964 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:23:53.970978 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:23:53.970993 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 03:23:53.971007 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Apr 30 03:23:53.971233 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Apr 30 03:23:53.971386 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Apr 30 03:23:53.971543 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:23:53.971563 kernel: vgaarb: loaded
Apr 30 03:23:53.971578 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:23:53.971593 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:23:53.971608 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:23:53.971623 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:23:53.971638 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:23:53.971652 kernel: pnp: PnP ACPI init
Apr 30 03:23:53.971667 kernel: pnp: PnP ACPI: found 4 devices
Apr 30 03:23:53.971687 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:23:53.971702 kernel: NET: Registered PF_INET protocol family
Apr 30 03:23:53.971717 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:23:53.971731 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:23:53.971746 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:23:53.971761 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:23:53.971777 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:23:53.971791 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:23:53.971805 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:23:53.971823 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:23:53.971838 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:23:53.971863 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:23:53.972056 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:23:53.972184 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:23:53.972312 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:23:53.972441 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 03:23:53.972565 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Apr 30 03:23:53.972725 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Apr 30 03:23:53.972877 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 03:23:53.972899 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 30 03:23:53.975761 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 31998 usecs
Apr 30 03:23:53.975803 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:23:53.975820 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:23:53.975835 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Apr 30 03:23:53.975850 kernel: Initialise system trusted keyrings
Apr 30 03:23:53.975876 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:23:53.975890 kernel: Key type asymmetric registered
Apr 30 03:23:53.975905 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:23:53.975918 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:23:53.975961 kernel: io scheduler mq-deadline registered
Apr 30 03:23:53.975970 kernel: io scheduler kyber registered
Apr 30 03:23:53.975979 kernel: io scheduler bfq registered
Apr 30 03:23:53.975988 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:23:53.975999 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Apr 30 03:23:53.976008 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 30 03:23:53.976020 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 30 03:23:53.976029 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:23:53.976038 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:23:53.976047 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:23:53.976060 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:23:53.976071 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:23:53.976230 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 30 03:23:53.976250 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:23:53.976362 kernel: rtc_cmos 00:03: registered as rtc0
Apr 30 03:23:53.976455 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T03:23:53 UTC (1745983433)
Apr 30 03:23:53.976541 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Apr 30 03:23:53.976552 kernel: intel_pstate: CPU model not supported
Apr 30 03:23:53.976561 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:23:53.976570 kernel: Segment Routing with IPv6
Apr 30 03:23:53.976579 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:23:53.976587 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:23:53.976601 kernel: Key type dns_resolver registered
Apr 30 03:23:53.976609 kernel: IPI shorthand broadcast: enabled
Apr 30 03:23:53.976618 kernel: sched_clock: Marking stable (865005478, 84979009)->(1052797833, -102813346)
Apr 30 03:23:53.976627 kernel: registered taskstats version 1
Apr 30 03:23:53.976636 kernel: Loading compiled-in X.509 certificates
Apr 30 03:23:53.976645 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:23:53.976653 kernel: Key type .fscrypt registered
Apr 30 03:23:53.976662 kernel: Key type fscrypt-provisioning registered
Apr 30 03:23:53.976670 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:23:53.976682 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:23:53.976691 kernel: ima: No architecture policies found
Apr 30 03:23:53.976700 kernel: clk: Disabling unused clocks
Apr 30 03:23:53.976709 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:23:53.976718 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:23:53.976745 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:23:53.976757 kernel: Run /init as init process
Apr 30 03:23:53.976767 kernel: with arguments:
Apr 30 03:23:53.976776 kernel: /init
Apr 30 03:23:53.976788 kernel: with environment:
Apr 30 03:23:53.976797 kernel: HOME=/
Apr 30 03:23:53.976806 kernel: TERM=linux
Apr 30 03:23:53.976815 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:23:53.976830 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:23:53.976841 systemd[1]: Detected virtualization kvm.
Apr 30 03:23:53.976851 systemd[1]: Detected architecture x86-64.
Apr 30 03:23:53.976861 systemd[1]: Running in initrd.
Apr 30 03:23:53.976873 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:23:53.976882 systemd[1]: Hostname set to .
Apr 30 03:23:53.976892 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:23:53.976902 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:23:53.976912 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:23:53.976956 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:23:53.976966 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:23:53.976976 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:23:53.976990 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:23:53.976999 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:23:53.977011 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:23:53.977020 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:23:53.977030 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:23:53.977040 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:23:53.977053 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:23:53.977063 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:23:53.977073 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:23:53.977086 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:23:53.977096 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:23:53.977106 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:23:53.977118 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:23:53.977128 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:23:53.977138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:23:53.977148 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:23:53.977157 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:23:53.977167 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:23:53.977177 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:23:53.977187 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:23:53.977199 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:23:53.977209 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:23:53.977219 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:23:53.977228 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:23:53.977238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:23:53.977248 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:23:53.977285 systemd-journald[183]: Collecting audit messages is disabled.
Apr 30 03:23:53.977312 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:23:53.977322 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:23:53.977332 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:23:53.977347 systemd-journald[183]: Journal started
Apr 30 03:23:53.977369 systemd-journald[183]: Runtime Journal (/run/log/journal/f5709ec2d47e4670851046a4e7ea220b) is 4.9M, max 39.3M, 34.4M free.
Apr 30 03:23:53.981672 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:23:53.979887 systemd-modules-load[184]: Inserted module 'overlay'
Apr 30 03:23:54.003549 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:23:54.013953 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:23:54.015307 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:23:54.018158 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:23:54.019791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:54.022658 kernel: Bridge firewalling registered
Apr 30 03:23:54.020954 systemd-modules-load[184]: Inserted module 'br_netfilter'
Apr 30 03:23:54.023727 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:23:54.033563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:23:54.039265 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:23:54.039997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:23:54.047360 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:23:54.061151 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:23:54.066246 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:23:54.068326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:23:54.078505 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:23:54.098421 dracut-cmdline[215]: dracut-dracut-053 Apr 30 03:23:54.107956 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:23:54.120078 systemd-resolved[218]: Positive Trust Anchors: Apr 30 03:23:54.121005 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:23:54.121769 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:23:54.128882 systemd-resolved[218]: Defaulting to hostname 'linux'. Apr 30 03:23:54.131438 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:23:54.132987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:23:54.193981 kernel: SCSI subsystem initialized Apr 30 03:23:54.203988 kernel: Loading iSCSI transport class v2.0-870. 
Apr 30 03:23:54.216959 kernel: iscsi: registered transport (tcp) Apr 30 03:23:54.241274 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:23:54.241384 kernel: QLogic iSCSI HBA Driver Apr 30 03:23:54.304852 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:23:54.315509 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:23:54.350101 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:23:54.350204 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:23:54.350220 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:23:54.406997 kernel: raid6: avx2x4 gen() 14756 MB/s Apr 30 03:23:54.423979 kernel: raid6: avx2x2 gen() 14736 MB/s Apr 30 03:23:54.440976 kernel: raid6: avx2x1 gen() 12351 MB/s Apr 30 03:23:54.441072 kernel: raid6: using algorithm avx2x4 gen() 14756 MB/s Apr 30 03:23:54.459135 kernel: raid6: .... xor() 6173 MB/s, rmw enabled Apr 30 03:23:54.459217 kernel: raid6: using avx2x2 recovery algorithm Apr 30 03:23:54.491970 kernel: xor: automatically using best checksumming function avx Apr 30 03:23:54.738983 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:23:54.754793 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:23:54.763275 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:23:54.786677 systemd-udevd[401]: Using default interface naming scheme 'v255'. Apr 30 03:23:54.792275 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:23:54.801537 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:23:54.824950 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Apr 30 03:23:54.868788 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 30 03:23:54.874173 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:23:54.955792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:23:54.966301 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:23:54.989141 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:23:54.990674 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:23:54.994174 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:23:54.995279 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:23:55.001682 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:23:55.028253 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:23:55.058969 kernel: libata version 3.00 loaded. Apr 30 03:23:55.062173 kernel: ata_piix 0000:00:01.1: version 2.13 Apr 30 03:23:55.086422 kernel: scsi host0: ata_piix Apr 30 03:23:55.086654 kernel: scsi host1: ata_piix Apr 30 03:23:55.086786 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Apr 30 03:23:55.086801 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Apr 30 03:23:55.086813 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Apr 30 03:23:55.115121 kernel: scsi host2: Virtio SCSI HBA Apr 30 03:23:55.115304 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:23:55.115318 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Apr 30 03:23:55.115429 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:23:55.115443 kernel: GPT:9289727 != 125829119 Apr 30 03:23:55.115454 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:23:55.115465 kernel: GPT:9289727 != 125829119 Apr 30 03:23:55.115476 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 30 03:23:55.115487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:23:55.115503 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Apr 30 03:23:55.119275 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Apr 30 03:23:55.120901 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:23:55.121063 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:23:55.123479 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:23:55.124689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:23:55.124992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:23:55.126278 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:23:55.131398 kernel: ACPI: bus type USB registered Apr 30 03:23:55.131471 kernel: usbcore: registered new interface driver usbfs Apr 30 03:23:55.131494 kernel: usbcore: registered new interface driver hub Apr 30 03:23:55.133455 kernel: usbcore: registered new device driver usb Apr 30 03:23:55.138513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:23:55.188553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:23:55.196328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:23:55.222693 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:23:55.284104 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 30 03:23:55.303474 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (452) Apr 30 03:23:55.306646 kernel: AES CTR mode by8 optimization enabled Apr 30 03:23:55.306722 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (443) Apr 30 03:23:55.334505 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 03:23:55.346383 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 03:23:55.357807 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 03:23:55.364902 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 03:23:55.365630 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 03:23:55.376367 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:23:55.382897 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Apr 30 03:23:55.385335 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Apr 30 03:23:55.385525 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Apr 30 03:23:55.385673 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Apr 30 03:23:55.385799 kernel: hub 1-0:1.0: USB hub found Apr 30 03:23:55.386020 kernel: hub 1-0:1.0: 2 ports detected Apr 30 03:23:55.395512 disk-uuid[546]: Primary Header is updated. Apr 30 03:23:55.395512 disk-uuid[546]: Secondary Entries is updated. Apr 30 03:23:55.395512 disk-uuid[546]: Secondary Header is updated. 
Apr 30 03:23:55.406982 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:23:55.426041 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:23:56.433023 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:23:56.434360 disk-uuid[547]: The operation has completed successfully. Apr 30 03:23:56.491097 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:23:56.491226 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:23:56.501206 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:23:56.509783 sh[558]: Success Apr 30 03:23:56.531334 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:23:56.594956 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:23:56.597677 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:23:56.605256 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:23:56.626967 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:23:56.627062 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:23:56.627078 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:23:56.628025 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:23:56.629334 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:23:56.638717 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:23:56.640264 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:23:56.650273 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:23:56.652645 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 30 03:23:56.670684 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:23:56.670778 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:23:56.670819 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:23:56.678974 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:23:56.689308 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:23:56.690988 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:23:56.699974 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:23:56.707305 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:23:56.836505 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:23:56.849305 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:23:56.889384 ignition[644]: Ignition 2.19.0 Apr 30 03:23:56.889398 ignition[644]: Stage: fetch-offline Apr 30 03:23:56.889440 ignition[644]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:23:56.889450 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:23:56.891458 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 30 03:23:56.889578 ignition[644]: parsed url from cmdline: "" Apr 30 03:23:56.889582 ignition[644]: no config URL provided Apr 30 03:23:56.889588 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:23:56.889597 ignition[644]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:23:56.889603 ignition[644]: failed to fetch config: resource requires networking Apr 30 03:23:56.890033 ignition[644]: Ignition finished successfully Apr 30 03:23:56.897513 systemd-networkd[744]: lo: Link UP Apr 30 03:23:56.897529 systemd-networkd[744]: lo: Gained carrier Apr 30 03:23:56.901326 systemd-networkd[744]: Enumeration completed Apr 30 03:23:56.901521 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:23:56.901911 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Apr 30 03:23:56.901917 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Apr 30 03:23:56.902117 systemd[1]: Reached target network.target - Network. Apr 30 03:23:56.903648 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:23:56.903654 systemd-networkd[744]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:23:56.907437 systemd-networkd[744]: eth0: Link UP Apr 30 03:23:56.907442 systemd-networkd[744]: eth0: Gained carrier Apr 30 03:23:56.907462 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. 
Apr 30 03:23:56.910486 systemd-networkd[744]: eth1: Link UP Apr 30 03:23:56.910490 systemd-networkd[744]: eth1: Gained carrier Apr 30 03:23:56.910506 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:23:56.912237 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 03:23:56.925131 systemd-networkd[744]: eth0: DHCPv4 address 209.38.154.103/19, gateway 209.38.128.1 acquired from 169.254.169.253 Apr 30 03:23:56.931123 systemd-networkd[744]: eth1: DHCPv4 address 10.124.0.25/20 acquired from 169.254.169.253 Apr 30 03:23:56.944520 ignition[752]: Ignition 2.19.0 Apr 30 03:23:56.944552 ignition[752]: Stage: fetch Apr 30 03:23:56.944983 ignition[752]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:23:56.945000 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:23:56.945123 ignition[752]: parsed url from cmdline: "" Apr 30 03:23:56.945127 ignition[752]: no config URL provided Apr 30 03:23:56.945133 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:23:56.945143 ignition[752]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:23:56.945171 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Apr 30 03:23:56.977348 ignition[752]: GET result: OK Apr 30 03:23:56.977479 ignition[752]: parsing config with SHA512: 0b9f51bf463bb0028e5725a62bc8109d47c2d49161aa04a0ecb37a4270b279438568f0fe7040d41f084d936fd9d928dbc782849c1e14cbf364b579c91c745ec7 Apr 30 03:23:56.982761 unknown[752]: fetched base config from "system" Apr 30 03:23:56.983386 ignition[752]: fetch: fetch complete Apr 30 03:23:56.982781 unknown[752]: fetched base config from "system" Apr 30 03:23:56.983393 ignition[752]: fetch: fetch passed Apr 30 03:23:56.982789 unknown[752]: fetched user config from "digitalocean" Apr 30 03:23:56.983462 ignition[752]: Ignition finished successfully
Apr 30 03:23:56.986443 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 03:23:56.991321 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 03:23:57.015834 ignition[760]: Ignition 2.19.0 Apr 30 03:23:57.015846 ignition[760]: Stage: kargs Apr 30 03:23:57.016128 ignition[760]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:23:57.016141 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:23:57.018823 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:23:57.017090 ignition[760]: kargs: kargs passed Apr 30 03:23:57.017157 ignition[760]: Ignition finished successfully Apr 30 03:23:57.027274 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:23:57.050555 ignition[766]: Ignition 2.19.0 Apr 30 03:23:57.050567 ignition[766]: Stage: disks Apr 30 03:23:57.050805 ignition[766]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:23:57.050820 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:23:57.054087 ignition[766]: disks: disks passed Apr 30 03:23:57.054205 ignition[766]: Ignition finished successfully Apr 30 03:23:57.055660 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:23:57.059548 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:23:57.060188 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:23:57.061031 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:23:57.061718 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:23:57.062407 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:23:57.070222 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:23:57.114061 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 03:23:57.116947 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:23:57.125449 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 03:23:57.231293 kernel: EXT4-fs (vda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:23:57.232139 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:23:57.233365 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:23:57.244182 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:23:57.248097 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:23:57.250594 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Apr 30 03:23:57.259989 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (783) Apr 30 03:23:57.259503 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 03:23:57.260054 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:23:57.260095 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:23:57.266079 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:23:57.266134 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:23:57.267010 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:23:57.271220 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:23:57.275975 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:23:57.281324 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 03:23:57.286499 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:23:57.356959 coreos-metadata[786]: Apr 30 03:23:57.356 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:23:57.361995 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:23:57.368083 coreos-metadata[785]: Apr 30 03:23:57.367 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:23:57.370344 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:23:57.373247 coreos-metadata[786]: Apr 30 03:23:57.372 INFO Fetch successful Apr 30 03:23:57.377749 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:23:57.379838 coreos-metadata[786]: Apr 30 03:23:57.378 INFO wrote hostname ci-4081.3.3-c-cb9001cac8 to /sysroot/etc/hostname Apr 30 03:23:57.381777 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:23:57.384444 coreos-metadata[785]: Apr 30 03:23:57.383 INFO Fetch successful Apr 30 03:23:57.388469 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:23:57.390732 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Apr 30 03:23:57.390980 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Apr 30 03:23:57.501147 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:23:57.507190 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:23:57.523322 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:23:57.534951 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:23:57.557281 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 30 03:23:57.565410 ignition[905]: INFO : Ignition 2.19.0 Apr 30 03:23:57.566977 ignition[905]: INFO : Stage: mount Apr 30 03:23:57.567680 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:23:57.568157 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:23:57.569558 ignition[905]: INFO : mount: mount passed Apr 30 03:23:57.570028 ignition[905]: INFO : Ignition finished successfully Apr 30 03:23:57.571075 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:23:57.577151 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:23:57.625200 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:23:57.633237 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:23:57.645196 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (916) Apr 30 03:23:57.648464 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:23:57.648564 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:23:57.648588 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:23:57.652978 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:23:57.656836 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 03:23:57.687093 ignition[932]: INFO : Ignition 2.19.0 Apr 30 03:23:57.687918 ignition[932]: INFO : Stage: files Apr 30 03:23:57.688611 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:23:57.689989 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:23:57.691100 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:23:57.693081 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:23:57.693872 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:23:57.699067 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:23:57.700171 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:23:57.701518 unknown[932]: wrote ssh authorized keys file for user: core Apr 30 03:23:57.702441 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:23:57.703919 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:23:57.704758 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:23:57.753170 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:23:57.854965 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:23:57.856080 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 03:23:57.856080 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 03:23:58.098203 systemd-networkd[744]: eth1: Gained IPv6LL
Apr 30 03:23:58.420547 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 03:23:58.486988 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 03:23:58.486988 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 03:23:58.488861 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Apr 30 03:23:58.802338 systemd-networkd[744]: eth0: Gained IPv6LL
Apr 30 03:23:58.866477 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 03:23:59.137760 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 03:23:59.137760 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 03:23:59.139442 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:23:59.139442 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:23:59.139442 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 03:23:59.139442 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:23:59.139442 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:23:59.139442 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:23:59.143345 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:23:59.143345 ignition[932]: INFO : files: files passed
Apr 30 03:23:59.143345 ignition[932]: INFO : Ignition finished successfully
Apr 30 03:23:59.141054 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:23:59.147234 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:23:59.156181 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:23:59.160885 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:23:59.161564 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:23:59.170718 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:23:59.170718 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:23:59.174093 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:23:59.176711 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:23:59.177869 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:23:59.184247 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:23:59.225342 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:23:59.225479 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:23:59.227033 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
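The op(4)-op(f) sequence above is Ignition executing the `storage` and `systemd` sections of the provider-supplied config. That config is not shown in the log; as a hedged illustration only, a hypothetical Ignition spec-3.x fragment that would produce file, link, and unit writes like these (paths taken from the log, file contents elided) could look roughly like:

```json
{
  "ignition": { "version": "3.3.0" },
  "storage": {
    "files": [
      {
        "path": "/home/core/nginx.yaml",
        "mode": 420,
        "contents": { "source": "data:,..." }
      },
      {
        "path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
        "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw" }
      }
    ],
    "links": [
      {
        "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
        "hard": false
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true }
    ]
  }
}
```

Ignition runs in the initramfs and writes into the mounted root-to-be, which is why every path in the log carries the `/sysroot` prefix that the config itself omits.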
Apr 30 03:23:59.227905 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:23:59.228394 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:23:59.234263 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:23:59.265246 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:23:59.278436 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:23:59.294638 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:23:59.295404 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:23:59.296423 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:23:59.297288 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:23:59.297588 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:23:59.299067 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:23:59.300090 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:23:59.301009 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:23:59.301656 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:23:59.302773 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:23:59.303803 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:23:59.304665 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:23:59.305622 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:23:59.306573 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:23:59.307487 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:23:59.308207 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:23:59.308491 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:23:59.309935 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:23:59.310589 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:23:59.311546 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:23:59.311754 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:23:59.312568 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:23:59.312802 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:23:59.314334 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:23:59.314565 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:23:59.315530 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:23:59.315720 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:23:59.316441 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:23:59.316617 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:23:59.324452 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:23:59.325787 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:23:59.327017 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:23:59.331256 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:23:59.331738 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:23:59.331893 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:23:59.332396 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:23:59.332563 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:23:59.344591 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:23:59.345201 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:23:59.353444 ignition[985]: INFO : Ignition 2.19.0
Apr 30 03:23:59.353444 ignition[985]: INFO : Stage: umount
Apr 30 03:23:59.360380 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:59.360380 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:23:59.360380 ignition[985]: INFO : umount: umount passed
Apr 30 03:23:59.360380 ignition[985]: INFO : Ignition finished successfully
Apr 30 03:23:59.359954 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:23:59.360118 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:23:59.361901 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:23:59.362044 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:23:59.362483 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:23:59.364907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:23:59.366358 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:23:59.366464 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:23:59.369795 systemd[1]: Stopped target network.target - Network.
Apr 30 03:23:59.370198 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:23:59.370316 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:23:59.371318 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:23:59.371735 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:23:59.372330 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:23:59.373016 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:23:59.373895 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:23:59.374535 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:23:59.374616 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:23:59.375488 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:23:59.375766 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:23:59.376202 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:23:59.376268 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:23:59.376679 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:23:59.376728 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:23:59.379660 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:23:59.381018 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:23:59.403755 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:23:59.404549 systemd-networkd[744]: eth1: DHCPv6 lease lost
Apr 30 03:23:59.411383 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:23:59.411584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:23:59.414913 systemd-networkd[744]: eth0: DHCPv6 lease lost
Apr 30 03:23:59.420760 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:23:59.421480 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:23:59.423355 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:23:59.424042 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:23:59.426164 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:23:59.426223 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:23:59.427177 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:23:59.427240 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:23:59.434153 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:23:59.434581 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:23:59.434664 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:23:59.437074 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:23:59.437168 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:23:59.437714 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:23:59.437788 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:23:59.438451 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:23:59.438500 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:23:59.439541 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:23:59.454816 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:23:59.455095 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:23:59.460108 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:23:59.460346 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:23:59.461315 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:23:59.461378 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:23:59.462063 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:23:59.462107 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:23:59.463187 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:23:59.463251 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:23:59.464971 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:23:59.465089 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:23:59.466117 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:23:59.466193 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:23:59.475299 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:23:59.476517 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:23:59.476626 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:23:59.479605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:23:59.479700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:59.488349 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:23:59.488536 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:23:59.490538 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:23:59.499282 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:23:59.510270 systemd[1]: Switching root.
Apr 30 03:23:59.551361 systemd-journald[183]: Journal stopped
Apr 30 03:24:00.861013 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:24:00.861161 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:24:00.861187 kernel: SELinux: policy capability open_perms=1
Apr 30 03:24:00.861208 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:24:00.861227 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:24:00.861250 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:24:00.861274 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:24:00.861293 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:24:00.861313 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:24:00.861333 kernel: audit: type=1403 audit(1745983439.715:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:24:00.861368 systemd[1]: Successfully loaded SELinux policy in 49.954ms.
Apr 30 03:24:00.861406 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.394ms.
Apr 30 03:24:00.861428 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:24:00.861453 systemd[1]: Detected virtualization kvm.
Apr 30 03:24:00.861478 systemd[1]: Detected architecture x86-64.
Apr 30 03:24:00.861498 systemd[1]: Detected first boot.
Apr 30 03:24:00.861519 systemd[1]: Hostname set to .
Apr 30 03:24:00.861550 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:24:00.861571 zram_generator::config[1035]: No configuration found.
Apr 30 03:24:00.861594 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:24:00.861616 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:24:00.861642 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:24:00.861665 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:24:00.861688 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:24:00.861709 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:24:00.861730 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:24:00.861752 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:24:00.861773 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:24:00.861796 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:24:00.861817 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:24:00.861844 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:24:00.861865 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:24:00.861887 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:24:00.861908 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:24:00.862977 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:24:00.863035 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:24:00.863060 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:24:00.863084 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:24:00.863107 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:24:00.863146 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:24:00.863168 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:24:00.863191 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:24:00.863212 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:24:00.863235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:24:00.863257 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:24:00.863284 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:24:00.863306 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:24:00.863328 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:24:00.863349 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:24:00.863371 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:24:00.863394 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:24:00.863415 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:24:00.863445 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:24:00.863466 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:24:00.863499 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:24:00.863531 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:24:00.863551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:24:00.863572 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:24:00.863592 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:24:00.863613 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:24:00.863634 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:24:00.863654 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:24:00.863675 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:24:00.863700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:24:00.863721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:24:00.863742 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:24:00.863763 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:24:00.863784 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:24:00.863805 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:24:00.863826 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:24:00.863848 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:24:00.863880 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:24:00.863901 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:24:00.863921 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:24:00.866144 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:24:00.866180 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:24:00.866202 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:24:00.866223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:24:00.866244 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:24:00.866267 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:24:00.866302 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:24:00.866323 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:24:00.866345 systemd[1]: Stopped verity-setup.service.
Apr 30 03:24:00.866367 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:24:00.866390 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:24:00.866412 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:24:00.866433 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:24:00.866455 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:24:00.866481 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:24:00.866502 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:24:00.866523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:24:00.866545 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:24:00.866567 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:24:00.866593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:24:00.866619 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:24:00.866641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:24:00.866662 kernel: fuse: init (API version 7.39)
Apr 30 03:24:00.866686 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:24:00.866708 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:24:00.866736 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:24:00.866758 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:24:00.866851 systemd-journald[1101]: Collecting audit messages is disabled.
Apr 30 03:24:00.866895 kernel: loop: module loaded
Apr 30 03:24:00.866915 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:24:00.866957 systemd-journald[1101]: Journal started
Apr 30 03:24:00.867016 systemd-journald[1101]: Runtime Journal (/run/log/journal/f5709ec2d47e4670851046a4e7ea220b) is 4.9M, max 39.3M, 34.4M free.
Apr 30 03:24:00.457189 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:24:00.485583 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 03:24:00.486294 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:24:00.875912 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:24:00.876007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:24:00.876035 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:24:00.878848 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:24:00.879910 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:24:00.882283 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:24:00.884312 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:24:00.885045 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:24:00.899640 kernel: ACPI: bus type drm_connector registered
Apr 30 03:24:00.899999 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:24:00.900289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:24:00.922477 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:24:00.923192 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:24:00.923238 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:24:00.927885 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:24:00.938296 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:24:00.950318 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:24:00.951337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:24:00.962282 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:24:00.965168 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:24:00.965874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:24:00.968268 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:24:00.970099 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:24:00.982013 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:24:00.991349 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:24:00.994542 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:24:00.995631 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:24:01.012217 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:24:01.052961 kernel: loop0: detected capacity change from 0 to 140768
Apr 30 03:24:01.055562 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:24:01.058736 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:24:01.069310 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:24:01.089587 systemd-journald[1101]: Time spent on flushing to /var/log/journal/f5709ec2d47e4670851046a4e7ea220b is 28.157ms for 994 entries.
Apr 30 03:24:01.089587 systemd-journald[1101]: System Journal (/var/log/journal/f5709ec2d47e4670851046a4e7ea220b) is 8.0M, max 195.6M, 187.6M free.
Apr 30 03:24:01.134042 systemd-journald[1101]: Received client request to flush runtime journal.
Apr 30 03:24:01.134109 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:24:01.148254 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:24:01.159275 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:24:01.166255 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:24:01.168910 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:24:01.185336 kernel: loop1: detected capacity change from 0 to 142488
Apr 30 03:24:01.196137 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:24:01.211416 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:24:01.254985 kernel: loop2: detected capacity change from 0 to 205544
Apr 30 03:24:01.285426 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:24:01.300979 kernel: loop3: detected capacity change from 0 to 8
Apr 30 03:24:01.302504 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:24:01.304164 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 03:24:01.361970 kernel: loop4: detected capacity change from 0 to 140768
Apr 30 03:24:01.407060 kernel: loop5: detected capacity change from 0 to 142488
Apr 30 03:24:01.443047 kernel: loop6: detected capacity change from 0 to 205544
Apr 30 03:24:01.451061 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Apr 30 03:24:01.452146 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Apr 30 03:24:01.479002 kernel: loop7: detected capacity change from 0 to 8
Apr 30 03:24:01.476074 (sd-merge)[1171]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Apr 30 03:24:01.482157 (sd-merge)[1171]: Merged extensions into '/usr'.
Apr 30 03:24:01.490274 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:24:01.497152 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:24:01.497477 systemd[1]: Reloading...
Apr 30 03:24:01.790018 zram_generator::config[1202]: No configuration found.
Apr 30 03:24:01.839762 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:24:02.022485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:24:02.112434 systemd[1]: Reloading finished in 613 ms.
Apr 30 03:24:02.145456 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
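The `(sd-merge)` lines show systemd-sysext overlaying the extension images (the `.raw` files linked into `/etc/extensions` by Ignition earlier) onto `/usr`, which is why loop devices appear for each image. For an image to be merged, it must carry an extension-release file whose fields match the host's os-release. A hedged sketch of what such a file inside the kubernetes image could look like (values are illustrative, not taken from the log):

```ini
# /usr/lib/extension-release.d/extension-release.kubernetes
# inside the extension image; ID must match the host os-release,
# or be set to _any to match any distribution
ID=flatcar
SYSEXT_LEVEL=1.0
```

After the merge, systemd reloads its unit files (the "Reloading requested from client PID 1149 ('systemd-sysext')" line) so units shipped inside the extensions become visible to the manager.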
Apr 30 03:24:02.152626 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:24:02.162453 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:24:02.176300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:24:02.207325 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:24:02.207358 systemd[1]: Reloading...
Apr 30 03:24:02.265861 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:24:02.267262 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:24:02.268450 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:24:02.268805 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Apr 30 03:24:02.269012 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Apr 30 03:24:02.277536 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:24:02.279481 systemd-tmpfiles[1243]: Skipping /boot
Apr 30 03:24:02.306373 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:24:02.307732 systemd-tmpfiles[1243]: Skipping /boot
Apr 30 03:24:02.420976 zram_generator::config[1273]: No configuration found.
Apr 30 03:24:02.583298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:24:02.663290 systemd[1]: Reloading finished in 455 ms.
Apr 30 03:24:02.683711 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:24:02.690975 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
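The "Duplicate line for path ..., ignoring" messages from systemd-tmpfiles are benign: two tmpfiles.d fragments declare the same path, and only one entry (the one from the lexicographically earliest fragment) is applied. A hypothetical pair of fragments reproducing the "/root" warning (illustrative only; the actual fragment contents are not in the log):

```ini
# /usr/lib/tmpfiles.d/provision.conf (hypothetical excerpt)
d /root 0700 root root -
```

```ini
# /usr/lib/tmpfiles.d/zz-extra.conf (hypothetical) - repeats the same path;
# systemd-tmpfiles applies the first entry and logs
# 'Duplicate line for path "/root", ignoring.' for this one
d /root 0755 root root -
```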
Apr 30 03:24:02.706351 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:24:02.716954 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:24:02.721265 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:24:02.733363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:24:02.743194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:24:02.747623 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:24:02.771043 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:24:02.776021 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:24:02.776230 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:24:02.787847 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:24:02.793419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:24:02.800046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:24:02.800666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:24:02.800878 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:24:02.804477 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:24:02.814547 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 30 03:24:02.814746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:24:02.816048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:24:02.827336 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:24:02.827755 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:24:02.829750 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:24:02.832661 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:24:02.839713 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Apr 30 03:24:02.848772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:24:02.849462 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:24:02.859262 systemd[1]: Finished ensure-sysext.service. Apr 30 03:24:02.863639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:24:02.863875 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:24:02.871568 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:24:02.871898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:24:02.881249 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 30 03:24:02.882194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:24:02.882265 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:24:02.888190 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 03:24:02.889638 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:24:02.889688 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:24:02.889966 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:24:02.892009 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:24:02.892775 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:24:02.893489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:24:02.916228 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:24:02.917878 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:24:02.961412 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:24:02.964643 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:24:02.966058 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:24:02.974592 augenrules[1368]: No rules Apr 30 03:24:02.976998 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 30 03:24:03.012554 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:24:03.155159 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Apr 30 03:24:03.155744 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:24:03.156046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:24:03.165379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:24:03.175299 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:24:03.186392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:24:03.187136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:24:03.187206 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:24:03.187231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:24:03.187733 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:24:03.188291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:24:03.199983 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1349) Apr 30 03:24:03.203857 systemd-networkd[1365]: lo: Link UP Apr 30 03:24:03.203872 systemd-networkd[1365]: lo: Gained carrier Apr 30 03:24:03.214048 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Apr 30 03:24:03.224211 systemd-networkd[1365]: Enumeration completed Apr 30 03:24:03.224823 systemd-networkd[1365]: eth0: Configuring with /run/systemd/network/10-9a:21:79:01:c9:57.network. Apr 30 03:24:03.225186 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:24:03.234387 systemd-networkd[1365]: eth1: Configuring with /run/systemd/network/10-0a:9b:32:ba:be:10.network. Apr 30 03:24:03.235067 systemd-networkd[1365]: eth0: Link UP Apr 30 03:24:03.235073 systemd-networkd[1365]: eth0: Gained carrier Apr 30 03:24:03.235192 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:24:03.238490 systemd-resolved[1320]: Positive Trust Anchors: Apr 30 03:24:03.238507 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:24:03.238546 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:24:03.245955 kernel: ISO 9660 Extensions: RRIP_1991A Apr 30 03:24:03.244746 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Apr 30 03:24:03.245852 systemd-networkd[1365]: eth1: Link UP Apr 30 03:24:03.245858 systemd-networkd[1365]: eth1: Gained carrier Apr 30 03:24:03.248523 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 03:24:03.249196 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:24:03.259375 systemd-resolved[1320]: Using system hostname 'ci-4081.3.3-c-cb9001cac8'. 
Apr 30 03:24:03.261246 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:24:03.261251 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Apr 30 03:24:03.264704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:24:03.265554 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:24:03.266764 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:24:03.267435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:24:03.271248 systemd[1]: Reached target network.target - Network. Apr 30 03:24:03.272103 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:24:03.272809 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:24:03.272867 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:24:03.349905 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 03:24:03.360032 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Apr 30 03:24:03.362392 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 30 03:24:03.359553 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:24:03.381993 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:24:03.393861 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Apr 30 03:24:03.417980 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 30 03:24:03.464966 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:24:03.474853 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Apr 30 03:24:03.475017 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Apr 30 03:24:03.475961 kernel: Console: switching to colour dummy device 80x25 Apr 30 03:24:03.477263 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 03:24:03.477336 kernel: [drm] features: -context_init Apr 30 03:24:03.479138 kernel: [drm] number of scanouts: 1 Apr 30 03:24:03.479247 kernel: [drm] number of cap sets: 0 Apr 30 03:24:03.480955 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Apr 30 03:24:04.140179 systemd-resolved[1320]: Clock change detected. Flushing caches. Apr 30 03:24:04.140385 systemd-timesyncd[1347]: Contacted time server 66.42.71.197:123 (0.flatcar.pool.ntp.org). Apr 30 03:24:04.140469 systemd-timesyncd[1347]: Initial clock synchronization to Wed 2025-04-30 03:24:04.140095 UTC. Apr 30 03:24:04.145023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:24:04.157240 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 30 03:24:04.159041 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:24:04.186364 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 03:24:04.182622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:24:04.182827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:24:04.195303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:24:04.204118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:24:04.204634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 30 03:24:04.253645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:24:04.307988 kernel: EDAC MC: Ver: 3.0.0 Apr 30 03:24:04.330872 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:24:04.337561 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:24:04.359506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:24:04.362609 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:24:04.400718 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:24:04.402384 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:24:04.402576 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:24:04.402791 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:24:04.403169 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:24:04.404973 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:24:04.405322 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:24:04.405447 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:24:04.405535 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:24:04.405576 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:24:04.405649 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:24:04.407680 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:24:04.410257 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Apr 30 03:24:04.419384 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:24:04.423997 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:24:04.429101 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:24:04.432008 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:24:04.432787 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:24:04.433606 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:24:04.433662 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:24:04.438929 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:24:04.442286 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:24:04.456320 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:24:04.469309 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:24:04.475843 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:24:04.487211 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:24:04.487814 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:24:04.498184 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:24:04.503011 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:24:04.513571 jq[1430]: false Apr 30 03:24:04.517259 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:24:04.531062 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 30 03:24:04.535942 dbus-daemon[1429]: [system] SELinux support is enabled Apr 30 03:24:04.549153 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:24:04.552459 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:24:04.554291 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:24:04.563229 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:24:04.575097 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:24:04.577479 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:24:04.577710 coreos-metadata[1428]: Apr 30 03:24:04.577 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:24:04.593702 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:24:04.598818 coreos-metadata[1428]: Apr 30 03:24:04.597 INFO Fetch successful Apr 30 03:24:04.602473 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:24:04.602690 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:24:04.608729 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:24:04.609053 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 30 03:24:04.627942 extend-filesystems[1431]: Found loop4 Apr 30 03:24:04.627942 extend-filesystems[1431]: Found loop5 Apr 30 03:24:04.627942 extend-filesystems[1431]: Found loop6 Apr 30 03:24:04.627942 extend-filesystems[1431]: Found loop7 Apr 30 03:24:04.627942 extend-filesystems[1431]: Found vda Apr 30 03:24:04.627942 extend-filesystems[1431]: Found vda1 Apr 30 03:24:04.627942 extend-filesystems[1431]: Found vda2 Apr 30 03:24:04.627942 extend-filesystems[1431]: Found vda3 Apr 30 03:24:04.627942 extend-filesystems[1431]: Found usr Apr 30 03:24:04.627942 extend-filesystems[1431]: Found vda4 Apr 30 03:24:04.627942 extend-filesystems[1431]: Found vda6 Apr 30 03:24:04.627942 extend-filesystems[1431]: Found vda7 Apr 30 03:24:04.627942 extend-filesystems[1431]: Found vda9 Apr 30 03:24:04.627942 extend-filesystems[1431]: Checking size of /dev/vda9 Apr 30 03:24:04.741500 jq[1441]: true Apr 30 03:24:04.630628 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:24:04.741961 update_engine[1439]: I20250430 03:24:04.685431 1439 main.cc:92] Flatcar Update Engine starting Apr 30 03:24:04.741961 update_engine[1439]: I20250430 03:24:04.718076 1439 update_check_scheduler.cc:74] Next update check in 4m17s Apr 30 03:24:04.630715 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:24:04.742497 tar[1444]: linux-amd64/helm Apr 30 03:24:04.631856 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:24:04.632017 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). 
Apr 30 03:24:04.752743 jq[1460]: true Apr 30 03:24:04.632046 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:24:04.654035 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:24:04.654366 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:24:04.678718 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:24:04.713861 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:24:04.761995 extend-filesystems[1431]: Resized partition /dev/vda9 Apr 30 03:24:04.730555 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:24:04.770411 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:24:04.760580 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:24:04.767615 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:24:04.799174 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1348) Apr 30 03:24:04.799292 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Apr 30 03:24:04.892105 bash[1490]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:24:04.891685 systemd-logind[1438]: New seat seat0. Apr 30 03:24:04.894687 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:24:04.905424 systemd[1]: Starting sshkeys.service... Apr 30 03:24:04.908806 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 03:24:04.908848 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:24:04.909438 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 30 03:24:05.008304 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Apr 30 03:24:05.009265 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:24:05.023526 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:24:05.080534 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 03:24:05.080534 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 8 Apr 30 03:24:05.080534 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Apr 30 03:24:05.097061 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Apr 30 03:24:05.097061 extend-filesystems[1431]: Found vdb Apr 30 03:24:05.085237 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:24:05.085533 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:24:05.121123 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:24:05.128128 coreos-metadata[1496]: Apr 30 03:24:05.126 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:24:05.151716 coreos-metadata[1496]: Apr 30 03:24:05.151 INFO Fetch successful Apr 30 03:24:05.174057 unknown[1496]: wrote ssh authorized keys file for user: core Apr 30 03:24:05.234952 update-ssh-keys[1508]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:24:05.235920 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:24:05.246256 systemd[1]: Finished sshkeys.service. 
Apr 30 03:24:05.313834 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:24:05.363930 containerd[1457]: time="2025-04-30T03:24:05.362377504Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:24:05.408979 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:24:05.422433 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:24:05.443622 containerd[1457]: time="2025-04-30T03:24:05.443486924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:24:05.449268 containerd[1457]: time="2025-04-30T03:24:05.449192708Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:24:05.449268 containerd[1457]: time="2025-04-30T03:24:05.449252126Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:24:05.449268 containerd[1457]: time="2025-04-30T03:24:05.449282091Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:24:05.449505 containerd[1457]: time="2025-04-30T03:24:05.449489336Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:24:05.449557 containerd[1457]: time="2025-04-30T03:24:05.449512221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:24:05.449593 containerd[1457]: time="2025-04-30T03:24:05.449574027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:24:05.449630 containerd[1457]: time="2025-04-30T03:24:05.449593130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:24:05.450519 containerd[1457]: time="2025-04-30T03:24:05.449854803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:24:05.450519 containerd[1457]: time="2025-04-30T03:24:05.449896272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:24:05.450519 containerd[1457]: time="2025-04-30T03:24:05.449914261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:24:05.450519 containerd[1457]: time="2025-04-30T03:24:05.449924628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:24:05.450519 containerd[1457]: time="2025-04-30T03:24:05.450006449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:24:05.450519 containerd[1457]: time="2025-04-30T03:24:05.450243958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:24:05.450827 containerd[1457]: time="2025-04-30T03:24:05.450779708Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:24:05.450827 containerd[1457]: time="2025-04-30T03:24:05.450821887Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:24:05.452754 containerd[1457]: time="2025-04-30T03:24:05.452706327Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:24:05.452947 containerd[1457]: time="2025-04-30T03:24:05.452798970Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:24:05.456846 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:24:05.457277 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:24:05.457410 containerd[1457]: time="2025-04-30T03:24:05.457369037Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:24:05.457550 containerd[1457]: time="2025-04-30T03:24:05.457455450Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:24:05.457550 containerd[1457]: time="2025-04-30T03:24:05.457491571Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:24:05.457550 containerd[1457]: time="2025-04-30T03:24:05.457538951Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:24:05.457626 containerd[1457]: time="2025-04-30T03:24:05.457566850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:24:05.458003 containerd[1457]: time="2025-04-30T03:24:05.457756827Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1
Apr 30 03:24:05.458252 containerd[1457]: time="2025-04-30T03:24:05.458229482Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 03:24:05.458474 containerd[1457]: time="2025-04-30T03:24:05.458407421Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 03:24:05.458474 containerd[1457]: time="2025-04-30T03:24:05.458433171Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 03:24:05.458474 containerd[1457]: time="2025-04-30T03:24:05.458448956Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 03:24:05.458474 containerd[1457]: time="2025-04-30T03:24:05.458465591Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 03:24:05.458569 containerd[1457]: time="2025-04-30T03:24:05.458480716Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 03:24:05.458569 containerd[1457]: time="2025-04-30T03:24:05.458495692Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 03:24:05.458569 containerd[1457]: time="2025-04-30T03:24:05.458511150Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 03:24:05.458569 containerd[1457]: time="2025-04-30T03:24:05.458527870Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 03:24:05.458569 containerd[1457]: time="2025-04-30T03:24:05.458544488Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 03:24:05.458569 containerd[1457]: time="2025-04-30T03:24:05.458556967Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 03:24:05.458569 containerd[1457]: time="2025-04-30T03:24:05.458570590Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458595573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458612749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458626464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458641641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458655706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458685340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458709752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458727147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458741286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458760559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458775219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458789117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458802727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.458828 containerd[1457]: time="2025-04-30T03:24:05.458819842Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 03:24:05.459714 containerd[1457]: time="2025-04-30T03:24:05.458845710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.459714 containerd[1457]: time="2025-04-30T03:24:05.458861712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.465333 containerd[1457]: time="2025-04-30T03:24:05.458877560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 03:24:05.466421 containerd[1457]: time="2025-04-30T03:24:05.466081325Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 03:24:05.466421 containerd[1457]: time="2025-04-30T03:24:05.466148971Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 03:24:05.466421 containerd[1457]: time="2025-04-30T03:24:05.466163076Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 03:24:05.466421 containerd[1457]: time="2025-04-30T03:24:05.466176529Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 03:24:05.466421 containerd[1457]: time="2025-04-30T03:24:05.466186688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.466421 containerd[1457]: time="2025-04-30T03:24:05.466202007Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 03:24:05.466421 containerd[1457]: time="2025-04-30T03:24:05.466217718Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 03:24:05.466421 containerd[1457]: time="2025-04-30T03:24:05.466235292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 03:24:05.466705 containerd[1457]: time="2025-04-30T03:24:05.466607884Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 03:24:05.466705 containerd[1457]: time="2025-04-30T03:24:05.466676291Z" level=info msg="Connect containerd service"
Apr 30 03:24:05.467483 containerd[1457]: time="2025-04-30T03:24:05.466730278Z" level=info msg="using legacy CRI server"
Apr 30 03:24:05.467483 containerd[1457]: time="2025-04-30T03:24:05.466739061Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 03:24:05.467483 containerd[1457]: time="2025-04-30T03:24:05.466865417Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 03:24:05.472637 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 03:24:05.483296 containerd[1457]: time="2025-04-30T03:24:05.482752438Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 03:24:05.483651 containerd[1457]: time="2025-04-30T03:24:05.483588288Z" level=info msg="Start subscribing containerd event"
Apr 30 03:24:05.483878 containerd[1457]: time="2025-04-30T03:24:05.483854773Z" level=info msg="Start recovering state"
Apr 30 03:24:05.484139 containerd[1457]: time="2025-04-30T03:24:05.484121577Z" level=info msg="Start event monitor"
Apr 30 03:24:05.484400 containerd[1457]: time="2025-04-30T03:24:05.484385763Z" level=info msg="Start snapshots syncer"
Apr 30 03:24:05.484672 containerd[1457]: time="2025-04-30T03:24:05.484656583Z" level=info msg="Start cni network conf syncer for default"
Apr 30 03:24:05.484725 containerd[1457]: time="2025-04-30T03:24:05.484716586Z" level=info msg="Start streaming server"
Apr 30 03:24:05.484825 containerd[1457]: time="2025-04-30T03:24:05.484119759Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 03:24:05.484968 containerd[1457]: time="2025-04-30T03:24:05.484944342Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 03:24:05.485180 containerd[1457]: time="2025-04-30T03:24:05.485052036Z" level=info msg="containerd successfully booted in 0.124173s"
Apr 30 03:24:05.489123 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 03:24:05.519806 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 03:24:05.534529 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 03:24:05.543468 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 30 03:24:05.549029 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 03:24:05.699744 tar[1444]: linux-amd64/LICENSE
Apr 30 03:24:05.699744 tar[1444]: linux-amd64/README.md
Apr 30 03:24:05.715875 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 03:24:05.785166 systemd-networkd[1365]: eth1: Gained IPv6LL
Apr 30 03:24:05.788131 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 03:24:05.791324 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 03:24:05.806270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:24:05.810198 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 03:24:05.850544 systemd-networkd[1365]: eth0: Gained IPv6LL
Apr 30 03:24:05.853722 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 03:24:06.870376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:24:06.873682 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 03:24:06.880191 (kubelet)[1551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:24:06.880606 systemd[1]: Startup finished in 1.014s (kernel) + 5.996s (initrd) + 6.565s (userspace) = 13.576s.
Apr 30 03:24:07.590733 kubelet[1551]: E0430 03:24:07.590541 1551 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:24:07.594174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:24:07.594379 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:24:07.595002 systemd[1]: kubelet.service: Consumed 1.303s CPU time.
Apr 30 03:24:08.020660 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 03:24:08.031600 systemd[1]: Started sshd@0-209.38.154.103:22-139.178.89.65:43794.service - OpenSSH per-connection server daemon (139.178.89.65:43794).
Apr 30 03:24:08.117010 sshd[1563]: Accepted publickey for core from 139.178.89.65 port 43794 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:08.118502 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:08.130569 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 03:24:08.135348 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 03:24:08.141224 systemd-logind[1438]: New session 1 of user core.
Apr 30 03:24:08.171129 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 03:24:08.186506 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 03:24:08.192757 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 03:24:08.326225 systemd[1567]: Queued start job for default target default.target.
Apr 30 03:24:08.335624 systemd[1567]: Created slice app.slice - User Application Slice.
Apr 30 03:24:08.335683 systemd[1567]: Reached target paths.target - Paths.
Apr 30 03:24:08.335706 systemd[1567]: Reached target timers.target - Timers.
Apr 30 03:24:08.338374 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 03:24:08.362007 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 03:24:08.362245 systemd[1567]: Reached target sockets.target - Sockets.
Apr 30 03:24:08.362274 systemd[1567]: Reached target basic.target - Basic System.
Apr 30 03:24:08.362365 systemd[1567]: Reached target default.target - Main User Target.
Apr 30 03:24:08.362429 systemd[1567]: Startup finished in 157ms.
Apr 30 03:24:08.362811 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 03:24:08.371335 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 03:24:08.454961 systemd[1]: Started sshd@1-209.38.154.103:22-139.178.89.65:43802.service - OpenSSH per-connection server daemon (139.178.89.65:43802).
Apr 30 03:24:08.512272 sshd[1578]: Accepted publickey for core from 139.178.89.65 port 43802 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:08.514784 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:08.522601 systemd-logind[1438]: New session 2 of user core.
Apr 30 03:24:08.528230 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 03:24:08.594717 sshd[1578]: pam_unix(sshd:session): session closed for user core
Apr 30 03:24:08.599866 systemd[1]: sshd@1-209.38.154.103:22-139.178.89.65:43802.service: Deactivated successfully.
Apr 30 03:24:08.602226 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 03:24:08.603059 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit.
Apr 30 03:24:08.604701 systemd-logind[1438]: Removed session 2.
Apr 30 03:24:08.633400 systemd[1]: Started sshd@2-209.38.154.103:22-139.178.89.65:43804.service - OpenSSH per-connection server daemon (139.178.89.65:43804).
Apr 30 03:24:08.692111 sshd[1585]: Accepted publickey for core from 139.178.89.65 port 43804 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:08.694730 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:08.704934 systemd-logind[1438]: New session 3 of user core.
Apr 30 03:24:08.716263 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 03:24:08.779644 sshd[1585]: pam_unix(sshd:session): session closed for user core
Apr 30 03:24:08.793293 systemd[1]: sshd@2-209.38.154.103:22-139.178.89.65:43804.service: Deactivated successfully.
Apr 30 03:24:08.795398 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 03:24:08.797867 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit.
Apr 30 03:24:08.804458 systemd[1]: Started sshd@3-209.38.154.103:22-139.178.89.65:43818.service - OpenSSH per-connection server daemon (139.178.89.65:43818).
Apr 30 03:24:08.807616 systemd-logind[1438]: Removed session 3.
Apr 30 03:24:08.855010 sshd[1592]: Accepted publickey for core from 139.178.89.65 port 43818 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:08.857533 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:08.868262 systemd-logind[1438]: New session 4 of user core.
Apr 30 03:24:08.876259 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 03:24:08.943387 sshd[1592]: pam_unix(sshd:session): session closed for user core
Apr 30 03:24:08.955302 systemd[1]: sshd@3-209.38.154.103:22-139.178.89.65:43818.service: Deactivated successfully.
Apr 30 03:24:08.959108 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 03:24:08.961981 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit.
Apr 30 03:24:08.969565 systemd[1]: Started sshd@4-209.38.154.103:22-139.178.89.65:43830.service - OpenSSH per-connection server daemon (139.178.89.65:43830).
Apr 30 03:24:08.972610 systemd-logind[1438]: Removed session 4.
Apr 30 03:24:09.033057 sshd[1599]: Accepted publickey for core from 139.178.89.65 port 43830 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:09.035496 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:09.041255 systemd-logind[1438]: New session 5 of user core.
Apr 30 03:24:09.051221 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 03:24:09.125314 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 03:24:09.126980 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:24:09.146198 sudo[1602]: pam_unix(sudo:session): session closed for user root
Apr 30 03:24:09.152410 sshd[1599]: pam_unix(sshd:session): session closed for user core
Apr 30 03:24:09.167364 systemd[1]: sshd@4-209.38.154.103:22-139.178.89.65:43830.service: Deactivated successfully.
Apr 30 03:24:09.170388 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 03:24:09.171865 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit.
Apr 30 03:24:09.180360 systemd[1]: Started sshd@5-209.38.154.103:22-139.178.89.65:43846.service - OpenSSH per-connection server daemon (139.178.89.65:43846).
Apr 30 03:24:09.183115 systemd-logind[1438]: Removed session 5.
Apr 30 03:24:09.242416 sshd[1607]: Accepted publickey for core from 139.178.89.65 port 43846 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:09.244703 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:09.254989 systemd-logind[1438]: New session 6 of user core.
Apr 30 03:24:09.261275 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 03:24:09.325963 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 03:24:09.326361 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:24:09.332139 sudo[1611]: pam_unix(sudo:session): session closed for user root
Apr 30 03:24:09.343004 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 03:24:09.343560 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:24:09.362463 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 03:24:09.367997 auditctl[1614]: No rules
Apr 30 03:24:09.368591 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 03:24:09.368825 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 03:24:09.384567 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:24:09.425213 augenrules[1632]: No rules
Apr 30 03:24:09.427632 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:24:09.429510 sudo[1610]: pam_unix(sudo:session): session closed for user root
Apr 30 03:24:09.434050 sshd[1607]: pam_unix(sshd:session): session closed for user core
Apr 30 03:24:09.452016 systemd[1]: sshd@5-209.38.154.103:22-139.178.89.65:43846.service: Deactivated successfully.
Apr 30 03:24:09.455108 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 03:24:09.456271 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit.
Apr 30 03:24:09.466497 systemd[1]: Started sshd@6-209.38.154.103:22-139.178.89.65:43854.service - OpenSSH per-connection server daemon (139.178.89.65:43854).
Apr 30 03:24:09.467406 systemd-logind[1438]: Removed session 6.
Apr 30 03:24:09.515953 sshd[1640]: Accepted publickey for core from 139.178.89.65 port 43854 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:24:09.518075 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:24:09.526987 systemd-logind[1438]: New session 7 of user core.
Apr 30 03:24:09.529244 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 03:24:09.594007 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 03:24:09.594379 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:24:10.104428 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 03:24:10.122812 (dockerd)[1659]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 03:24:10.666840 dockerd[1659]: time="2025-04-30T03:24:10.666469477Z" level=info msg="Starting up"
Apr 30 03:24:10.802366 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1217774340-merged.mount: Deactivated successfully.
Apr 30 03:24:10.881349 dockerd[1659]: time="2025-04-30T03:24:10.881251077Z" level=info msg="Loading containers: start."
Apr 30 03:24:11.051031 kernel: Initializing XFRM netlink socket
Apr 30 03:24:11.165278 systemd-networkd[1365]: docker0: Link UP
Apr 30 03:24:11.187918 dockerd[1659]: time="2025-04-30T03:24:11.187782813Z" level=info msg="Loading containers: done."
Apr 30 03:24:11.210707 dockerd[1659]: time="2025-04-30T03:24:11.208855137Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 03:24:11.210707 dockerd[1659]: time="2025-04-30T03:24:11.209058191Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 03:24:11.210707 dockerd[1659]: time="2025-04-30T03:24:11.209227947Z" level=info msg="Daemon has completed initialization"
Apr 30 03:24:11.255305 dockerd[1659]: time="2025-04-30T03:24:11.255081035Z" level=info msg="API listen on /run/docker.sock"
Apr 30 03:24:11.255800 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 03:24:12.172849 containerd[1457]: time="2025-04-30T03:24:12.172775528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
Apr 30 03:24:12.693632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196693834.mount: Deactivated successfully.
Apr 30 03:24:13.973924 containerd[1457]: time="2025-04-30T03:24:13.972059367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:13.973924 containerd[1457]: time="2025-04-30T03:24:13.973503306Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
Apr 30 03:24:13.974672 containerd[1457]: time="2025-04-30T03:24:13.974623896Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:13.978823 containerd[1457]: time="2025-04-30T03:24:13.978726167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:13.980799 containerd[1457]: time="2025-04-30T03:24:13.980730213Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.807888748s"
Apr 30 03:24:13.981062 containerd[1457]: time="2025-04-30T03:24:13.981038060Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
Apr 30 03:24:13.984095 containerd[1457]: time="2025-04-30T03:24:13.984038724Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
Apr 30 03:24:15.498314 containerd[1457]: time="2025-04-30T03:24:15.497096965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:15.499643 containerd[1457]: time="2025-04-30T03:24:15.499591336Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
Apr 30 03:24:15.501351 containerd[1457]: time="2025-04-30T03:24:15.501282990Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:15.504605 containerd[1457]: time="2025-04-30T03:24:15.504551046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:15.505623 containerd[1457]: time="2025-04-30T03:24:15.505579881Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.521266356s"
Apr 30 03:24:15.505705 containerd[1457]: time="2025-04-30T03:24:15.505632937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
Apr 30 03:24:15.506707 containerd[1457]: time="2025-04-30T03:24:15.506668727Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
Apr 30 03:24:16.748971 containerd[1457]: time="2025-04-30T03:24:16.748167214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:16.751837 containerd[1457]: time="2025-04-30T03:24:16.751654972Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
Apr 30 03:24:16.755922 containerd[1457]: time="2025-04-30T03:24:16.754312498Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:16.760149 containerd[1457]: time="2025-04-30T03:24:16.760095182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:16.761282 containerd[1457]: time="2025-04-30T03:24:16.761219975Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.254510767s"
Apr 30 03:24:16.761282 containerd[1457]: time="2025-04-30T03:24:16.761283696Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
Apr 30 03:24:16.762444 containerd[1457]: time="2025-04-30T03:24:16.762405000Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
Apr 30 03:24:17.829298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408040936.mount: Deactivated successfully.
Apr 30 03:24:17.831562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:24:17.839328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:24:18.017365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:24:18.026397 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:24:18.095056 kubelet[1881]: E0430 03:24:18.094901 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:24:18.099975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:24:18.100128 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:24:18.428701 containerd[1457]: time="2025-04-30T03:24:18.428231826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:18.429868 containerd[1457]: time="2025-04-30T03:24:18.429793671Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
Apr 30 03:24:18.430841 containerd[1457]: time="2025-04-30T03:24:18.430757087Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:18.433045 containerd[1457]: time="2025-04-30T03:24:18.432946157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:18.434041 containerd[1457]: time="2025-04-30T03:24:18.433642832Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.671201225s"
Apr 30 03:24:18.434041 containerd[1457]: time="2025-04-30T03:24:18.433688391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
Apr 30 03:24:18.434987 containerd[1457]: time="2025-04-30T03:24:18.434946727Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 03:24:18.438978 systemd-resolved[1320]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Apr 30 03:24:18.933000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059795224.mount: Deactivated successfully.
Apr 30 03:24:19.816041 containerd[1457]: time="2025-04-30T03:24:19.815080389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:19.816982 containerd[1457]: time="2025-04-30T03:24:19.816904869Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Apr 30 03:24:19.817597 containerd[1457]: time="2025-04-30T03:24:19.817556432Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:19.821727 containerd[1457]: time="2025-04-30T03:24:19.821651454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:19.823476 containerd[1457]: time="2025-04-30T03:24:19.823417490Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.38827131s"
Apr 30 03:24:19.823816 containerd[1457]: time="2025-04-30T03:24:19.823680160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 03:24:19.825245 containerd[1457]: time="2025-04-30T03:24:19.824986137Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 30 03:24:20.271756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3299365384.mount: Deactivated successfully.
Apr 30 03:24:20.278085 containerd[1457]: time="2025-04-30T03:24:20.278010112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:20.279024 containerd[1457]: time="2025-04-30T03:24:20.278918502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 30 03:24:20.279910 containerd[1457]: time="2025-04-30T03:24:20.279661741Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:20.282177 containerd[1457]: time="2025-04-30T03:24:20.282111808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:20.283282 containerd[1457]: time="2025-04-30T03:24:20.283042207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 457.980616ms"
Apr 30 03:24:20.283282 containerd[1457]: time="2025-04-30T03:24:20.283091753Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 30 03:24:20.283981 containerd[1457]: time="2025-04-30T03:24:20.283954619Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Apr 30 03:24:20.826628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3625522710.mount: Deactivated successfully.
Apr 30 03:24:21.529123 systemd-resolved[1320]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Apr 30 03:24:22.719923 containerd[1457]: time="2025-04-30T03:24:22.719818429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:22.720617 containerd[1457]: time="2025-04-30T03:24:22.720551527Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Apr 30 03:24:22.723803 containerd[1457]: time="2025-04-30T03:24:22.723120658Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:22.726214 containerd[1457]: time="2025-04-30T03:24:22.726158925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:22.727558 containerd[1457]: time="2025-04-30T03:24:22.727498559Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.443499136s"
Apr 30 03:24:22.727558 containerd[1457]: time="2025-04-30T03:24:22.727557903Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Apr 30 03:24:25.286282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:24:25.300429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:24:25.345638 systemd[1]: Reloading requested from client PID 2016 ('systemctl') (unit session-7.scope)...
Apr 30 03:24:25.345983 systemd[1]: Reloading...
Apr 30 03:24:25.501931 zram_generator::config[2055]: No configuration found.
Apr 30 03:24:25.662174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:24:25.758600 systemd[1]: Reloading finished in 411 ms.
Apr 30 03:24:25.815341 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 03:24:25.815636 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 03:24:25.816040 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:24:25.823334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:24:25.968963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:24:25.982596 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:24:26.041309 kubelet[2108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:24:26.041309 kubelet[2108]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:24:26.041309 kubelet[2108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:24:26.043169 kubelet[2108]: I0430 03:24:26.043012 2108 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:24:26.274724 kubelet[2108]: I0430 03:24:26.270138 2108 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 03:24:26.274724 kubelet[2108]: I0430 03:24:26.270670 2108 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:24:26.274724 kubelet[2108]: I0430 03:24:26.271645 2108 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 03:24:26.301672 kubelet[2108]: I0430 03:24:26.301622 2108 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:24:26.304869 kubelet[2108]: E0430 03:24:26.304135 2108 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://209.38.154.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 209.38.154.103:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:26.313631 kubelet[2108]: E0430 03:24:26.313554 2108 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for 
service runtime.v1.RuntimeService" Apr 30 03:24:26.313631 kubelet[2108]: I0430 03:24:26.313620 2108 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:24:26.319255 kubelet[2108]: I0430 03:24:26.319204 2108 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:24:26.320834 kubelet[2108]: I0430 03:24:26.320754 2108 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 03:24:26.321160 kubelet[2108]: I0430 03:24:26.321096 2108 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:24:26.321421 kubelet[2108]: I0430 03:24:26.321161 2108 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-c-cb9001cac8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Opera
tor":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:24:26.321546 kubelet[2108]: I0430 03:24:26.321431 2108 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:24:26.321546 kubelet[2108]: I0430 03:24:26.321443 2108 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 03:24:26.321597 kubelet[2108]: I0430 03:24:26.321567 2108 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:24:26.324618 kubelet[2108]: I0430 03:24:26.324277 2108 kubelet.go:408] "Attempting to sync node with API server" Apr 30 03:24:26.324618 kubelet[2108]: I0430 03:24:26.324341 2108 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:24:26.324618 kubelet[2108]: I0430 03:24:26.324389 2108 kubelet.go:314] "Adding apiserver pod source" Apr 30 03:24:26.324618 kubelet[2108]: I0430 03:24:26.324414 2108 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:24:26.330268 kubelet[2108]: W0430 03:24:26.329864 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.154.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-c-cb9001cac8&limit=500&resourceVersion=0": dial tcp 209.38.154.103:6443: connect: connection refused Apr 30 03:24:26.330268 kubelet[2108]: E0430 03:24:26.329965 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://209.38.154.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-c-cb9001cac8&limit=500&resourceVersion=0\": dial tcp 209.38.154.103:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:26.330705 kubelet[2108]: W0430 03:24:26.330635 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.154.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 209.38.154.103:6443: connect: connection refused Apr 30 03:24:26.330934 kubelet[2108]: E0430 03:24:26.330708 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://209.38.154.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 209.38.154.103:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:26.330934 kubelet[2108]: I0430 03:24:26.330861 2108 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:24:26.335224 kubelet[2108]: I0430 03:24:26.334738 2108 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:24:26.335224 kubelet[2108]: W0430 03:24:26.334875 2108 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 03:24:26.337922 kubelet[2108]: I0430 03:24:26.336837 2108 server.go:1269] "Started kubelet" Apr 30 03:24:26.339145 kubelet[2108]: I0430 03:24:26.339109 2108 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:24:26.344558 kubelet[2108]: I0430 03:24:26.344428 2108 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:24:26.345197 kubelet[2108]: I0430 03:24:26.345166 2108 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:24:26.350492 kubelet[2108]: E0430 03:24:26.347804 2108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.154.103:6443/api/v1/namespaces/default/events\": dial tcp 209.38.154.103:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-c-cb9001cac8.183afab646b5cc8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-c-cb9001cac8,UID:ci-4081.3.3-c-cb9001cac8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-c-cb9001cac8,},FirstTimestamp:2025-04-30 03:24:26.336799885 +0000 UTC m=+0.342267254,LastTimestamp:2025-04-30 03:24:26.336799885 +0000 UTC m=+0.342267254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-c-cb9001cac8,}" Apr 30 03:24:26.354513 kubelet[2108]: I0430 03:24:26.354353 2108 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:24:26.358010 kubelet[2108]: I0430 03:24:26.356260 2108 server.go:460] "Adding debug handlers to kubelet server" Apr 30 03:24:26.361847 kubelet[2108]: I0430 03:24:26.360840 2108 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:24:26.363595 kubelet[2108]: I0430 03:24:26.363559 2108 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 03:24:26.363920 kubelet[2108]: E0430 03:24:26.363898 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:26.365302 kubelet[2108]: E0430 03:24:26.365245 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.154.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-c-cb9001cac8?timeout=10s\": dial tcp 209.38.154.103:6443: connect: connection refused" interval="200ms" Apr 30 03:24:26.367017 kubelet[2108]: I0430 03:24:26.366983 2108 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:24:26.367278 kubelet[2108]: I0430 03:24:26.367253 2108 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:24:26.368728 kubelet[2108]: I0430 03:24:26.368707 2108 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:24:26.368936 kubelet[2108]: I0430 03:24:26.368925 2108 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 03:24:26.369440 kubelet[2108]: W0430 03:24:26.369393 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.154.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.154.103:6443: connect: connection refused Apr 30 03:24:26.369554 kubelet[2108]: E0430 03:24:26.369537 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://209.38.154.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 209.38.154.103:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:26.369816 kubelet[2108]: E0430 03:24:26.369796 2108 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:24:26.370133 kubelet[2108]: I0430 03:24:26.370113 2108 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:24:26.381412 kubelet[2108]: I0430 03:24:26.381357 2108 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:24:26.383089 kubelet[2108]: I0430 03:24:26.383055 2108 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:24:26.383610 kubelet[2108]: I0430 03:24:26.383245 2108 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:24:26.383610 kubelet[2108]: I0430 03:24:26.383276 2108 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 03:24:26.383610 kubelet[2108]: E0430 03:24:26.383338 2108 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:24:26.395369 kubelet[2108]: W0430 03:24:26.395186 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.154.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.154.103:6443: connect: connection refused Apr 30 03:24:26.395706 kubelet[2108]: E0430 03:24:26.395677 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://209.38.154.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 209.38.154.103:6443: connect: connection refused" 
logger="UnhandledError" Apr 30 03:24:26.400779 kubelet[2108]: I0430 03:24:26.400724 2108 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:24:26.400779 kubelet[2108]: I0430 03:24:26.400779 2108 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:24:26.401046 kubelet[2108]: I0430 03:24:26.400810 2108 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:24:26.402926 kubelet[2108]: I0430 03:24:26.402880 2108 policy_none.go:49] "None policy: Start" Apr 30 03:24:26.403996 kubelet[2108]: I0430 03:24:26.403766 2108 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:24:26.403996 kubelet[2108]: I0430 03:24:26.403800 2108 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:24:26.411710 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:24:26.424491 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:24:26.430390 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 03:24:26.443773 kubelet[2108]: I0430 03:24:26.442872 2108 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:24:26.443773 kubelet[2108]: I0430 03:24:26.443166 2108 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:24:26.443773 kubelet[2108]: I0430 03:24:26.443185 2108 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:24:26.444034 kubelet[2108]: I0430 03:24:26.443816 2108 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:24:26.448755 kubelet[2108]: E0430 03:24:26.448716 2108 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:26.499429 systemd[1]: Created slice kubepods-burstable-pod75b1d264b154291a71999d78dbb75796.slice - libcontainer container kubepods-burstable-pod75b1d264b154291a71999d78dbb75796.slice. Apr 30 03:24:26.514025 systemd[1]: Created slice kubepods-burstable-podb68acc47653b58fdaf7964f36f9059b1.slice - libcontainer container kubepods-burstable-podb68acc47653b58fdaf7964f36f9059b1.slice. Apr 30 03:24:26.528947 systemd[1]: Created slice kubepods-burstable-pod29b2eaa1179512a69e978db13ed01a32.slice - libcontainer container kubepods-burstable-pod29b2eaa1179512a69e978db13ed01a32.slice. 
Apr 30 03:24:26.545440 kubelet[2108]: I0430 03:24:26.545389 2108 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.546153 kubelet[2108]: E0430 03:24:26.546099 2108 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://209.38.154.103:6443/api/v1/nodes\": dial tcp 209.38.154.103:6443: connect: connection refused" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.566813 kubelet[2108]: E0430 03:24:26.566703 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.154.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-c-cb9001cac8?timeout=10s\": dial tcp 209.38.154.103:6443: connect: connection refused" interval="400ms" Apr 30 03:24:26.671081 kubelet[2108]: I0430 03:24:26.671000 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75b1d264b154291a71999d78dbb75796-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-c-cb9001cac8\" (UID: \"75b1d264b154291a71999d78dbb75796\") " pod="kube-system/kube-apiserver-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.671081 kubelet[2108]: I0430 03:24:26.671051 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75b1d264b154291a71999d78dbb75796-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-c-cb9001cac8\" (UID: \"75b1d264b154291a71999d78dbb75796\") " pod="kube-system/kube-apiserver-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.671081 kubelet[2108]: I0430 03:24:26.671072 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b68acc47653b58fdaf7964f36f9059b1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-c-cb9001cac8\" (UID: \"b68acc47653b58fdaf7964f36f9059b1\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.671081 kubelet[2108]: I0430 03:24:26.671089 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b68acc47653b58fdaf7964f36f9059b1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-c-cb9001cac8\" (UID: \"b68acc47653b58fdaf7964f36f9059b1\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.671457 kubelet[2108]: I0430 03:24:26.671118 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b68acc47653b58fdaf7964f36f9059b1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-c-cb9001cac8\" (UID: \"b68acc47653b58fdaf7964f36f9059b1\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.671457 kubelet[2108]: I0430 03:24:26.671136 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75b1d264b154291a71999d78dbb75796-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-c-cb9001cac8\" (UID: \"75b1d264b154291a71999d78dbb75796\") " pod="kube-system/kube-apiserver-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.671457 kubelet[2108]: I0430 03:24:26.671160 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b68acc47653b58fdaf7964f36f9059b1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-c-cb9001cac8\" (UID: \"b68acc47653b58fdaf7964f36f9059b1\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.671457 kubelet[2108]: I0430 03:24:26.671185 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/b68acc47653b58fdaf7964f36f9059b1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-c-cb9001cac8\" (UID: \"b68acc47653b58fdaf7964f36f9059b1\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.671457 kubelet[2108]: I0430 03:24:26.671209 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29b2eaa1179512a69e978db13ed01a32-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-c-cb9001cac8\" (UID: \"29b2eaa1179512a69e978db13ed01a32\") " pod="kube-system/kube-scheduler-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.747747 kubelet[2108]: I0430 03:24:26.747681 2108 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.748256 kubelet[2108]: E0430 03:24:26.748212 2108 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://209.38.154.103:6443/api/v1/nodes\": dial tcp 209.38.154.103:6443: connect: connection refused" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:26.810173 kubelet[2108]: E0430 03:24:26.810004 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:26.810976 containerd[1457]: time="2025-04-30T03:24:26.810759676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-c-cb9001cac8,Uid:75b1d264b154291a71999d78dbb75796,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:26.812978 systemd-resolved[1320]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Apr 30 03:24:26.825013 kubelet[2108]: E0430 03:24:26.824547 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:26.831457 containerd[1457]: time="2025-04-30T03:24:26.831386780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-c-cb9001cac8,Uid:b68acc47653b58fdaf7964f36f9059b1,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:26.833706 kubelet[2108]: E0430 03:24:26.832952 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:26.833923 containerd[1457]: time="2025-04-30T03:24:26.833558912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-c-cb9001cac8,Uid:29b2eaa1179512a69e978db13ed01a32,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:26.968193 kubelet[2108]: E0430 03:24:26.968123 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.154.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-c-cb9001cac8?timeout=10s\": dial tcp 209.38.154.103:6443: connect: connection refused" interval="800ms" Apr 30 03:24:27.151794 kubelet[2108]: I0430 03:24:27.151130 2108 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:27.151794 kubelet[2108]: E0430 03:24:27.151687 2108 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://209.38.154.103:6443/api/v1/nodes\": dial tcp 209.38.154.103:6443: connect: connection refused" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:27.241632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962928841.mount: Deactivated successfully. 
Apr 30 03:24:27.247354 containerd[1457]: time="2025-04-30T03:24:27.247268496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:24:27.248698 containerd[1457]: time="2025-04-30T03:24:27.248602526Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 03:24:27.251567 containerd[1457]: time="2025-04-30T03:24:27.249528334Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:24:27.251567 containerd[1457]: time="2025-04-30T03:24:27.250711528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:24:27.251861 containerd[1457]: time="2025-04-30T03:24:27.251821175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:24:27.251991 containerd[1457]: time="2025-04-30T03:24:27.251969142Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:24:27.256295 containerd[1457]: time="2025-04-30T03:24:27.256228266Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:24:27.257140 containerd[1457]: time="2025-04-30T03:24:27.257096478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 423.452136ms" Apr 30 03:24:27.258630 containerd[1457]: time="2025-04-30T03:24:27.258471532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 426.987419ms" Apr 30 03:24:27.262477 containerd[1457]: time="2025-04-30T03:24:27.262361689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:24:27.271010 containerd[1457]: time="2025-04-30T03:24:27.270787994Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 459.929145ms" Apr 30 03:24:27.387949 kubelet[2108]: W0430 03:24:27.387899 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.154.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.154.103:6443: connect: connection refused Apr 30 03:24:27.388230 kubelet[2108]: E0430 03:24:27.388209 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://209.38.154.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 209.38.154.103:6443: connect: connection refused" 
logger="UnhandledError" Apr 30 03:24:27.450562 containerd[1457]: time="2025-04-30T03:24:27.450073567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:27.450562 containerd[1457]: time="2025-04-30T03:24:27.450163582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:27.450562 containerd[1457]: time="2025-04-30T03:24:27.450188782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:27.450562 containerd[1457]: time="2025-04-30T03:24:27.450308161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:27.452017 containerd[1457]: time="2025-04-30T03:24:27.451668789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:27.452017 containerd[1457]: time="2025-04-30T03:24:27.451729850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:27.452017 containerd[1457]: time="2025-04-30T03:24:27.451764516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:27.452017 containerd[1457]: time="2025-04-30T03:24:27.451861930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:27.457989 containerd[1457]: time="2025-04-30T03:24:27.456946671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:27.457989 containerd[1457]: time="2025-04-30T03:24:27.457035479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:27.457989 containerd[1457]: time="2025-04-30T03:24:27.457051488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:27.459182 containerd[1457]: time="2025-04-30T03:24:27.458949878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:27.487209 systemd[1]: Started cri-containerd-0f7734c076fb38ac7e6e9592462e53698d474740b938005d25b02eff202b46a1.scope - libcontainer container 0f7734c076fb38ac7e6e9592462e53698d474740b938005d25b02eff202b46a1. Apr 30 03:24:27.488675 kubelet[2108]: W0430 03:24:27.488335 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.154.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.154.103:6443: connect: connection refused Apr 30 03:24:27.488675 kubelet[2108]: E0430 03:24:27.488390 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://209.38.154.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 209.38.154.103:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:27.512238 systemd[1]: Started cri-containerd-9163602c679eac3d317a7c3e60e4936297fc5be4e14559f93768c8f00edbd86f.scope - libcontainer container 9163602c679eac3d317a7c3e60e4936297fc5be4e14559f93768c8f00edbd86f. 
Apr 30 03:24:27.515591 systemd[1]: Started cri-containerd-aeb700b527f0c849a7d595b90aad0bc9c48ffe7a53cbb6b45f6612df10908104.scope - libcontainer container aeb700b527f0c849a7d595b90aad0bc9c48ffe7a53cbb6b45f6612df10908104. Apr 30 03:24:27.572534 kubelet[2108]: W0430 03:24:27.572422 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.154.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 209.38.154.103:6443: connect: connection refused Apr 30 03:24:27.572534 kubelet[2108]: E0430 03:24:27.572526 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://209.38.154.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 209.38.154.103:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:27.590366 containerd[1457]: time="2025-04-30T03:24:27.590325649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-c-cb9001cac8,Uid:b68acc47653b58fdaf7964f36f9059b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f7734c076fb38ac7e6e9592462e53698d474740b938005d25b02eff202b46a1\"" Apr 30 03:24:27.595861 kubelet[2108]: E0430 03:24:27.595817 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:27.603828 containerd[1457]: time="2025-04-30T03:24:27.603783671Z" level=info msg="CreateContainer within sandbox \"0f7734c076fb38ac7e6e9592462e53698d474740b938005d25b02eff202b46a1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:24:27.632419 containerd[1457]: time="2025-04-30T03:24:27.632187134Z" level=info msg="CreateContainer within sandbox 
\"0f7734c076fb38ac7e6e9592462e53698d474740b938005d25b02eff202b46a1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7943fb3a00db37cff1b09b901b49d01c1cb43b270ba4ee577fb7b985cac75c12\"" Apr 30 03:24:27.632774 containerd[1457]: time="2025-04-30T03:24:27.632749187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-c-cb9001cac8,Uid:75b1d264b154291a71999d78dbb75796,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeb700b527f0c849a7d595b90aad0bc9c48ffe7a53cbb6b45f6612df10908104\"" Apr 30 03:24:27.633909 containerd[1457]: time="2025-04-30T03:24:27.633110633Z" level=info msg="StartContainer for \"7943fb3a00db37cff1b09b901b49d01c1cb43b270ba4ee577fb7b985cac75c12\"" Apr 30 03:24:27.635180 kubelet[2108]: E0430 03:24:27.635151 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:27.637108 containerd[1457]: time="2025-04-30T03:24:27.637077545Z" level=info msg="CreateContainer within sandbox \"aeb700b527f0c849a7d595b90aad0bc9c48ffe7a53cbb6b45f6612df10908104\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:24:27.648465 containerd[1457]: time="2025-04-30T03:24:27.648419416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-c-cb9001cac8,Uid:29b2eaa1179512a69e978db13ed01a32,Namespace:kube-system,Attempt:0,} returns sandbox id \"9163602c679eac3d317a7c3e60e4936297fc5be4e14559f93768c8f00edbd86f\"" Apr 30 03:24:27.649866 kubelet[2108]: E0430 03:24:27.649833 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:27.652690 containerd[1457]: time="2025-04-30T03:24:27.652619490Z" level=info msg="CreateContainer within sandbox 
\"9163602c679eac3d317a7c3e60e4936297fc5be4e14559f93768c8f00edbd86f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:24:27.654799 containerd[1457]: time="2025-04-30T03:24:27.654584363Z" level=info msg="CreateContainer within sandbox \"aeb700b527f0c849a7d595b90aad0bc9c48ffe7a53cbb6b45f6612df10908104\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2746d221baf38cf50d3cd9a1334151e4359a83fde23675ac7d92cd6f1f1e787a\"" Apr 30 03:24:27.658136 containerd[1457]: time="2025-04-30T03:24:27.657086037Z" level=info msg="StartContainer for \"2746d221baf38cf50d3cd9a1334151e4359a83fde23675ac7d92cd6f1f1e787a\"" Apr 30 03:24:27.674515 containerd[1457]: time="2025-04-30T03:24:27.674475421Z" level=info msg="CreateContainer within sandbox \"9163602c679eac3d317a7c3e60e4936297fc5be4e14559f93768c8f00edbd86f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"56ca236f25a4115317bc1dc5ae3cc917edec0a533dab2729216cdc79999bd52b\"" Apr 30 03:24:27.675718 containerd[1457]: time="2025-04-30T03:24:27.675599823Z" level=info msg="StartContainer for \"56ca236f25a4115317bc1dc5ae3cc917edec0a533dab2729216cdc79999bd52b\"" Apr 30 03:24:27.699514 systemd[1]: Started cri-containerd-7943fb3a00db37cff1b09b901b49d01c1cb43b270ba4ee577fb7b985cac75c12.scope - libcontainer container 7943fb3a00db37cff1b09b901b49d01c1cb43b270ba4ee577fb7b985cac75c12. Apr 30 03:24:27.718352 systemd[1]: Started cri-containerd-2746d221baf38cf50d3cd9a1334151e4359a83fde23675ac7d92cd6f1f1e787a.scope - libcontainer container 2746d221baf38cf50d3cd9a1334151e4359a83fde23675ac7d92cd6f1f1e787a. Apr 30 03:24:27.748179 systemd[1]: Started cri-containerd-56ca236f25a4115317bc1dc5ae3cc917edec0a533dab2729216cdc79999bd52b.scope - libcontainer container 56ca236f25a4115317bc1dc5ae3cc917edec0a533dab2729216cdc79999bd52b. 
Apr 30 03:24:27.770682 kubelet[2108]: E0430 03:24:27.770620 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.154.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-c-cb9001cac8?timeout=10s\": dial tcp 209.38.154.103:6443: connect: connection refused" interval="1.6s" Apr 30 03:24:27.801135 containerd[1457]: time="2025-04-30T03:24:27.800955074Z" level=info msg="StartContainer for \"7943fb3a00db37cff1b09b901b49d01c1cb43b270ba4ee577fb7b985cac75c12\" returns successfully" Apr 30 03:24:27.814296 containerd[1457]: time="2025-04-30T03:24:27.813580215Z" level=info msg="StartContainer for \"2746d221baf38cf50d3cd9a1334151e4359a83fde23675ac7d92cd6f1f1e787a\" returns successfully" Apr 30 03:24:27.838773 containerd[1457]: time="2025-04-30T03:24:27.838266636Z" level=info msg="StartContainer for \"56ca236f25a4115317bc1dc5ae3cc917edec0a533dab2729216cdc79999bd52b\" returns successfully" Apr 30 03:24:27.930660 kubelet[2108]: W0430 03:24:27.930368 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.154.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-c-cb9001cac8&limit=500&resourceVersion=0": dial tcp 209.38.154.103:6443: connect: connection refused Apr 30 03:24:27.930660 kubelet[2108]: E0430 03:24:27.930472 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://209.38.154.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-c-cb9001cac8&limit=500&resourceVersion=0\": dial tcp 209.38.154.103:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:27.955178 kubelet[2108]: I0430 03:24:27.954518 2108 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:27.955178 kubelet[2108]: E0430 03:24:27.954900 2108 kubelet_node_status.go:95] "Unable to register node with API 
server" err="Post \"https://209.38.154.103:6443/api/v1/nodes\": dial tcp 209.38.154.103:6443: connect: connection refused" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:28.404757 kubelet[2108]: E0430 03:24:28.404633 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:28.409320 kubelet[2108]: E0430 03:24:28.408768 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:28.412422 kubelet[2108]: E0430 03:24:28.412395 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:29.417146 kubelet[2108]: E0430 03:24:29.417012 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:29.418515 kubelet[2108]: E0430 03:24:29.418427 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:29.556112 kubelet[2108]: I0430 03:24:29.556004 2108 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:29.963571 kubelet[2108]: E0430 03:24:29.963497 2108 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-c-cb9001cac8\" not found" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:30.159927 kubelet[2108]: I0430 03:24:30.159586 2108 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:30.159927 kubelet[2108]: E0430 
03:24:30.159657 2108 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.3-c-cb9001cac8\": node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:30.209311 kubelet[2108]: E0430 03:24:30.209258 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:30.310121 kubelet[2108]: E0430 03:24:30.309949 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:30.410597 kubelet[2108]: E0430 03:24:30.410540 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:30.420135 kubelet[2108]: E0430 03:24:30.420000 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:30.511728 kubelet[2108]: E0430 03:24:30.511650 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:30.612967 kubelet[2108]: E0430 03:24:30.612758 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:30.714079 kubelet[2108]: E0430 03:24:30.713995 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:30.815139 kubelet[2108]: E0430 03:24:30.815064 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:30.916021 kubelet[2108]: E0430 03:24:30.915606 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:31.016922 kubelet[2108]: E0430 
03:24:31.016381 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.3-c-cb9001cac8\" not found" Apr 30 03:24:31.334726 kubelet[2108]: I0430 03:24:31.333966 2108 apiserver.go:52] "Watching apiserver" Apr 30 03:24:31.370088 kubelet[2108]: I0430 03:24:31.370021 2108 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 03:24:32.435641 kubelet[2108]: W0430 03:24:32.435214 2108 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:24:32.435641 kubelet[2108]: E0430 03:24:32.435563 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:32.440176 systemd[1]: Reloading requested from client PID 2382 ('systemctl') (unit session-7.scope)... Apr 30 03:24:32.440197 systemd[1]: Reloading... Apr 30 03:24:32.541643 zram_generator::config[2419]: No configuration found. Apr 30 03:24:32.714873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:24:32.854385 systemd[1]: Reloading finished in 413 ms. Apr 30 03:24:32.905671 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:24:32.920420 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:24:32.920644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:24:32.930355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:24:33.085968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:24:33.098403 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:24:33.189387 kubelet[2472]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:24:33.190505 kubelet[2472]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:24:33.190505 kubelet[2472]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:24:33.194944 kubelet[2472]: I0430 03:24:33.194001 2472 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:24:33.209169 kubelet[2472]: I0430 03:24:33.209127 2472 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 03:24:33.209412 kubelet[2472]: I0430 03:24:33.209390 2472 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:24:33.209971 kubelet[2472]: I0430 03:24:33.209936 2472 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 03:24:33.211860 kubelet[2472]: I0430 03:24:33.211828 2472 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 03:24:33.222186 kubelet[2472]: I0430 03:24:33.222122 2472 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:24:33.228482 kubelet[2472]: E0430 03:24:33.228418 2472 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:24:33.228815 kubelet[2472]: I0430 03:24:33.228796 2472 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:24:33.232520 kubelet[2472]: I0430 03:24:33.232481 2472 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:24:33.232939 kubelet[2472]: I0430 03:24:33.232870 2472 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 03:24:33.233069 kubelet[2472]: I0430 03:24:33.233027 2472 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:24:33.233278 kubelet[2472]: I0430 03:24:33.233075 2472 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.3-c-cb9001cac8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:24:33.233278 kubelet[2472]: I0430 03:24:33.233274 2472 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:24:33.233457 kubelet[2472]: I0430 03:24:33.233284 2472 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 03:24:33.233457 kubelet[2472]: I0430 03:24:33.233322 2472 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:24:33.233562 kubelet[2472]: I0430 03:24:33.233474 2472 
kubelet.go:408] "Attempting to sync node with API server" Apr 30 03:24:33.233562 kubelet[2472]: I0430 03:24:33.233488 2472 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:24:33.234045 kubelet[2472]: I0430 03:24:33.234024 2472 kubelet.go:314] "Adding apiserver pod source" Apr 30 03:24:33.234045 kubelet[2472]: I0430 03:24:33.234049 2472 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:24:33.239592 kubelet[2472]: I0430 03:24:33.239554 2472 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:24:33.240053 kubelet[2472]: I0430 03:24:33.240028 2472 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:24:33.240683 kubelet[2472]: I0430 03:24:33.240654 2472 server.go:1269] "Started kubelet" Apr 30 03:24:33.249928 kubelet[2472]: I0430 03:24:33.249482 2472 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:24:33.264913 kubelet[2472]: I0430 03:24:33.264102 2472 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:24:33.266202 kubelet[2472]: I0430 03:24:33.265634 2472 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 03:24:33.268574 kubelet[2472]: I0430 03:24:33.267792 2472 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:24:33.272624 kubelet[2472]: I0430 03:24:33.267232 2472 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:24:33.272814 kubelet[2472]: I0430 03:24:33.272763 2472 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:24:33.276243 kubelet[2472]: I0430 03:24:33.276193 2472 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 
03:24:33.276844 kubelet[2472]: I0430 03:24:33.267185 2472 server.go:460] "Adding debug handlers to kubelet server" Apr 30 03:24:33.278827 kubelet[2472]: I0430 03:24:33.278797 2472 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:24:33.284999 kubelet[2472]: I0430 03:24:33.284768 2472 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:24:33.284999 kubelet[2472]: I0430 03:24:33.284930 2472 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:24:33.286923 kubelet[2472]: I0430 03:24:33.285493 2472 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:24:33.287874 kubelet[2472]: E0430 03:24:33.287790 2472 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:24:33.288603 kubelet[2472]: I0430 03:24:33.288159 2472 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:24:33.288757 kubelet[2472]: I0430 03:24:33.288747 2472 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:24:33.288823 kubelet[2472]: I0430 03:24:33.288815 2472 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 03:24:33.289140 kubelet[2472]: E0430 03:24:33.288988 2472 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:24:33.298793 kubelet[2472]: I0430 03:24:33.298545 2472 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:24:33.365949 kubelet[2472]: I0430 03:24:33.365784 2472 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:24:33.366700 kubelet[2472]: I0430 03:24:33.366616 2472 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:24:33.366700 kubelet[2472]: I0430 03:24:33.366689 2472 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:24:33.367096 kubelet[2472]: I0430 03:24:33.367049 2472 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:24:33.367096 kubelet[2472]: I0430 03:24:33.367072 2472 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:24:33.367096 kubelet[2472]: I0430 03:24:33.367100 2472 policy_none.go:49] "None policy: Start" Apr 30 03:24:33.369789 kubelet[2472]: I0430 03:24:33.369736 2472 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:24:33.369789 kubelet[2472]: I0430 03:24:33.369785 2472 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:24:33.371392 kubelet[2472]: I0430 03:24:33.371362 2472 state_mem.go:75] "Updated machine memory state" Apr 30 03:24:33.377944 kubelet[2472]: I0430 03:24:33.377875 2472 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:24:33.378206 kubelet[2472]: I0430 03:24:33.378184 2472 eviction_manager.go:189] 
"Eviction manager: starting control loop" Apr 30 03:24:33.378262 kubelet[2472]: I0430 03:24:33.378211 2472 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:24:33.379266 kubelet[2472]: I0430 03:24:33.379207 2472 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:24:33.409588 kubelet[2472]: W0430 03:24:33.409105 2472 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:24:33.409588 kubelet[2472]: E0430 03:24:33.409208 2472 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-c-cb9001cac8\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.409588 kubelet[2472]: W0430 03:24:33.409405 2472 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:24:33.412914 kubelet[2472]: W0430 03:24:33.412709 2472 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:24:33.444026 sudo[2506]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 03:24:33.444949 sudo[2506]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 03:24:33.479930 kubelet[2472]: I0430 03:24:33.479782 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b68acc47653b58fdaf7964f36f9059b1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-c-cb9001cac8\" (UID: \"b68acc47653b58fdaf7964f36f9059b1\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.479930 kubelet[2472]: I0430 03:24:33.479857 2472 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b68acc47653b58fdaf7964f36f9059b1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-c-cb9001cac8\" (UID: \"b68acc47653b58fdaf7964f36f9059b1\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.480137 kubelet[2472]: I0430 03:24:33.480003 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b68acc47653b58fdaf7964f36f9059b1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-c-cb9001cac8\" (UID: \"b68acc47653b58fdaf7964f36f9059b1\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.480137 kubelet[2472]: I0430 03:24:33.480059 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75b1d264b154291a71999d78dbb75796-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-c-cb9001cac8\" (UID: \"75b1d264b154291a71999d78dbb75796\") " pod="kube-system/kube-apiserver-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.480137 kubelet[2472]: I0430 03:24:33.480082 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75b1d264b154291a71999d78dbb75796-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-c-cb9001cac8\" (UID: \"75b1d264b154291a71999d78dbb75796\") " pod="kube-system/kube-apiserver-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.480137 kubelet[2472]: I0430 03:24:33.480104 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b68acc47653b58fdaf7964f36f9059b1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-c-cb9001cac8\" (UID: 
\"b68acc47653b58fdaf7964f36f9059b1\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.480137 kubelet[2472]: I0430 03:24:33.480124 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b68acc47653b58fdaf7964f36f9059b1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-c-cb9001cac8\" (UID: \"b68acc47653b58fdaf7964f36f9059b1\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.480286 kubelet[2472]: I0430 03:24:33.480141 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29b2eaa1179512a69e978db13ed01a32-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-c-cb9001cac8\" (UID: \"29b2eaa1179512a69e978db13ed01a32\") " pod="kube-system/kube-scheduler-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.480286 kubelet[2472]: I0430 03:24:33.480167 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75b1d264b154291a71999d78dbb75796-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-c-cb9001cac8\" (UID: \"75b1d264b154291a71999d78dbb75796\") " pod="kube-system/kube-apiserver-ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.487949 kubelet[2472]: I0430 03:24:33.487429 2472 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.505172 kubelet[2472]: I0430 03:24:33.504747 2472 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.505172 kubelet[2472]: I0430 03:24:33.504871 2472 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.3-c-cb9001cac8" Apr 30 03:24:33.710749 kubelet[2472]: E0430 03:24:33.710597 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:33.711672 kubelet[2472]: E0430 03:24:33.711602 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:33.715558 kubelet[2472]: E0430 03:24:33.714037 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:34.084206 sudo[2506]: pam_unix(sudo:session): session closed for user root Apr 30 03:24:34.237922 kubelet[2472]: I0430 03:24:34.236299 2472 apiserver.go:52] "Watching apiserver" Apr 30 03:24:34.278113 kubelet[2472]: I0430 03:24:34.277450 2472 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 03:24:34.342060 kubelet[2472]: E0430 03:24:34.339462 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:34.342060 kubelet[2472]: E0430 03:24:34.339740 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:34.342060 kubelet[2472]: E0430 03:24:34.340102 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:34.416912 kubelet[2472]: I0430 03:24:34.415662 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-c-cb9001cac8" podStartSLOduration=1.415639454 podStartE2EDuration="1.415639454s" podCreationTimestamp="2025-04-30 03:24:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:24:34.415396727 +0000 UTC m=+1.307045134" watchObservedRunningTime="2025-04-30 03:24:34.415639454 +0000 UTC m=+1.307287861" Apr 30 03:24:34.417350 kubelet[2472]: I0430 03:24:34.417311 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-c-cb9001cac8" podStartSLOduration=2.417293368 podStartE2EDuration="2.417293368s" podCreationTimestamp="2025-04-30 03:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:24:34.3854494 +0000 UTC m=+1.277097968" watchObservedRunningTime="2025-04-30 03:24:34.417293368 +0000 UTC m=+1.308941767" Apr 30 03:24:34.475161 kubelet[2472]: I0430 03:24:34.475088 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-c-cb9001cac8" podStartSLOduration=1.475064506 podStartE2EDuration="1.475064506s" podCreationTimestamp="2025-04-30 03:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:24:34.446158676 +0000 UTC m=+1.337807079" watchObservedRunningTime="2025-04-30 03:24:34.475064506 +0000 UTC m=+1.366712955" Apr 30 03:24:35.343202 kubelet[2472]: E0430 03:24:35.342689 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:35.871474 sudo[1643]: pam_unix(sudo:session): session closed for user root Apr 30 03:24:35.877788 sshd[1640]: pam_unix(sshd:session): session closed for user core Apr 30 03:24:35.882213 systemd[1]: sshd@6-209.38.154.103:22-139.178.89.65:43854.service: Deactivated successfully. 
Apr 30 03:24:35.885868 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:24:35.886314 systemd[1]: session-7.scope: Consumed 4.940s CPU time, 145.3M memory peak, 0B memory swap peak. Apr 30 03:24:35.888873 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:24:35.890479 systemd-logind[1438]: Removed session 7. Apr 30 03:24:36.346454 kubelet[2472]: E0430 03:24:36.345381 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:38.054792 kubelet[2472]: E0430 03:24:38.054719 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:38.350277 kubelet[2472]: E0430 03:24:38.349260 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:38.891010 kubelet[2472]: I0430 03:24:38.890895 2472 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:24:38.891744 containerd[1457]: time="2025-04-30T03:24:38.891646334Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:24:38.892212 kubelet[2472]: I0430 03:24:38.891944 2472 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:24:39.626011 systemd[1]: Created slice kubepods-besteffort-pod5c09298a_629f_4248_88a3_0b52bc619983.slice - libcontainer container kubepods-besteffort-pod5c09298a_629f_4248_88a3_0b52bc619983.slice. 
Apr 30 03:24:39.631602 kubelet[2472]: I0430 03:24:39.630853 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c09298a-629f-4248-88a3-0b52bc619983-kube-proxy\") pod \"kube-proxy-z2pf8\" (UID: \"5c09298a-629f-4248-88a3-0b52bc619983\") " pod="kube-system/kube-proxy-z2pf8" Apr 30 03:24:39.632189 kubelet[2472]: I0430 03:24:39.632061 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c09298a-629f-4248-88a3-0b52bc619983-xtables-lock\") pod \"kube-proxy-z2pf8\" (UID: \"5c09298a-629f-4248-88a3-0b52bc619983\") " pod="kube-system/kube-proxy-z2pf8" Apr 30 03:24:39.632189 kubelet[2472]: I0430 03:24:39.632099 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c09298a-629f-4248-88a3-0b52bc619983-lib-modules\") pod \"kube-proxy-z2pf8\" (UID: \"5c09298a-629f-4248-88a3-0b52bc619983\") " pod="kube-system/kube-proxy-z2pf8" Apr 30 03:24:39.632189 kubelet[2472]: I0430 03:24:39.632123 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldcfw\" (UniqueName: \"kubernetes.io/projected/5c09298a-629f-4248-88a3-0b52bc619983-kube-api-access-ldcfw\") pod \"kube-proxy-z2pf8\" (UID: \"5c09298a-629f-4248-88a3-0b52bc619983\") " pod="kube-system/kube-proxy-z2pf8" Apr 30 03:24:39.641348 systemd[1]: Created slice kubepods-burstable-podb0b80ed8_4137_48c6_9b28_125ecf526192.slice - libcontainer container kubepods-burstable-podb0b80ed8_4137_48c6_9b28_125ecf526192.slice. 
Apr 30 03:24:39.733265 kubelet[2472]: I0430 03:24:39.732728 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-hostproc\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733265 kubelet[2472]: I0430 03:24:39.732826 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-host-proc-sys-kernel\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733265 kubelet[2472]: I0430 03:24:39.732863 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-lib-modules\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733265 kubelet[2472]: I0430 03:24:39.732911 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-host-proc-sys-net\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733265 kubelet[2472]: I0430 03:24:39.732938 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c4np\" (UniqueName: \"kubernetes.io/projected/b0b80ed8-4137-48c6-9b28-125ecf526192-kube-api-access-9c4np\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733265 kubelet[2472]: I0430 03:24:39.732964 2472 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-etc-cni-netd\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733580 kubelet[2472]: I0430 03:24:39.732992 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-config-path\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733580 kubelet[2472]: I0430 03:24:39.733034 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-run\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733580 kubelet[2472]: I0430 03:24:39.733060 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-bpf-maps\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733580 kubelet[2472]: I0430 03:24:39.733117 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-cgroup\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733580 kubelet[2472]: I0430 03:24:39.733146 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-xtables-lock\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733580 kubelet[2472]: I0430 03:24:39.733187 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cni-path\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733994 kubelet[2472]: I0430 03:24:39.733212 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0b80ed8-4137-48c6-9b28-125ecf526192-clustermesh-secrets\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.733994 kubelet[2472]: I0430 03:24:39.733240 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0b80ed8-4137-48c6-9b28-125ecf526192-hubble-tls\") pod \"cilium-wkhqb\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " pod="kube-system/cilium-wkhqb" Apr 30 03:24:39.920038 systemd[1]: Created slice kubepods-besteffort-pod6365ecee_d3a4_469e_a7d8_03638203f650.slice - libcontainer container kubepods-besteffort-pod6365ecee_d3a4_469e_a7d8_03638203f650.slice. 
Apr 30 03:24:39.935672 kubelet[2472]: I0430 03:24:39.935598 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkzrc\" (UniqueName: \"kubernetes.io/projected/6365ecee-d3a4-469e-a7d8-03638203f650-kube-api-access-bkzrc\") pod \"cilium-operator-5d85765b45-bhz7h\" (UID: \"6365ecee-d3a4-469e-a7d8-03638203f650\") " pod="kube-system/cilium-operator-5d85765b45-bhz7h" Apr 30 03:24:39.935672 kubelet[2472]: I0430 03:24:39.935669 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6365ecee-d3a4-469e-a7d8-03638203f650-cilium-config-path\") pod \"cilium-operator-5d85765b45-bhz7h\" (UID: \"6365ecee-d3a4-469e-a7d8-03638203f650\") " pod="kube-system/cilium-operator-5d85765b45-bhz7h" Apr 30 03:24:39.938439 kubelet[2472]: E0430 03:24:39.936809 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:39.940593 containerd[1457]: time="2025-04-30T03:24:39.940492792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2pf8,Uid:5c09298a-629f-4248-88a3-0b52bc619983,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:39.949814 kubelet[2472]: E0430 03:24:39.946962 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:39.951291 containerd[1457]: time="2025-04-30T03:24:39.950816969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wkhqb,Uid:b0b80ed8-4137-48c6-9b28-125ecf526192,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:40.020481 containerd[1457]: time="2025-04-30T03:24:40.020099867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:40.020481 containerd[1457]: time="2025-04-30T03:24:40.020216385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:40.020481 containerd[1457]: time="2025-04-30T03:24:40.020242430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:40.020481 containerd[1457]: time="2025-04-30T03:24:40.020396412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:40.028331 containerd[1457]: time="2025-04-30T03:24:40.027549396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:40.028683 containerd[1457]: time="2025-04-30T03:24:40.028361213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:40.028683 containerd[1457]: time="2025-04-30T03:24:40.028534560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:40.030108 containerd[1457]: time="2025-04-30T03:24:40.028840462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:40.066178 systemd[1]: Started cri-containerd-91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550.scope - libcontainer container 91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550. Apr 30 03:24:40.072285 systemd[1]: Started cri-containerd-27db3d46c6e03bb15d9ce952e5df5d818b759050b4bf31f7823a34bd0f3a7f1f.scope - libcontainer container 27db3d46c6e03bb15d9ce952e5df5d818b759050b4bf31f7823a34bd0f3a7f1f. 
Apr 30 03:24:40.128952 containerd[1457]: time="2025-04-30T03:24:40.128840331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wkhqb,Uid:b0b80ed8-4137-48c6-9b28-125ecf526192,Namespace:kube-system,Attempt:0,} returns sandbox id \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\"" Apr 30 03:24:40.133094 kubelet[2472]: E0430 03:24:40.133045 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:40.136278 containerd[1457]: time="2025-04-30T03:24:40.136232221Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 03:24:40.144055 containerd[1457]: time="2025-04-30T03:24:40.143852286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2pf8,Uid:5c09298a-629f-4248-88a3-0b52bc619983,Namespace:kube-system,Attempt:0,} returns sandbox id \"27db3d46c6e03bb15d9ce952e5df5d818b759050b4bf31f7823a34bd0f3a7f1f\"" Apr 30 03:24:40.146822 kubelet[2472]: E0430 03:24:40.146738 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:40.150253 containerd[1457]: time="2025-04-30T03:24:40.150102598Z" level=info msg="CreateContainer within sandbox \"27db3d46c6e03bb15d9ce952e5df5d818b759050b4bf31f7823a34bd0f3a7f1f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:24:40.173219 containerd[1457]: time="2025-04-30T03:24:40.173035323Z" level=info msg="CreateContainer within sandbox \"27db3d46c6e03bb15d9ce952e5df5d818b759050b4bf31f7823a34bd0f3a7f1f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4401706a9f74a567ec12d1a170bba0fefa8c419da248e9d9b14a39837e393b4d\"" Apr 30 03:24:40.175469 containerd[1457]: 
time="2025-04-30T03:24:40.175418467Z" level=info msg="StartContainer for \"4401706a9f74a567ec12d1a170bba0fefa8c419da248e9d9b14a39837e393b4d\"" Apr 30 03:24:40.231461 kubelet[2472]: E0430 03:24:40.230879 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:40.232311 containerd[1457]: time="2025-04-30T03:24:40.232250849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bhz7h,Uid:6365ecee-d3a4-469e-a7d8-03638203f650,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:40.233205 systemd[1]: Started cri-containerd-4401706a9f74a567ec12d1a170bba0fefa8c419da248e9d9b14a39837e393b4d.scope - libcontainer container 4401706a9f74a567ec12d1a170bba0fefa8c419da248e9d9b14a39837e393b4d. Apr 30 03:24:40.297475 containerd[1457]: time="2025-04-30T03:24:40.295611216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:40.297872 containerd[1457]: time="2025-04-30T03:24:40.297521956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:40.297872 containerd[1457]: time="2025-04-30T03:24:40.297747304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:40.298371 containerd[1457]: time="2025-04-30T03:24:40.298305267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:40.307537 containerd[1457]: time="2025-04-30T03:24:40.307467582Z" level=info msg="StartContainer for \"4401706a9f74a567ec12d1a170bba0fefa8c419da248e9d9b14a39837e393b4d\" returns successfully" Apr 30 03:24:40.342597 systemd[1]: Started cri-containerd-f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8.scope - libcontainer container f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8. Apr 30 03:24:40.365711 kubelet[2472]: E0430 03:24:40.365572 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:40.447633 containerd[1457]: time="2025-04-30T03:24:40.446097718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bhz7h,Uid:6365ecee-d3a4-469e-a7d8-03638203f650,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\"" Apr 30 03:24:40.449061 kubelet[2472]: E0430 03:24:40.448799 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:42.579876 kubelet[2472]: E0430 03:24:42.579836 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:42.604476 kubelet[2472]: I0430 03:24:42.604404 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z2pf8" podStartSLOduration=3.604379326 podStartE2EDuration="3.604379326s" podCreationTimestamp="2025-04-30 03:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 
03:24:40.392129934 +0000 UTC m=+7.283778335" watchObservedRunningTime="2025-04-30 03:24:42.604379326 +0000 UTC m=+9.496027732" Apr 30 03:24:43.377846 kubelet[2472]: E0430 03:24:43.377792 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:44.806171 kubelet[2472]: E0430 03:24:44.805749 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:46.231779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3001763968.mount: Deactivated successfully. Apr 30 03:24:48.846027 containerd[1457]: time="2025-04-30T03:24:48.845735834Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:48.850978 containerd[1457]: time="2025-04-30T03:24:48.850684326Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.714399923s" Apr 30 03:24:48.850978 containerd[1457]: time="2025-04-30T03:24:48.850746483Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 03:24:48.866938 containerd[1457]: time="2025-04-30T03:24:48.866324939Z" level=info msg="stop pulling image 
quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 03:24:48.868792 containerd[1457]: time="2025-04-30T03:24:48.868744068Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:48.871366 containerd[1457]: time="2025-04-30T03:24:48.871250622Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 03:24:48.876455 containerd[1457]: time="2025-04-30T03:24:48.876397783Z" level=info msg="CreateContainer within sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 03:24:48.968608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192020627.mount: Deactivated successfully. Apr 30 03:24:48.975485 containerd[1457]: time="2025-04-30T03:24:48.975435157Z" level=info msg="CreateContainer within sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\"" Apr 30 03:24:48.977137 containerd[1457]: time="2025-04-30T03:24:48.976174200Z" level=info msg="StartContainer for \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\"" Apr 30 03:24:49.072555 systemd[1]: run-containerd-runc-k8s.io-67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b-runc.thRJ2a.mount: Deactivated successfully. Apr 30 03:24:49.080127 systemd[1]: Started cri-containerd-67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b.scope - libcontainer container 67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b. 
Apr 30 03:24:49.114799 containerd[1457]: time="2025-04-30T03:24:49.114212293Z" level=info msg="StartContainer for \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\" returns successfully" Apr 30 03:24:49.128106 systemd[1]: cri-containerd-67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b.scope: Deactivated successfully. Apr 30 03:24:49.277238 containerd[1457]: time="2025-04-30T03:24:49.243052231Z" level=info msg="shim disconnected" id=67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b namespace=k8s.io Apr 30 03:24:49.277910 containerd[1457]: time="2025-04-30T03:24:49.277521556Z" level=warning msg="cleaning up after shim disconnected" id=67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b namespace=k8s.io Apr 30 03:24:49.277910 containerd[1457]: time="2025-04-30T03:24:49.277547217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:24:49.393955 kubelet[2472]: E0430 03:24:49.393788 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:49.399561 containerd[1457]: time="2025-04-30T03:24:49.399506350Z" level=info msg="CreateContainer within sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 03:24:49.415165 containerd[1457]: time="2025-04-30T03:24:49.415000668Z" level=info msg="CreateContainer within sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\"" Apr 30 03:24:49.416061 containerd[1457]: time="2025-04-30T03:24:49.416021472Z" level=info msg="StartContainer for \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\"" Apr 30 03:24:49.460190 systemd[1]: 
Started cri-containerd-ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267.scope - libcontainer container ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267. Apr 30 03:24:49.499911 containerd[1457]: time="2025-04-30T03:24:49.499749086Z" level=info msg="StartContainer for \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\" returns successfully" Apr 30 03:24:49.511544 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:24:49.512019 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:24:49.512156 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:24:49.520504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:24:49.527365 systemd[1]: cri-containerd-ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267.scope: Deactivated successfully. Apr 30 03:24:49.558260 containerd[1457]: time="2025-04-30T03:24:49.558193709Z" level=info msg="shim disconnected" id=ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267 namespace=k8s.io Apr 30 03:24:49.558655 containerd[1457]: time="2025-04-30T03:24:49.558629283Z" level=warning msg="cleaning up after shim disconnected" id=ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267 namespace=k8s.io Apr 30 03:24:49.558953 containerd[1457]: time="2025-04-30T03:24:49.558687483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:24:49.574729 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:24:49.963521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b-rootfs.mount: Deactivated successfully. Apr 30 03:24:50.306210 update_engine[1439]: I20250430 03:24:50.304934 1439 update_attempter.cc:509] Updating boot flags... 
Apr 30 03:24:50.364301 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2995) Apr 30 03:24:50.402676 kubelet[2472]: E0430 03:24:50.402622 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:50.408853 containerd[1457]: time="2025-04-30T03:24:50.408362614Z" level=info msg="CreateContainer within sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 03:24:50.491335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1379825338.mount: Deactivated successfully. Apr 30 03:24:50.507716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543731530.mount: Deactivated successfully. Apr 30 03:24:50.529404 containerd[1457]: time="2025-04-30T03:24:50.529231214Z" level=info msg="CreateContainer within sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\"" Apr 30 03:24:50.533980 containerd[1457]: time="2025-04-30T03:24:50.533400755Z" level=info msg="StartContainer for \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\"" Apr 30 03:24:50.577246 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2999) Apr 30 03:24:50.676125 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2999) Apr 30 03:24:50.749719 systemd[1]: Started cri-containerd-7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612.scope - libcontainer container 7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612. 
Apr 30 03:24:50.831244 containerd[1457]: time="2025-04-30T03:24:50.828399653Z" level=info msg="StartContainer for \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\" returns successfully" Apr 30 03:24:50.830256 systemd[1]: cri-containerd-7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612.scope: Deactivated successfully. Apr 30 03:24:50.868379 containerd[1457]: time="2025-04-30T03:24:50.868265690Z" level=info msg="shim disconnected" id=7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612 namespace=k8s.io Apr 30 03:24:50.868379 containerd[1457]: time="2025-04-30T03:24:50.868345384Z" level=warning msg="cleaning up after shim disconnected" id=7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612 namespace=k8s.io Apr 30 03:24:50.868379 containerd[1457]: time="2025-04-30T03:24:50.868359221Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:24:50.963502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612-rootfs.mount: Deactivated successfully. 
Apr 30 03:24:51.408525 kubelet[2472]: E0430 03:24:51.407995 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:24:51.413828 containerd[1457]: time="2025-04-30T03:24:51.413494854Z" level=info msg="CreateContainer within sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 03:24:51.434455 containerd[1457]: time="2025-04-30T03:24:51.434312682Z" level=info msg="CreateContainer within sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\"" Apr 30 03:24:51.435000 containerd[1457]: time="2025-04-30T03:24:51.434961323Z" level=info msg="StartContainer for \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\"" Apr 30 03:24:51.488226 systemd[1]: Started cri-containerd-61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3.scope - libcontainer container 61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3. Apr 30 03:24:51.531089 systemd[1]: cri-containerd-61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3.scope: Deactivated successfully. 
Apr 30 03:24:51.539010 containerd[1457]: time="2025-04-30T03:24:51.537030764Z" level=info msg="StartContainer for \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\" returns successfully"
Apr 30 03:24:51.570418 containerd[1457]: time="2025-04-30T03:24:51.570338687Z" level=info msg="shim disconnected" id=61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3 namespace=k8s.io
Apr 30 03:24:51.570819 containerd[1457]: time="2025-04-30T03:24:51.570785441Z" level=warning msg="cleaning up after shim disconnected" id=61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3 namespace=k8s.io
Apr 30 03:24:51.570987 containerd[1457]: time="2025-04-30T03:24:51.570962913Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:24:51.964063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3-rootfs.mount: Deactivated successfully.
Apr 30 03:24:52.420404 kubelet[2472]: E0430 03:24:52.420358 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:24:52.429309 containerd[1457]: time="2025-04-30T03:24:52.428030152Z" level=info msg="CreateContainer within sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 03:24:52.488051 containerd[1457]: time="2025-04-30T03:24:52.487430896Z" level=info msg="CreateContainer within sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\""
Apr 30 03:24:52.488306 containerd[1457]: time="2025-04-30T03:24:52.488176729Z" level=info msg="StartContainer for \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\""
Apr 30 03:24:52.557230 systemd[1]: Started cri-containerd-5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091.scope - libcontainer container 5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091.
Apr 30 03:24:52.618512 containerd[1457]: time="2025-04-30T03:24:52.618011463Z" level=info msg="StartContainer for \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\" returns successfully"
Apr 30 03:24:52.877492 kubelet[2472]: I0430 03:24:52.877050 2472 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Apr 30 03:24:52.935067 systemd[1]: Created slice kubepods-burstable-pod96133b2f_565b_4ab8_9cd2_3faf30017bc7.slice - libcontainer container kubepods-burstable-pod96133b2f_565b_4ab8_9cd2_3faf30017bc7.slice.
Apr 30 03:24:52.943326 kubelet[2472]: I0430 03:24:52.943227 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/770d8c9b-faa8-451c-9950-478096def294-config-volume\") pod \"coredns-6f6b679f8f-qg7vn\" (UID: \"770d8c9b-faa8-451c-9950-478096def294\") " pod="kube-system/coredns-6f6b679f8f-qg7vn"
Apr 30 03:24:52.943745 kubelet[2472]: I0430 03:24:52.943719 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdvjx\" (UniqueName: \"kubernetes.io/projected/96133b2f-565b-4ab8-9cd2-3faf30017bc7-kube-api-access-kdvjx\") pod \"coredns-6f6b679f8f-lg8fn\" (UID: \"96133b2f-565b-4ab8-9cd2-3faf30017bc7\") " pod="kube-system/coredns-6f6b679f8f-lg8fn"
Apr 30 03:24:52.944017 kubelet[2472]: I0430 03:24:52.943972 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24nk\" (UniqueName: \"kubernetes.io/projected/770d8c9b-faa8-451c-9950-478096def294-kube-api-access-x24nk\") pod \"coredns-6f6b679f8f-qg7vn\" (UID: \"770d8c9b-faa8-451c-9950-478096def294\") " pod="kube-system/coredns-6f6b679f8f-qg7vn"
Apr 30 03:24:52.944250 kubelet[2472]: I0430 03:24:52.944233 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96133b2f-565b-4ab8-9cd2-3faf30017bc7-config-volume\") pod \"coredns-6f6b679f8f-lg8fn\" (UID: \"96133b2f-565b-4ab8-9cd2-3faf30017bc7\") " pod="kube-system/coredns-6f6b679f8f-lg8fn"
Apr 30 03:24:52.949393 systemd[1]: Created slice kubepods-burstable-pod770d8c9b_faa8_451c_9950_478096def294.slice - libcontainer container kubepods-burstable-pod770d8c9b_faa8_451c_9950_478096def294.slice.
Apr 30 03:24:53.027625 containerd[1457]: time="2025-04-30T03:24:53.026981257Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:53.029720 containerd[1457]: time="2025-04-30T03:24:53.029503009Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 30 03:24:53.032963 containerd[1457]: time="2025-04-30T03:24:53.032904020Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:24:53.036921 containerd[1457]: time="2025-04-30T03:24:53.036752124Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.165429732s"
Apr 30 03:24:53.037414 containerd[1457]: time="2025-04-30T03:24:53.037231257Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 30 03:24:53.045558 containerd[1457]: time="2025-04-30T03:24:53.045243025Z" level=info msg="CreateContainer within sandbox \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 30 03:24:53.071594 containerd[1457]: time="2025-04-30T03:24:53.070490389Z" level=info msg="CreateContainer within sandbox \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\""
Apr 30 03:24:53.074008 containerd[1457]: time="2025-04-30T03:24:53.073922708Z" level=info msg="StartContainer for \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\""
Apr 30 03:24:53.158613 systemd[1]: Started cri-containerd-6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3.scope - libcontainer container 6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3.
Apr 30 03:24:53.244467 kubelet[2472]: E0430 03:24:53.244414 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:24:53.248179 containerd[1457]: time="2025-04-30T03:24:53.248097558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lg8fn,Uid:96133b2f-565b-4ab8-9cd2-3faf30017bc7,Namespace:kube-system,Attempt:0,}"
Apr 30 03:24:53.254951 kubelet[2472]: E0430 03:24:53.253737 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:24:53.257109 containerd[1457]: time="2025-04-30T03:24:53.256465227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qg7vn,Uid:770d8c9b-faa8-451c-9950-478096def294,Namespace:kube-system,Attempt:0,}"
Apr 30 03:24:53.300952 containerd[1457]: time="2025-04-30T03:24:53.300874956Z" level=info msg="StartContainer for \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\" returns successfully"
Apr 30 03:24:53.468897 kubelet[2472]: E0430 03:24:53.468730 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:24:53.475812 kubelet[2472]: E0430 03:24:53.475763 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:24:53.530396 kubelet[2472]: I0430 03:24:53.530317 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wkhqb" podStartSLOduration=5.794991269 podStartE2EDuration="14.53029254s" podCreationTimestamp="2025-04-30 03:24:39 +0000 UTC" firstStartedPulling="2025-04-30 03:24:40.134234177 +0000 UTC m=+7.025882595" lastFinishedPulling="2025-04-30 03:24:48.869535461 +0000 UTC m=+15.761183866" observedRunningTime="2025-04-30 03:24:53.507077467 +0000 UTC m=+20.398725874" watchObservedRunningTime="2025-04-30 03:24:53.53029254 +0000 UTC m=+20.421940949"
Apr 30 03:24:54.479816 kubelet[2472]: E0430 03:24:54.479763 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:24:54.480524 kubelet[2472]: E0430 03:24:54.480168 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:24:55.480370 kubelet[2472]: E0430 03:24:55.480292 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:24:57.315507 systemd-networkd[1365]: cilium_host: Link UP
Apr 30 03:24:57.315731 systemd-networkd[1365]: cilium_net: Link UP
Apr 30 03:24:57.315735 systemd-networkd[1365]: cilium_net: Gained carrier
Apr 30 03:24:57.316691 systemd-networkd[1365]: cilium_host: Gained carrier
Apr 30 03:24:57.425176 systemd-networkd[1365]: cilium_net: Gained IPv6LL
Apr 30 03:24:57.475354 systemd-networkd[1365]: cilium_vxlan: Link UP
Apr 30 03:24:57.475367 systemd-networkd[1365]: cilium_vxlan: Gained carrier
Apr 30 03:24:57.917923 kernel: NET: Registered PF_ALG protocol family
Apr 30 03:24:58.266681 systemd-networkd[1365]: cilium_host: Gained IPv6LL
Apr 30 03:24:58.818277 systemd-networkd[1365]: lxc_health: Link UP
Apr 30 03:24:58.842150 systemd-networkd[1365]: lxc_health: Gained carrier
Apr 30 03:24:59.162019 systemd-networkd[1365]: cilium_vxlan: Gained IPv6LL
Apr 30 03:24:59.435574 kernel: eth0: renamed from tmpd7ca5
Apr 30 03:24:59.442346 systemd-networkd[1365]: lxc601cd1529011: Link UP
Apr 30 03:24:59.447801 systemd-networkd[1365]: lxc601cd1529011: Gained carrier
Apr 30 03:24:59.461967 systemd-networkd[1365]: lxc1c58f71c02ea: Link UP
Apr 30 03:24:59.468961 kernel: eth0: renamed from tmp75e30
Apr 30 03:24:59.469457 systemd-networkd[1365]: lxc1c58f71c02ea: Gained carrier
Apr 30 03:24:59.951792 kubelet[2472]: E0430 03:24:59.950717 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:24:59.981314 kubelet[2472]: I0430 03:24:59.980467 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-bhz7h" podStartSLOduration=8.390648157 podStartE2EDuration="20.98044949s" podCreationTimestamp="2025-04-30 03:24:39 +0000 UTC" firstStartedPulling="2025-04-30 03:24:40.449944523 +0000 UTC m=+7.341592910" lastFinishedPulling="2025-04-30 03:24:53.039745842 +0000 UTC m=+19.931394243" observedRunningTime="2025-04-30 03:24:53.530921777 +0000 UTC m=+20.422570186" watchObservedRunningTime="2025-04-30 03:24:59.98044949 +0000 UTC m=+26.872097898"
Apr 30 03:24:59.993153 systemd-networkd[1365]: lxc_health: Gained IPv6LL
Apr 30 03:25:01.337432 systemd-networkd[1365]: lxc601cd1529011: Gained IPv6LL
Apr 30 03:25:01.339062 systemd-networkd[1365]: lxc1c58f71c02ea: Gained IPv6LL
Apr 30 03:25:02.946788 kubelet[2472]: I0430 03:25:02.945206 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 03:25:02.946788 kubelet[2472]: E0430 03:25:02.946174 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:03.506754 kubelet[2472]: E0430 03:25:03.506701 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:05.667961 containerd[1457]: time="2025-04-30T03:25:05.667179717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:25:05.668567 containerd[1457]: time="2025-04-30T03:25:05.667483566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:25:05.668567 containerd[1457]: time="2025-04-30T03:25:05.667547800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:25:05.668567 containerd[1457]: time="2025-04-30T03:25:05.667783765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:25:05.688945 containerd[1457]: time="2025-04-30T03:25:05.683832274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:25:05.688945 containerd[1457]: time="2025-04-30T03:25:05.684075988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:25:05.688945 containerd[1457]: time="2025-04-30T03:25:05.684151858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:25:05.688945 containerd[1457]: time="2025-04-30T03:25:05.684343110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:25:05.724558 systemd[1]: run-containerd-runc-k8s.io-d7ca5db6f7c0232cd68275730d503793fd63b26385e08a52fb29262a2e23383b-runc.UNOLO6.mount: Deactivated successfully.
Apr 30 03:25:05.750912 systemd[1]: Started cri-containerd-d7ca5db6f7c0232cd68275730d503793fd63b26385e08a52fb29262a2e23383b.scope - libcontainer container d7ca5db6f7c0232cd68275730d503793fd63b26385e08a52fb29262a2e23383b.
Apr 30 03:25:05.764240 systemd[1]: Started cri-containerd-75e3012c50258d8209337d785a438387f9466d3b2a2e53e44b313a45da417daa.scope - libcontainer container 75e3012c50258d8209337d785a438387f9466d3b2a2e53e44b313a45da417daa.
Apr 30 03:25:05.872663 containerd[1457]: time="2025-04-30T03:25:05.872574776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lg8fn,Uid:96133b2f-565b-4ab8-9cd2-3faf30017bc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7ca5db6f7c0232cd68275730d503793fd63b26385e08a52fb29262a2e23383b\""
Apr 30 03:25:05.876232 kubelet[2472]: E0430 03:25:05.875323 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:05.883841 containerd[1457]: time="2025-04-30T03:25:05.881747209Z" level=info msg="CreateContainer within sandbox \"d7ca5db6f7c0232cd68275730d503793fd63b26385e08a52fb29262a2e23383b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 03:25:05.927845 containerd[1457]: time="2025-04-30T03:25:05.926998799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qg7vn,Uid:770d8c9b-faa8-451c-9950-478096def294,Namespace:kube-system,Attempt:0,} returns sandbox id \"75e3012c50258d8209337d785a438387f9466d3b2a2e53e44b313a45da417daa\""
Apr 30 03:25:05.936354 kubelet[2472]: E0430 03:25:05.936310 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:05.945853 containerd[1457]: time="2025-04-30T03:25:05.945726414Z" level=info msg="CreateContainer within sandbox \"75e3012c50258d8209337d785a438387f9466d3b2a2e53e44b313a45da417daa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 03:25:05.972102 containerd[1457]: time="2025-04-30T03:25:05.971730720Z" level=info msg="CreateContainer within sandbox \"d7ca5db6f7c0232cd68275730d503793fd63b26385e08a52fb29262a2e23383b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef6f07e57780c30484942dbbe5a16bb74dfcfe06fe27041cdf75f87e5c80a1a9\""
Apr 30 03:25:05.973769 containerd[1457]: time="2025-04-30T03:25:05.973654679Z" level=info msg="StartContainer for \"ef6f07e57780c30484942dbbe5a16bb74dfcfe06fe27041cdf75f87e5c80a1a9\""
Apr 30 03:25:05.974906 containerd[1457]: time="2025-04-30T03:25:05.974832725Z" level=info msg="CreateContainer within sandbox \"75e3012c50258d8209337d785a438387f9466d3b2a2e53e44b313a45da417daa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2161a08d007b7a6b6a0c6e92bc56456a31cc01346a99468d92bd4f106fe4e14\""
Apr 30 03:25:05.977547 containerd[1457]: time="2025-04-30T03:25:05.977144680Z" level=info msg="StartContainer for \"f2161a08d007b7a6b6a0c6e92bc56456a31cc01346a99468d92bd4f106fe4e14\""
Apr 30 03:25:06.053918 systemd[1]: Started cri-containerd-ef6f07e57780c30484942dbbe5a16bb74dfcfe06fe27041cdf75f87e5c80a1a9.scope - libcontainer container ef6f07e57780c30484942dbbe5a16bb74dfcfe06fe27041cdf75f87e5c80a1a9.
Apr 30 03:25:06.073235 systemd[1]: Started cri-containerd-f2161a08d007b7a6b6a0c6e92bc56456a31cc01346a99468d92bd4f106fe4e14.scope - libcontainer container f2161a08d007b7a6b6a0c6e92bc56456a31cc01346a99468d92bd4f106fe4e14.
Apr 30 03:25:06.129849 containerd[1457]: time="2025-04-30T03:25:06.129737596Z" level=info msg="StartContainer for \"ef6f07e57780c30484942dbbe5a16bb74dfcfe06fe27041cdf75f87e5c80a1a9\" returns successfully"
Apr 30 03:25:06.143132 containerd[1457]: time="2025-04-30T03:25:06.143019880Z" level=info msg="StartContainer for \"f2161a08d007b7a6b6a0c6e92bc56456a31cc01346a99468d92bd4f106fe4e14\" returns successfully"
Apr 30 03:25:06.518854 kubelet[2472]: E0430 03:25:06.518811 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:06.525309 kubelet[2472]: E0430 03:25:06.525245 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:06.550065 kubelet[2472]: I0430 03:25:06.548373 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qg7vn" podStartSLOduration=27.548347159 podStartE2EDuration="27.548347159s" podCreationTimestamp="2025-04-30 03:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:25:06.546228895 +0000 UTC m=+33.437877302" watchObservedRunningTime="2025-04-30 03:25:06.548347159 +0000 UTC m=+33.439995566"
Apr 30 03:25:06.584985 kubelet[2472]: I0430 03:25:06.584831 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lg8fn" podStartSLOduration=27.584803206 podStartE2EDuration="27.584803206s" podCreationTimestamp="2025-04-30 03:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:25:06.584418283 +0000 UTC m=+33.476066694" watchObservedRunningTime="2025-04-30 03:25:06.584803206 +0000 UTC m=+33.476451613"
Apr 30 03:25:06.685552 systemd[1]: run-containerd-runc-k8s.io-75e3012c50258d8209337d785a438387f9466d3b2a2e53e44b313a45da417daa-runc.tuOW4s.mount: Deactivated successfully.
Apr 30 03:25:07.527972 kubelet[2472]: E0430 03:25:07.527864 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:07.528817 kubelet[2472]: E0430 03:25:07.528503 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:08.530466 kubelet[2472]: E0430 03:25:08.530189 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:08.530466 kubelet[2472]: E0430 03:25:08.530362 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:15.102859 systemd[1]: Started sshd@7-209.38.154.103:22-139.178.89.65:49862.service - OpenSSH per-connection server daemon (139.178.89.65:49862).
Apr 30 03:25:15.198487 sshd[3864]: Accepted publickey for core from 139.178.89.65 port 49862 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:15.200926 sshd[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:15.209172 systemd-logind[1438]: New session 8 of user core.
Apr 30 03:25:15.215250 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 03:25:15.911942 sshd[3864]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:15.918518 systemd[1]: sshd@7-209.38.154.103:22-139.178.89.65:49862.service: Deactivated successfully.
Apr 30 03:25:15.922642 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 03:25:15.926361 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit.
Apr 30 03:25:15.930338 systemd-logind[1438]: Removed session 8.
Apr 30 03:25:20.929361 systemd[1]: Started sshd@8-209.38.154.103:22-139.178.89.65:58594.service - OpenSSH per-connection server daemon (139.178.89.65:58594).
Apr 30 03:25:20.993742 sshd[3878]: Accepted publickey for core from 139.178.89.65 port 58594 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:20.995559 sshd[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:21.003178 systemd-logind[1438]: New session 9 of user core.
Apr 30 03:25:21.008312 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 03:25:21.165556 sshd[3878]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:21.168922 systemd[1]: sshd@8-209.38.154.103:22-139.178.89.65:58594.service: Deactivated successfully.
Apr 30 03:25:21.171577 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 03:25:21.174238 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit.
Apr 30 03:25:21.175413 systemd-logind[1438]: Removed session 9.
Apr 30 03:25:26.189837 systemd[1]: Started sshd@9-209.38.154.103:22-139.178.89.65:58596.service - OpenSSH per-connection server daemon (139.178.89.65:58596).
Apr 30 03:25:26.237008 sshd[3893]: Accepted publickey for core from 139.178.89.65 port 58596 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:26.238998 sshd[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:26.244787 systemd-logind[1438]: New session 10 of user core.
Apr 30 03:25:26.253277 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 03:25:26.391041 sshd[3893]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:26.397949 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit.
Apr 30 03:25:26.398776 systemd[1]: sshd@9-209.38.154.103:22-139.178.89.65:58596.service: Deactivated successfully.
Apr 30 03:25:26.401553 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 03:25:26.403435 systemd-logind[1438]: Removed session 10.
Apr 30 03:25:31.412404 systemd[1]: Started sshd@10-209.38.154.103:22-139.178.89.65:46662.service - OpenSSH per-connection server daemon (139.178.89.65:46662).
Apr 30 03:25:31.472459 sshd[3907]: Accepted publickey for core from 139.178.89.65 port 46662 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:31.474834 sshd[3907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:31.481767 systemd-logind[1438]: New session 11 of user core.
Apr 30 03:25:31.487208 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 03:25:31.627767 sshd[3907]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:31.638666 systemd[1]: sshd@10-209.38.154.103:22-139.178.89.65:46662.service: Deactivated successfully.
Apr 30 03:25:31.641367 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 03:25:31.643956 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit.
Apr 30 03:25:31.649333 systemd[1]: Started sshd@11-209.38.154.103:22-139.178.89.65:46678.service - OpenSSH per-connection server daemon (139.178.89.65:46678).
Apr 30 03:25:31.651775 systemd-logind[1438]: Removed session 11.
Apr 30 03:25:31.706935 sshd[3920]: Accepted publickey for core from 139.178.89.65 port 46678 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:31.709539 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:31.717074 systemd-logind[1438]: New session 12 of user core.
Apr 30 03:25:31.722260 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 03:25:31.954108 sshd[3920]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:31.968839 systemd[1]: sshd@11-209.38.154.103:22-139.178.89.65:46678.service: Deactivated successfully.
Apr 30 03:25:31.975875 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 03:25:31.983025 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit.
Apr 30 03:25:31.992583 systemd[1]: Started sshd@12-209.38.154.103:22-139.178.89.65:46692.service - OpenSSH per-connection server daemon (139.178.89.65:46692).
Apr 30 03:25:31.998151 systemd-logind[1438]: Removed session 12.
Apr 30 03:25:32.081299 sshd[3931]: Accepted publickey for core from 139.178.89.65 port 46692 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:32.083651 sshd[3931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:32.090051 systemd-logind[1438]: New session 13 of user core.
Apr 30 03:25:32.093149 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 03:25:32.249967 sshd[3931]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:32.256833 systemd[1]: sshd@12-209.38.154.103:22-139.178.89.65:46692.service: Deactivated successfully.
Apr 30 03:25:32.259848 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 03:25:32.261456 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit.
Apr 30 03:25:32.262453 systemd-logind[1438]: Removed session 13.
Apr 30 03:25:37.268422 systemd[1]: Started sshd@13-209.38.154.103:22-139.178.89.65:49950.service - OpenSSH per-connection server daemon (139.178.89.65:49950).
Apr 30 03:25:37.333810 sshd[3945]: Accepted publickey for core from 139.178.89.65 port 49950 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:37.336138 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:37.348261 systemd-logind[1438]: New session 14 of user core.
Apr 30 03:25:37.356223 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 03:25:37.490477 sshd[3945]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:37.496081 systemd[1]: sshd@13-209.38.154.103:22-139.178.89.65:49950.service: Deactivated successfully.
Apr 30 03:25:37.498831 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 03:25:37.500093 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit.
Apr 30 03:25:37.501874 systemd-logind[1438]: Removed session 14.
Apr 30 03:25:42.514411 systemd[1]: Started sshd@14-209.38.154.103:22-139.178.89.65:49952.service - OpenSSH per-connection server daemon (139.178.89.65:49952).
Apr 30 03:25:42.579030 sshd[3960]: Accepted publickey for core from 139.178.89.65 port 49952 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:42.580866 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:42.587318 systemd-logind[1438]: New session 15 of user core.
Apr 30 03:25:42.598232 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 03:25:42.732870 sshd[3960]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:42.738056 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit.
Apr 30 03:25:42.738307 systemd[1]: sshd@14-209.38.154.103:22-139.178.89.65:49952.service: Deactivated successfully.
Apr 30 03:25:42.740374 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 03:25:42.743043 systemd-logind[1438]: Removed session 15.
Apr 30 03:25:46.290176 kubelet[2472]: E0430 03:25:46.290120 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:47.291312 kubelet[2472]: E0430 03:25:47.290547 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:25:47.748186 systemd[1]: Started sshd@15-209.38.154.103:22-139.178.89.65:51652.service - OpenSSH per-connection server daemon (139.178.89.65:51652).
Apr 30 03:25:47.808624 sshd[3973]: Accepted publickey for core from 139.178.89.65 port 51652 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:47.810586 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:47.816367 systemd-logind[1438]: New session 16 of user core.
Apr 30 03:25:47.821183 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 03:25:47.965921 sshd[3973]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:47.978367 systemd[1]: sshd@15-209.38.154.103:22-139.178.89.65:51652.service: Deactivated successfully.
Apr 30 03:25:47.980609 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 03:25:47.981857 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit.
Apr 30 03:25:47.990337 systemd[1]: Started sshd@16-209.38.154.103:22-139.178.89.65:51654.service - OpenSSH per-connection server daemon (139.178.89.65:51654).
Apr 30 03:25:47.991814 systemd-logind[1438]: Removed session 16.
Apr 30 03:25:48.043871 sshd[3985]: Accepted publickey for core from 139.178.89.65 port 51654 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:48.045563 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:48.052777 systemd-logind[1438]: New session 17 of user core.
Apr 30 03:25:48.059236 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 03:25:48.381039 sshd[3985]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:48.395377 systemd[1]: sshd@16-209.38.154.103:22-139.178.89.65:51654.service: Deactivated successfully.
Apr 30 03:25:48.398572 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 03:25:48.401042 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit.
Apr 30 03:25:48.407348 systemd[1]: Started sshd@17-209.38.154.103:22-139.178.89.65:51664.service - OpenSSH per-connection server daemon (139.178.89.65:51664).
Apr 30 03:25:48.409339 systemd-logind[1438]: Removed session 17.
Apr 30 03:25:48.465407 systemd[1]: Started sshd@18-209.38.154.103:22-180.108.64.6:33816.service - OpenSSH per-connection server daemon (180.108.64.6:33816).
Apr 30 03:25:48.475571 sshd[3995]: Accepted publickey for core from 139.178.89.65 port 51664 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:25:48.478604 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:48.485341 systemd-logind[1438]: New session 18 of user core.
Apr 30 03:25:48.489148 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 03:25:49.895561 sshd[3998]: Invalid user rodrigo from 180.108.64.6 port 33816 Apr 30 03:25:50.058938 sshd[3998]: Received disconnect from 180.108.64.6 port 33816:11: Bye Bye [preauth] Apr 30 03:25:50.058938 sshd[3998]: Disconnected from invalid user rodrigo 180.108.64.6 port 33816 [preauth] Apr 30 03:25:50.059587 systemd[1]: sshd@18-209.38.154.103:22-180.108.64.6:33816.service: Deactivated successfully. Apr 30 03:25:50.476340 sshd[3995]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:50.490049 systemd[1]: sshd@17-209.38.154.103:22-139.178.89.65:51664.service: Deactivated successfully. Apr 30 03:25:50.496048 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:25:50.502206 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:25:50.509394 systemd[1]: Started sshd@19-209.38.154.103:22-139.178.89.65:51668.service - OpenSSH per-connection server daemon (139.178.89.65:51668). Apr 30 03:25:50.512698 systemd-logind[1438]: Removed session 18. Apr 30 03:25:50.580050 sshd[4017]: Accepted publickey for core from 139.178.89.65 port 51668 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:25:50.582279 sshd[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:25:50.588164 systemd-logind[1438]: New session 19 of user core. Apr 30 03:25:50.593177 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:25:50.903502 sshd[4017]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:50.918792 systemd[1]: sshd@19-209.38.154.103:22-139.178.89.65:51668.service: Deactivated successfully. Apr 30 03:25:50.924919 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:25:50.928917 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:25:50.938452 systemd[1]: Started sshd@20-209.38.154.103:22-139.178.89.65:51682.service - OpenSSH per-connection server daemon (139.178.89.65:51682). 
Apr 30 03:25:50.939947 systemd-logind[1438]: Removed session 19. Apr 30 03:25:51.004203 sshd[4029]: Accepted publickey for core from 139.178.89.65 port 51682 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:25:51.006400 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:25:51.012067 systemd-logind[1438]: New session 20 of user core. Apr 30 03:25:51.017261 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:25:51.164310 sshd[4029]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:51.170670 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:25:51.171627 systemd[1]: sshd@20-209.38.154.103:22-139.178.89.65:51682.service: Deactivated successfully. Apr 30 03:25:51.175676 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:25:51.177459 systemd-logind[1438]: Removed session 20. Apr 30 03:25:52.290984 kubelet[2472]: E0430 03:25:52.290597 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:25:56.185713 systemd[1]: Started sshd@21-209.38.154.103:22-139.178.89.65:51696.service - OpenSSH per-connection server daemon (139.178.89.65:51696). Apr 30 03:25:56.242937 sshd[4045]: Accepted publickey for core from 139.178.89.65 port 51696 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:25:56.245505 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:25:56.253380 systemd-logind[1438]: New session 21 of user core. Apr 30 03:25:56.260274 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 03:25:56.430292 sshd[4045]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:56.436447 systemd[1]: sshd@21-209.38.154.103:22-139.178.89.65:51696.service: Deactivated successfully. 
Apr 30 03:25:56.439985 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:25:56.442107 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:25:56.443760 systemd-logind[1438]: Removed session 21. Apr 30 03:26:01.456568 systemd[1]: Started sshd@22-209.38.154.103:22-139.178.89.65:56544.service - OpenSSH per-connection server daemon (139.178.89.65:56544). Apr 30 03:26:01.534087 sshd[4058]: Accepted publickey for core from 139.178.89.65 port 56544 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:26:01.537037 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:26:01.546748 systemd-logind[1438]: New session 22 of user core. Apr 30 03:26:01.553847 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 03:26:01.748261 sshd[4058]: pam_unix(sshd:session): session closed for user core Apr 30 03:26:01.755489 systemd[1]: sshd@22-209.38.154.103:22-139.178.89.65:56544.service: Deactivated successfully. Apr 30 03:26:01.760439 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 03:26:01.762631 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit. Apr 30 03:26:01.765453 systemd-logind[1438]: Removed session 22. Apr 30 03:26:06.766555 systemd[1]: Started sshd@23-209.38.154.103:22-139.178.89.65:56196.service - OpenSSH per-connection server daemon (139.178.89.65:56196). Apr 30 03:26:06.826159 sshd[4072]: Accepted publickey for core from 139.178.89.65 port 56196 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:26:06.828118 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:26:06.834426 systemd-logind[1438]: New session 23 of user core. Apr 30 03:26:06.838165 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 30 03:26:06.978245 sshd[4072]: pam_unix(sshd:session): session closed for user core Apr 30 03:26:06.983687 systemd[1]: sshd@23-209.38.154.103:22-139.178.89.65:56196.service: Deactivated successfully. Apr 30 03:26:06.986849 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 03:26:06.988433 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit. Apr 30 03:26:06.989745 systemd-logind[1438]: Removed session 23. Apr 30 03:26:07.291686 kubelet[2472]: E0430 03:26:07.290656 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:26:10.289943 kubelet[2472]: E0430 03:26:10.289667 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:26:10.289943 kubelet[2472]: E0430 03:26:10.289695 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:26:11.998303 systemd[1]: Started sshd@24-209.38.154.103:22-139.178.89.65:56212.service - OpenSSH per-connection server daemon (139.178.89.65:56212). Apr 30 03:26:12.058392 sshd[4086]: Accepted publickey for core from 139.178.89.65 port 56212 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:26:12.060350 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:26:12.067874 systemd-logind[1438]: New session 24 of user core. Apr 30 03:26:12.075302 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 03:26:12.211243 sshd[4086]: pam_unix(sshd:session): session closed for user core Apr 30 03:26:12.220837 systemd[1]: sshd@24-209.38.154.103:22-139.178.89.65:56212.service: Deactivated successfully. 
Apr 30 03:26:12.224107 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 03:26:12.227370 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit. Apr 30 03:26:12.235007 systemd[1]: Started sshd@25-209.38.154.103:22-139.178.89.65:56214.service - OpenSSH per-connection server daemon (139.178.89.65:56214). Apr 30 03:26:12.237653 systemd-logind[1438]: Removed session 24. Apr 30 03:26:12.291137 sshd[4098]: Accepted publickey for core from 139.178.89.65 port 56214 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:26:12.291677 kubelet[2472]: E0430 03:26:12.290754 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 03:26:12.298556 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:26:12.299487 systemd[1]: Started sshd@26-209.38.154.103:22-112.196.28.139:54500.service - OpenSSH per-connection server daemon (112.196.28.139:54500). Apr 30 03:26:12.309705 systemd-logind[1438]: New session 25 of user core. Apr 30 03:26:12.312490 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 03:26:13.682365 sshd[4101]: Invalid user samsung from 112.196.28.139 port 54500 Apr 30 03:26:13.793156 systemd[1]: run-containerd-runc-k8s.io-5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091-runc.dSiqQd.mount: Deactivated successfully. 
Apr 30 03:26:13.818431 containerd[1457]: time="2025-04-30T03:26:13.818347053Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:26:13.823899 containerd[1457]: time="2025-04-30T03:26:13.823269129Z" level=info msg="StopContainer for \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\" with timeout 30 (s)" Apr 30 03:26:13.824602 containerd[1457]: time="2025-04-30T03:26:13.824390991Z" level=info msg="Stop container \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\" with signal terminated" Apr 30 03:26:13.825288 containerd[1457]: time="2025-04-30T03:26:13.825175457Z" level=info msg="StopContainer for \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\" with timeout 2 (s)" Apr 30 03:26:13.825841 containerd[1457]: time="2025-04-30T03:26:13.825773332Z" level=info msg="Stop container \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\" with signal terminated" Apr 30 03:26:13.841034 systemd-networkd[1365]: lxc_health: Link DOWN Apr 30 03:26:13.841046 systemd-networkd[1365]: lxc_health: Lost carrier Apr 30 03:26:13.843316 systemd[1]: cri-containerd-6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3.scope: Deactivated successfully. Apr 30 03:26:13.869697 systemd[1]: cri-containerd-5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091.scope: Deactivated successfully. Apr 30 03:26:13.869924 systemd[1]: cri-containerd-5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091.scope: Consumed 9.946s CPU time. Apr 30 03:26:13.910090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3-rootfs.mount: Deactivated successfully. 
Apr 30 03:26:13.919151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091-rootfs.mount: Deactivated successfully. Apr 30 03:26:13.922128 containerd[1457]: time="2025-04-30T03:26:13.922042096Z" level=info msg="shim disconnected" id=5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091 namespace=k8s.io Apr 30 03:26:13.922128 containerd[1457]: time="2025-04-30T03:26:13.922123206Z" level=warning msg="cleaning up after shim disconnected" id=5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091 namespace=k8s.io Apr 30 03:26:13.922551 containerd[1457]: time="2025-04-30T03:26:13.922135218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:26:13.922968 containerd[1457]: time="2025-04-30T03:26:13.922646708Z" level=info msg="shim disconnected" id=6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3 namespace=k8s.io Apr 30 03:26:13.922968 containerd[1457]: time="2025-04-30T03:26:13.922704902Z" level=warning msg="cleaning up after shim disconnected" id=6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3 namespace=k8s.io Apr 30 03:26:13.922968 containerd[1457]: time="2025-04-30T03:26:13.922723154Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:26:13.943711 sshd[4101]: Received disconnect from 112.196.28.139 port 54500:11: Bye Bye [preauth] Apr 30 03:26:13.943711 sshd[4101]: Disconnected from invalid user samsung 112.196.28.139 port 54500 [preauth] Apr 30 03:26:13.948355 systemd[1]: sshd@26-209.38.154.103:22-112.196.28.139:54500.service: Deactivated successfully. 
Apr 30 03:26:13.957277 containerd[1457]: time="2025-04-30T03:26:13.956921295Z" level=info msg="StopContainer for \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\" returns successfully" Apr 30 03:26:13.957277 containerd[1457]: time="2025-04-30T03:26:13.957217452Z" level=info msg="StopContainer for \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\" returns successfully" Apr 30 03:26:13.958093 containerd[1457]: time="2025-04-30T03:26:13.958060583Z" level=info msg="StopPodSandbox for \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\"" Apr 30 03:26:13.958293 containerd[1457]: time="2025-04-30T03:26:13.958268699Z" level=info msg="Container to stop \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:26:13.958778 containerd[1457]: time="2025-04-30T03:26:13.958413507Z" level=info msg="Container to stop \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:26:13.958778 containerd[1457]: time="2025-04-30T03:26:13.958441089Z" level=info msg="Container to stop \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:26:13.958778 containerd[1457]: time="2025-04-30T03:26:13.958457629Z" level=info msg="Container to stop \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:26:13.958778 containerd[1457]: time="2025-04-30T03:26:13.958472352Z" level=info msg="Container to stop \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:26:13.958778 containerd[1457]: time="2025-04-30T03:26:13.958268907Z" level=info msg="StopPodSandbox for 
\"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\"" Apr 30 03:26:13.958778 containerd[1457]: time="2025-04-30T03:26:13.958536890Z" level=info msg="Container to stop \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:26:13.962189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8-shm.mount: Deactivated successfully. Apr 30 03:26:13.962414 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550-shm.mount: Deactivated successfully. Apr 30 03:26:13.974758 systemd[1]: cri-containerd-f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8.scope: Deactivated successfully. Apr 30 03:26:13.979622 systemd[1]: cri-containerd-91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550.scope: Deactivated successfully. Apr 30 03:26:14.027281 containerd[1457]: time="2025-04-30T03:26:14.027132571Z" level=info msg="shim disconnected" id=f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8 namespace=k8s.io Apr 30 03:26:14.027281 containerd[1457]: time="2025-04-30T03:26:14.027208445Z" level=warning msg="cleaning up after shim disconnected" id=f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8 namespace=k8s.io Apr 30 03:26:14.027281 containerd[1457]: time="2025-04-30T03:26:14.027220835Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:26:14.040567 containerd[1457]: time="2025-04-30T03:26:14.040271396Z" level=info msg="shim disconnected" id=91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550 namespace=k8s.io Apr 30 03:26:14.040567 containerd[1457]: time="2025-04-30T03:26:14.040341969Z" level=warning msg="cleaning up after shim disconnected" id=91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550 namespace=k8s.io Apr 30 03:26:14.040567 
containerd[1457]: time="2025-04-30T03:26:14.040354785Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:26:14.068204 containerd[1457]: time="2025-04-30T03:26:14.067077290Z" level=info msg="TearDown network for sandbox \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\" successfully" Apr 30 03:26:14.068204 containerd[1457]: time="2025-04-30T03:26:14.067136946Z" level=info msg="StopPodSandbox for \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\" returns successfully" Apr 30 03:26:14.069319 containerd[1457]: time="2025-04-30T03:26:14.069053150Z" level=info msg="TearDown network for sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" successfully" Apr 30 03:26:14.069319 containerd[1457]: time="2025-04-30T03:26:14.069088734Z" level=info msg="StopPodSandbox for \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" returns successfully" Apr 30 03:26:14.225572 kubelet[2472]: I0430 03:26:14.224839 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-lib-modules\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.225572 kubelet[2472]: I0430 03:26:14.224956 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-bpf-maps\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.225572 kubelet[2472]: I0430 03:26:14.224985 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-host-proc-sys-kernel\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 
03:26:14.225572 kubelet[2472]: I0430 03:26:14.225083 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cni-path\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.225572 kubelet[2472]: I0430 03:26:14.225109 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-hostproc\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.225572 kubelet[2472]: I0430 03:26:14.225139 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c4np\" (UniqueName: \"kubernetes.io/projected/b0b80ed8-4137-48c6-9b28-125ecf526192-kube-api-access-9c4np\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.226493 kubelet[2472]: I0430 03:26:14.225155 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-xtables-lock\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.226493 kubelet[2472]: I0430 03:26:14.225168 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-run\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.226493 kubelet[2472]: I0430 03:26:14.225183 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-cgroup\") pod 
\"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.226493 kubelet[2472]: I0430 03:26:14.225210 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6365ecee-d3a4-469e-a7d8-03638203f650-cilium-config-path\") pod \"6365ecee-d3a4-469e-a7d8-03638203f650\" (UID: \"6365ecee-d3a4-469e-a7d8-03638203f650\") " Apr 30 03:26:14.226493 kubelet[2472]: I0430 03:26:14.225228 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-config-path\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.226493 kubelet[2472]: I0430 03:26:14.225251 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0b80ed8-4137-48c6-9b28-125ecf526192-clustermesh-secrets\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.226687 kubelet[2472]: I0430 03:26:14.225269 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkzrc\" (UniqueName: \"kubernetes.io/projected/6365ecee-d3a4-469e-a7d8-03638203f650-kube-api-access-bkzrc\") pod \"6365ecee-d3a4-469e-a7d8-03638203f650\" (UID: \"6365ecee-d3a4-469e-a7d8-03638203f650\") " Apr 30 03:26:14.226687 kubelet[2472]: I0430 03:26:14.225289 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-etc-cni-netd\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.226687 kubelet[2472]: I0430 03:26:14.225318 2472 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0b80ed8-4137-48c6-9b28-125ecf526192-hubble-tls\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.226687 kubelet[2472]: I0430 03:26:14.225337 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-host-proc-sys-net\") pod \"b0b80ed8-4137-48c6-9b28-125ecf526192\" (UID: \"b0b80ed8-4137-48c6-9b28-125ecf526192\") " Apr 30 03:26:14.234973 kubelet[2472]: I0430 03:26:14.233733 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:26:14.234973 kubelet[2472]: I0430 03:26:14.233789 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:26:14.234973 kubelet[2472]: I0430 03:26:14.233837 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:26:14.234973 kubelet[2472]: I0430 03:26:14.233848 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:26:14.234973 kubelet[2472]: I0430 03:26:14.233863 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cni-path" (OuterVolumeSpecName: "cni-path") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:26:14.237086 kubelet[2472]: I0430 03:26:14.236947 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6365ecee-d3a4-469e-a7d8-03638203f650-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6365ecee-d3a4-469e-a7d8-03638203f650" (UID: "6365ecee-d3a4-469e-a7d8-03638203f650"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:26:14.237222 kubelet[2472]: I0430 03:26:14.237200 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-hostproc" (OuterVolumeSpecName: "hostproc") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:26:14.237922 kubelet[2472]: I0430 03:26:14.237555 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:26:14.241507 kubelet[2472]: I0430 03:26:14.241441 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b80ed8-4137-48c6-9b28-125ecf526192-kube-api-access-9c4np" (OuterVolumeSpecName: "kube-api-access-9c4np") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "kube-api-access-9c4np". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:26:14.241653 kubelet[2472]: I0430 03:26:14.241547 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:26:14.241653 kubelet[2472]: I0430 03:26:14.241572 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:26:14.241653 kubelet[2472]: I0430 03:26:14.241593 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:26:14.244982 kubelet[2472]: I0430 03:26:14.244767 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6365ecee-d3a4-469e-a7d8-03638203f650-kube-api-access-bkzrc" (OuterVolumeSpecName: "kube-api-access-bkzrc") pod "6365ecee-d3a4-469e-a7d8-03638203f650" (UID: "6365ecee-d3a4-469e-a7d8-03638203f650"). InnerVolumeSpecName "kube-api-access-bkzrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:26:14.244982 kubelet[2472]: I0430 03:26:14.244802 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:26:14.244982 kubelet[2472]: I0430 03:26:14.244761 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b80ed8-4137-48c6-9b28-125ecf526192-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 03:26:14.244982 kubelet[2472]: I0430 03:26:14.244867 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b80ed8-4137-48c6-9b28-125ecf526192-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b0b80ed8-4137-48c6-9b28-125ecf526192" (UID: "b0b80ed8-4137-48c6-9b28-125ecf526192"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 03:26:14.328447 kubelet[2472]: I0430 03:26:14.328390 2472 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-etc-cni-netd\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328447 kubelet[2472]: I0430 03:26:14.328448 2472 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0b80ed8-4137-48c6-9b28-125ecf526192-clustermesh-secrets\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328447 kubelet[2472]: I0430 03:26:14.328464 2472 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bkzrc\" (UniqueName: \"kubernetes.io/projected/6365ecee-d3a4-469e-a7d8-03638203f650-kube-api-access-bkzrc\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328447 kubelet[2472]: I0430 03:26:14.328477 2472 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0b80ed8-4137-48c6-9b28-125ecf526192-hubble-tls\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328827 kubelet[2472]: I0430 03:26:14.328489 2472 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-host-proc-sys-net\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328827 kubelet[2472]: I0430 03:26:14.328504 2472 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-lib-modules\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328827 kubelet[2472]: I0430 03:26:14.328515 2472 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-bpf-maps\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328827 kubelet[2472]: I0430 03:26:14.328528 2472 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-hostproc\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328827 kubelet[2472]: I0430 03:26:14.328540 2472 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-host-proc-sys-kernel\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328827 kubelet[2472]: I0430 03:26:14.328549 2472 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cni-path\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328827 kubelet[2472]: I0430 03:26:14.328560 2472 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6365ecee-d3a4-469e-a7d8-03638203f650-cilium-config-path\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.328827 kubelet[2472]: I0430 03:26:14.328573 2472 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9c4np\" (UniqueName: \"kubernetes.io/projected/b0b80ed8-4137-48c6-9b28-125ecf526192-kube-api-access-9c4np\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.329348 kubelet[2472]: I0430 03:26:14.328582 2472 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-xtables-lock\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.329348 kubelet[2472]: I0430 03:26:14.328590 2472 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-run\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.329348 kubelet[2472]: I0430 03:26:14.328597 2472 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-cgroup\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.329348 kubelet[2472]: I0430 03:26:14.328605 2472 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0b80ed8-4137-48c6-9b28-125ecf526192-cilium-config-path\") on node \"ci-4081.3.3-c-cb9001cac8\" DevicePath \"\""
Apr 30 03:26:14.709706 systemd[1]: Removed slice kubepods-burstable-podb0b80ed8_4137_48c6_9b28_125ecf526192.slice - libcontainer container kubepods-burstable-podb0b80ed8_4137_48c6_9b28_125ecf526192.slice.
Apr 30 03:26:14.710756 systemd[1]: kubepods-burstable-podb0b80ed8_4137_48c6_9b28_125ecf526192.slice: Consumed 10.053s CPU time.
Apr 30 03:26:14.718630 kubelet[2472]: I0430 03:26:14.718572 2472 scope.go:117] "RemoveContainer" containerID="5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091"
Apr 30 03:26:14.722198 containerd[1457]: time="2025-04-30T03:26:14.720994836Z" level=info msg="RemoveContainer for \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\""
Apr 30 03:26:14.727283 systemd[1]: Removed slice kubepods-besteffort-pod6365ecee_d3a4_469e_a7d8_03638203f650.slice - libcontainer container kubepods-besteffort-pod6365ecee_d3a4_469e_a7d8_03638203f650.slice.
Apr 30 03:26:14.730275 containerd[1457]: time="2025-04-30T03:26:14.730233186Z" level=info msg="RemoveContainer for \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\" returns successfully"
Apr 30 03:26:14.730867 kubelet[2472]: I0430 03:26:14.730795 2472 scope.go:117] "RemoveContainer" containerID="61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3"
Apr 30 03:26:14.737766 containerd[1457]: time="2025-04-30T03:26:14.737136668Z" level=info msg="RemoveContainer for \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\""
Apr 30 03:26:14.742633 containerd[1457]: time="2025-04-30T03:26:14.742554844Z" level=info msg="RemoveContainer for \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\" returns successfully"
Apr 30 03:26:14.746234 kubelet[2472]: I0430 03:26:14.746134 2472 scope.go:117] "RemoveContainer" containerID="7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612"
Apr 30 03:26:14.755783 containerd[1457]: time="2025-04-30T03:26:14.755410222Z" level=info msg="RemoveContainer for \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\""
Apr 30 03:26:14.764536 containerd[1457]: time="2025-04-30T03:26:14.764489835Z" level=info msg="RemoveContainer for \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\" returns successfully"
Apr 30 03:26:14.765124 kubelet[2472]: I0430 03:26:14.765069 2472 scope.go:117] "RemoveContainer" containerID="ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267"
Apr 30 03:26:14.768457 containerd[1457]: time="2025-04-30T03:26:14.768014826Z" level=info msg="RemoveContainer for \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\""
Apr 30 03:26:14.770710 containerd[1457]: time="2025-04-30T03:26:14.770662087Z" level=info msg="RemoveContainer for \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\" returns successfully"
Apr 30 03:26:14.772080 kubelet[2472]: I0430 03:26:14.771863 2472 scope.go:117] "RemoveContainer" containerID="67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b"
Apr 30 03:26:14.774934 containerd[1457]: time="2025-04-30T03:26:14.774502054Z" level=info msg="RemoveContainer for \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\""
Apr 30 03:26:14.777757 containerd[1457]: time="2025-04-30T03:26:14.777700750Z" level=info msg="RemoveContainer for \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\" returns successfully"
Apr 30 03:26:14.778292 kubelet[2472]: I0430 03:26:14.778262 2472 scope.go:117] "RemoveContainer" containerID="5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091"
Apr 30 03:26:14.786266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8-rootfs.mount: Deactivated successfully.
Apr 30 03:26:14.786476 systemd[1]: var-lib-kubelet-pods-6365ecee\x2dd3a4\x2d469e\x2da7d8\x2d03638203f650-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbkzrc.mount: Deactivated successfully.
Apr 30 03:26:14.786574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550-rootfs.mount: Deactivated successfully.
Apr 30 03:26:14.786658 systemd[1]: var-lib-kubelet-pods-b0b80ed8\x2d4137\x2d48c6\x2d9b28\x2d125ecf526192-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9c4np.mount: Deactivated successfully.
Apr 30 03:26:14.786746 systemd[1]: var-lib-kubelet-pods-b0b80ed8\x2d4137\x2d48c6\x2d9b28\x2d125ecf526192-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 30 03:26:14.786841 systemd[1]: var-lib-kubelet-pods-b0b80ed8\x2d4137\x2d48c6\x2d9b28\x2d125ecf526192-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 30 03:26:14.799528 containerd[1457]: time="2025-04-30T03:26:14.784320023Z" level=error msg="ContainerStatus for \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\": not found"
Apr 30 03:26:14.801553 kubelet[2472]: E0430 03:26:14.801343 2472 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\": not found" containerID="5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091"
Apr 30 03:26:14.801717 kubelet[2472]: I0430 03:26:14.801469 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091"} err="failed to get container status \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e22dbff65a93ce3ada3a91f7ebc643ca208d665ac60b8c841e057dd7bac3091\": not found"
Apr 30 03:26:14.801717 kubelet[2472]: I0430 03:26:14.801608 2472 scope.go:117] "RemoveContainer" containerID="61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3"
Apr 30 03:26:14.802319 containerd[1457]: time="2025-04-30T03:26:14.802087255Z" level=error msg="ContainerStatus for \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\": not found"
Apr 30 03:26:14.808844 kubelet[2472]: E0430 03:26:14.808783 2472 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\": not found" containerID="61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3"
Apr 30 03:26:14.809231 kubelet[2472]: I0430 03:26:14.808848 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3"} err="failed to get container status \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\": rpc error: code = NotFound desc = an error occurred when try to find container \"61063b01e8d3d956105662b04a9c92d92e97949df30698a46c01980f6127bfd3\": not found"
Apr 30 03:26:14.809231 kubelet[2472]: I0430 03:26:14.808914 2472 scope.go:117] "RemoveContainer" containerID="7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612"
Apr 30 03:26:14.809745 containerd[1457]: time="2025-04-30T03:26:14.809609420Z" level=error msg="ContainerStatus for \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\": not found"
Apr 30 03:26:14.809926 kubelet[2472]: E0430 03:26:14.809861 2472 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\": not found" containerID="7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612"
Apr 30 03:26:14.810007 kubelet[2472]: I0430 03:26:14.809934 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612"} err="failed to get container status \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\": rpc error: code = NotFound desc = an error occurred when try to find container \"7476f89c6c5a1109982b4cbfc652cdf32f78be6b20aa62dc738e5e42e512d612\": not found"
Apr 30 03:26:14.810007 kubelet[2472]: I0430 03:26:14.809968 2472 scope.go:117] "RemoveContainer" containerID="ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267"
Apr 30 03:26:14.810438 containerd[1457]: time="2025-04-30T03:26:14.810345560Z" level=error msg="ContainerStatus for \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\": not found"
Apr 30 03:26:14.810842 kubelet[2472]: E0430 03:26:14.810792 2472 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\": not found" containerID="ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267"
Apr 30 03:26:14.810957 kubelet[2472]: I0430 03:26:14.810850 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267"} err="failed to get container status \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca5b7ba7a136edc94d4f0a04c2a9ef7c5686d1328c6779576b44365abc926267\": not found"
Apr 30 03:26:14.810957 kubelet[2472]: I0430 03:26:14.810877 2472 scope.go:117] "RemoveContainer" containerID="67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b"
Apr 30 03:26:14.811563 containerd[1457]: time="2025-04-30T03:26:14.811222312Z" level=error msg="ContainerStatus for \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\": not found"
Apr 30 03:26:14.811654 kubelet[2472]: E0430 03:26:14.811401 2472 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\": not found" containerID="67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b"
Apr 30 03:26:14.811654 kubelet[2472]: I0430 03:26:14.811438 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b"} err="failed to get container status \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\": rpc error: code = NotFound desc = an error occurred when try to find container \"67ae3612ad96e759fc4696dbb18c85c016d5a2fceee6f0c1010c2077c5a2e61b\": not found"
Apr 30 03:26:14.811654 kubelet[2472]: I0430 03:26:14.811463 2472 scope.go:117] "RemoveContainer" containerID="6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3"
Apr 30 03:26:14.813969 containerd[1457]: time="2025-04-30T03:26:14.813692878Z" level=info msg="RemoveContainer for \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\""
Apr 30 03:26:14.820327 containerd[1457]: time="2025-04-30T03:26:14.820274265Z" level=info msg="RemoveContainer for \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\" returns successfully"
Apr 30 03:26:14.820862 kubelet[2472]: I0430 03:26:14.820685 2472 scope.go:117] "RemoveContainer" containerID="6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3"
Apr 30 03:26:14.821271 containerd[1457]: time="2025-04-30T03:26:14.821203023Z" level=error msg="ContainerStatus for \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\": not found"
Apr 30 03:26:14.821544 kubelet[2472]: E0430 03:26:14.821511 2472 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\": not found" containerID="6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3"
Apr 30 03:26:14.821616 kubelet[2472]: I0430 03:26:14.821579 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3"} err="failed to get container status \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ada1e733962a754867d5bd50e151e059afc1656d56649b6d583541f6e24d0f3\": not found"
Apr 30 03:26:15.294203 kubelet[2472]: I0430 03:26:15.294064 2472 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6365ecee-d3a4-469e-a7d8-03638203f650" path="/var/lib/kubelet/pods/6365ecee-d3a4-469e-a7d8-03638203f650/volumes"
Apr 30 03:26:15.295812 kubelet[2472]: I0430 03:26:15.295252 2472 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b80ed8-4137-48c6-9b28-125ecf526192" path="/var/lib/kubelet/pods/b0b80ed8-4137-48c6-9b28-125ecf526192/volumes"
Apr 30 03:26:15.675957 sshd[4098]: pam_unix(sshd:session): session closed for user core
Apr 30 03:26:15.685319 systemd[1]: sshd@25-209.38.154.103:22-139.178.89.65:56214.service: Deactivated successfully.
Apr 30 03:26:15.687517 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 03:26:15.689285 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit.
Apr 30 03:26:15.693575 systemd[1]: Started sshd@27-209.38.154.103:22-139.178.89.65:56226.service - OpenSSH per-connection server daemon (139.178.89.65:56226).
Apr 30 03:26:15.697417 systemd-logind[1438]: Removed session 25.
Apr 30 03:26:15.759208 sshd[4263]: Accepted publickey for core from 139.178.89.65 port 56226 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:26:15.761365 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:26:15.767226 systemd-logind[1438]: New session 26 of user core.
Apr 30 03:26:15.775255 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 03:26:16.351210 sshd[4263]: pam_unix(sshd:session): session closed for user core
Apr 30 03:26:16.367009 systemd[1]: sshd@27-209.38.154.103:22-139.178.89.65:56226.service: Deactivated successfully.
Apr 30 03:26:16.371621 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 03:26:16.377398 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit.
Apr 30 03:26:16.391356 systemd[1]: Started sshd@28-209.38.154.103:22-139.178.89.65:56242.service - OpenSSH per-connection server daemon (139.178.89.65:56242).
Apr 30 03:26:16.393563 systemd-logind[1438]: Removed session 26.
Apr 30 03:26:16.400008 kubelet[2472]: E0430 03:26:16.398947 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0b80ed8-4137-48c6-9b28-125ecf526192" containerName="apply-sysctl-overwrites"
Apr 30 03:26:16.401915 kubelet[2472]: E0430 03:26:16.400984 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0b80ed8-4137-48c6-9b28-125ecf526192" containerName="mount-bpf-fs"
Apr 30 03:26:16.401915 kubelet[2472]: E0430 03:26:16.401029 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6365ecee-d3a4-469e-a7d8-03638203f650" containerName="cilium-operator"
Apr 30 03:26:16.401915 kubelet[2472]: E0430 03:26:16.401039 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0b80ed8-4137-48c6-9b28-125ecf526192" containerName="mount-cgroup"
Apr 30 03:26:16.401915 kubelet[2472]: E0430 03:26:16.401045 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0b80ed8-4137-48c6-9b28-125ecf526192" containerName="clean-cilium-state"
Apr 30 03:26:16.401915 kubelet[2472]: E0430 03:26:16.401052 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0b80ed8-4137-48c6-9b28-125ecf526192" containerName="cilium-agent"
Apr 30 03:26:16.401915 kubelet[2472]: I0430 03:26:16.401109 2472 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b80ed8-4137-48c6-9b28-125ecf526192" containerName="cilium-agent"
Apr 30 03:26:16.401915 kubelet[2472]: I0430 03:26:16.401117 2472 memory_manager.go:354] "RemoveStaleState removing state" podUID="6365ecee-d3a4-469e-a7d8-03638203f650" containerName="cilium-operator"
Apr 30 03:26:16.430838 systemd[1]: Created slice kubepods-burstable-podb2b26a8c_6458_4df0_9cec_de3288bcf921.slice - libcontainer container kubepods-burstable-podb2b26a8c_6458_4df0_9cec_de3288bcf921.slice.
Apr 30 03:26:16.474016 sshd[4274]: Accepted publickey for core from 139.178.89.65 port 56242 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:26:16.479975 sshd[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:26:16.490597 systemd-logind[1438]: New session 27 of user core.
Apr 30 03:26:16.495657 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 03:26:16.542402 kubelet[2472]: I0430 03:26:16.542325 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2b26a8c-6458-4df0-9cec-de3288bcf921-host-proc-sys-net\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.542620 kubelet[2472]: I0430 03:26:16.542420 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfb4x\" (UniqueName: \"kubernetes.io/projected/b2b26a8c-6458-4df0-9cec-de3288bcf921-kube-api-access-xfb4x\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.542620 kubelet[2472]: I0430 03:26:16.542500 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2b26a8c-6458-4df0-9cec-de3288bcf921-cilium-config-path\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.542620 kubelet[2472]: I0430 03:26:16.542566 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2b26a8c-6458-4df0-9cec-de3288bcf921-cilium-run\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.542620 kubelet[2472]: I0430 03:26:16.542595 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2b26a8c-6458-4df0-9cec-de3288bcf921-cilium-cgroup\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.543183 kubelet[2472]: I0430 03:26:16.542651 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2b26a8c-6458-4df0-9cec-de3288bcf921-clustermesh-secrets\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.543183 kubelet[2472]: I0430 03:26:16.542679 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2b26a8c-6458-4df0-9cec-de3288bcf921-etc-cni-netd\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.543183 kubelet[2472]: I0430 03:26:16.542730 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b2b26a8c-6458-4df0-9cec-de3288bcf921-cilium-ipsec-secrets\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.543183 kubelet[2472]: I0430 03:26:16.542760 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2b26a8c-6458-4df0-9cec-de3288bcf921-host-proc-sys-kernel\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.543183 kubelet[2472]: I0430 03:26:16.542821 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2b26a8c-6458-4df0-9cec-de3288bcf921-lib-modules\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.543183 kubelet[2472]: I0430 03:26:16.542851 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2b26a8c-6458-4df0-9cec-de3288bcf921-hostproc\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.543390 kubelet[2472]: I0430 03:26:16.542918 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2b26a8c-6458-4df0-9cec-de3288bcf921-bpf-maps\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.543390 kubelet[2472]: I0430 03:26:16.542944 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2b26a8c-6458-4df0-9cec-de3288bcf921-cni-path\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.543390 kubelet[2472]: I0430 03:26:16.543142 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2b26a8c-6458-4df0-9cec-de3288bcf921-xtables-lock\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.543390 kubelet[2472]: I0430 03:26:16.543172 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2b26a8c-6458-4df0-9cec-de3288bcf921-hubble-tls\") pod \"cilium-zqklw\" (UID: \"b2b26a8c-6458-4df0-9cec-de3288bcf921\") " pod="kube-system/cilium-zqklw"
Apr 30 03:26:16.566510 sshd[4274]: pam_unix(sshd:session): session closed for user core
Apr 30 03:26:16.579512 systemd[1]: sshd@28-209.38.154.103:22-139.178.89.65:56242.service: Deactivated successfully.
Apr 30 03:26:16.583652 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 03:26:16.587840 systemd-logind[1438]: Session 27 logged out. Waiting for processes to exit.
Apr 30 03:26:16.595457 systemd[1]: Started sshd@29-209.38.154.103:22-139.178.89.65:58496.service - OpenSSH per-connection server daemon (139.178.89.65:58496).
Apr 30 03:26:16.597703 systemd-logind[1438]: Removed session 27.
Apr 30 03:26:16.655068 sshd[4282]: Accepted publickey for core from 139.178.89.65 port 58496 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:26:16.667191 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:26:16.704759 systemd-logind[1438]: New session 28 of user core.
Apr 30 03:26:16.715359 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 03:26:16.750214 kubelet[2472]: E0430 03:26:16.750160 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:16.751442 containerd[1457]: time="2025-04-30T03:26:16.750914983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zqklw,Uid:b2b26a8c-6458-4df0-9cec-de3288bcf921,Namespace:kube-system,Attempt:0,}"
Apr 30 03:26:16.788917 containerd[1457]: time="2025-04-30T03:26:16.787360974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:26:16.788917 containerd[1457]: time="2025-04-30T03:26:16.787432761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:26:16.788917 containerd[1457]: time="2025-04-30T03:26:16.787444186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:16.788917 containerd[1457]: time="2025-04-30T03:26:16.787549172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:26:16.818571 systemd[1]: Started cri-containerd-cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f.scope - libcontainer container cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f.
Apr 30 03:26:16.867451 containerd[1457]: time="2025-04-30T03:26:16.867030336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zqklw,Uid:b2b26a8c-6458-4df0-9cec-de3288bcf921,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\""
Apr 30 03:26:16.869663 kubelet[2472]: E0430 03:26:16.869624 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:16.879180 containerd[1457]: time="2025-04-30T03:26:16.879024844Z" level=info msg="CreateContainer within sandbox \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 03:26:16.894094 containerd[1457]: time="2025-04-30T03:26:16.894032534Z" level=info msg="CreateContainer within sandbox \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4bbf5e9a48b224d482193cab3b19516dcd38ec9ec72bf0bd856a94f503226b75\""
Apr 30 03:26:16.895230 containerd[1457]: time="2025-04-30T03:26:16.895093974Z" level=info msg="StartContainer for \"4bbf5e9a48b224d482193cab3b19516dcd38ec9ec72bf0bd856a94f503226b75\""
Apr 30 03:26:16.948268 systemd[1]: Started cri-containerd-4bbf5e9a48b224d482193cab3b19516dcd38ec9ec72bf0bd856a94f503226b75.scope - libcontainer container 4bbf5e9a48b224d482193cab3b19516dcd38ec9ec72bf0bd856a94f503226b75.
Apr 30 03:26:16.982833 containerd[1457]: time="2025-04-30T03:26:16.982699463Z" level=info msg="StartContainer for \"4bbf5e9a48b224d482193cab3b19516dcd38ec9ec72bf0bd856a94f503226b75\" returns successfully"
Apr 30 03:26:17.002105 systemd[1]: cri-containerd-4bbf5e9a48b224d482193cab3b19516dcd38ec9ec72bf0bd856a94f503226b75.scope: Deactivated successfully.
Apr 30 03:26:17.046476 containerd[1457]: time="2025-04-30T03:26:17.046384747Z" level=info msg="shim disconnected" id=4bbf5e9a48b224d482193cab3b19516dcd38ec9ec72bf0bd856a94f503226b75 namespace=k8s.io
Apr 30 03:26:17.046476 containerd[1457]: time="2025-04-30T03:26:17.046452864Z" level=warning msg="cleaning up after shim disconnected" id=4bbf5e9a48b224d482193cab3b19516dcd38ec9ec72bf0bd856a94f503226b75 namespace=k8s.io
Apr 30 03:26:17.046476 containerd[1457]: time="2025-04-30T03:26:17.046463344Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:26:17.734509 kubelet[2472]: E0430 03:26:17.734467 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:17.738135 containerd[1457]: time="2025-04-30T03:26:17.738083650Z" level=info msg="CreateContainer within sandbox \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 03:26:17.766342 containerd[1457]: time="2025-04-30T03:26:17.766271462Z" level=info msg="CreateContainer within sandbox \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"69e0679f158cfa23a4dc0685c479070d3d6e6fdfd79ae56c7f6b478a882ba800\""
Apr 30 03:26:17.768308 containerd[1457]: time="2025-04-30T03:26:17.767854311Z" level=info msg="StartContainer for \"69e0679f158cfa23a4dc0685c479070d3d6e6fdfd79ae56c7f6b478a882ba800\""
Apr 30 03:26:17.810212 systemd[1]: Started cri-containerd-69e0679f158cfa23a4dc0685c479070d3d6e6fdfd79ae56c7f6b478a882ba800.scope - libcontainer container 69e0679f158cfa23a4dc0685c479070d3d6e6fdfd79ae56c7f6b478a882ba800.
Apr 30 03:26:17.846017 containerd[1457]: time="2025-04-30T03:26:17.844182378Z" level=info msg="StartContainer for \"69e0679f158cfa23a4dc0685c479070d3d6e6fdfd79ae56c7f6b478a882ba800\" returns successfully"
Apr 30 03:26:17.854766 systemd[1]: cri-containerd-69e0679f158cfa23a4dc0685c479070d3d6e6fdfd79ae56c7f6b478a882ba800.scope: Deactivated successfully.
Apr 30 03:26:17.890678 containerd[1457]: time="2025-04-30T03:26:17.890599052Z" level=info msg="shim disconnected" id=69e0679f158cfa23a4dc0685c479070d3d6e6fdfd79ae56c7f6b478a882ba800 namespace=k8s.io
Apr 30 03:26:17.890968 containerd[1457]: time="2025-04-30T03:26:17.890947699Z" level=warning msg="cleaning up after shim disconnected" id=69e0679f158cfa23a4dc0685c479070d3d6e6fdfd79ae56c7f6b478a882ba800 namespace=k8s.io
Apr 30 03:26:17.891040 containerd[1457]: time="2025-04-30T03:26:17.891028366Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:26:18.461270 kubelet[2472]: E0430 03:26:18.461188 2472 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 03:26:18.652649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69e0679f158cfa23a4dc0685c479070d3d6e6fdfd79ae56c7f6b478a882ba800-rootfs.mount: Deactivated successfully.
Apr 30 03:26:18.740438 kubelet[2472]: E0430 03:26:18.740306 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:18.749980 containerd[1457]: time="2025-04-30T03:26:18.748778552Z" level=info msg="CreateContainer within sandbox \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 03:26:18.775453 containerd[1457]: time="2025-04-30T03:26:18.775039299Z" level=info msg="CreateContainer within sandbox \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"249e886c10719b04107cbab166ab813fd531de8ba9b9cd2a5b8417d5aa29f8c1\""
Apr 30 03:26:18.779490 containerd[1457]: time="2025-04-30T03:26:18.779020469Z" level=info msg="StartContainer for \"249e886c10719b04107cbab166ab813fd531de8ba9b9cd2a5b8417d5aa29f8c1\""
Apr 30 03:26:18.826435 systemd[1]: Started cri-containerd-249e886c10719b04107cbab166ab813fd531de8ba9b9cd2a5b8417d5aa29f8c1.scope - libcontainer container 249e886c10719b04107cbab166ab813fd531de8ba9b9cd2a5b8417d5aa29f8c1.
Apr 30 03:26:18.884195 containerd[1457]: time="2025-04-30T03:26:18.884025633Z" level=info msg="StartContainer for \"249e886c10719b04107cbab166ab813fd531de8ba9b9cd2a5b8417d5aa29f8c1\" returns successfully"
Apr 30 03:26:18.909795 systemd[1]: cri-containerd-249e886c10719b04107cbab166ab813fd531de8ba9b9cd2a5b8417d5aa29f8c1.scope: Deactivated successfully.
Apr 30 03:26:18.968936 containerd[1457]: time="2025-04-30T03:26:18.967740075Z" level=info msg="shim disconnected" id=249e886c10719b04107cbab166ab813fd531de8ba9b9cd2a5b8417d5aa29f8c1 namespace=k8s.io
Apr 30 03:26:18.968936 containerd[1457]: time="2025-04-30T03:26:18.967813951Z" level=warning msg="cleaning up after shim disconnected" id=249e886c10719b04107cbab166ab813fd531de8ba9b9cd2a5b8417d5aa29f8c1 namespace=k8s.io
Apr 30 03:26:18.968936 containerd[1457]: time="2025-04-30T03:26:18.967823402Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:26:19.653615 systemd[1]: run-containerd-runc-k8s.io-249e886c10719b04107cbab166ab813fd531de8ba9b9cd2a5b8417d5aa29f8c1-runc.4znT7F.mount: Deactivated successfully.
Apr 30 03:26:19.653791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-249e886c10719b04107cbab166ab813fd531de8ba9b9cd2a5b8417d5aa29f8c1-rootfs.mount: Deactivated successfully.
Apr 30 03:26:19.745483 kubelet[2472]: E0430 03:26:19.744686 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:19.751271 containerd[1457]: time="2025-04-30T03:26:19.751106431Z" level=info msg="CreateContainer within sandbox \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 03:26:19.781577 containerd[1457]: time="2025-04-30T03:26:19.781517895Z" level=info msg="CreateContainer within sandbox \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a7f5bcb8241f3e8979b145783f9f88982b5a8df0dec882390632bce10dcf3da9\""
Apr 30 03:26:19.787229 containerd[1457]: time="2025-04-30T03:26:19.785129915Z" level=info msg="StartContainer for \"a7f5bcb8241f3e8979b145783f9f88982b5a8df0dec882390632bce10dcf3da9\""
Apr 30 03:26:19.789827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656559546.mount: Deactivated successfully.
Apr 30 03:26:19.832205 systemd[1]: Started cri-containerd-a7f5bcb8241f3e8979b145783f9f88982b5a8df0dec882390632bce10dcf3da9.scope - libcontainer container a7f5bcb8241f3e8979b145783f9f88982b5a8df0dec882390632bce10dcf3da9.
Apr 30 03:26:19.866364 systemd[1]: cri-containerd-a7f5bcb8241f3e8979b145783f9f88982b5a8df0dec882390632bce10dcf3da9.scope: Deactivated successfully.
Apr 30 03:26:19.875629 containerd[1457]: time="2025-04-30T03:26:19.867013975Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2b26a8c_6458_4df0_9cec_de3288bcf921.slice/cri-containerd-a7f5bcb8241f3e8979b145783f9f88982b5a8df0dec882390632bce10dcf3da9.scope/memory.events\": no such file or directory"
Apr 30 03:26:19.878173 containerd[1457]: time="2025-04-30T03:26:19.877970023Z" level=info msg="StartContainer for \"a7f5bcb8241f3e8979b145783f9f88982b5a8df0dec882390632bce10dcf3da9\" returns successfully"
Apr 30 03:26:19.911299 containerd[1457]: time="2025-04-30T03:26:19.911017308Z" level=info msg="shim disconnected" id=a7f5bcb8241f3e8979b145783f9f88982b5a8df0dec882390632bce10dcf3da9 namespace=k8s.io
Apr 30 03:26:19.911299 containerd[1457]: time="2025-04-30T03:26:19.911095539Z" level=warning msg="cleaning up after shim disconnected" id=a7f5bcb8241f3e8979b145783f9f88982b5a8df0dec882390632bce10dcf3da9 namespace=k8s.io
Apr 30 03:26:19.911299 containerd[1457]: time="2025-04-30T03:26:19.911108924Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:26:20.653721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7f5bcb8241f3e8979b145783f9f88982b5a8df0dec882390632bce10dcf3da9-rootfs.mount: Deactivated successfully.
Apr 30 03:26:20.752408 kubelet[2472]: E0430 03:26:20.750921 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:20.758762 containerd[1457]: time="2025-04-30T03:26:20.758714031Z" level=info msg="CreateContainer within sandbox \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 03:26:20.795032 containerd[1457]: time="2025-04-30T03:26:20.794961689Z" level=info msg="CreateContainer within sandbox \"cf8d4c1acd3efbb797ee5089766682ffc1c639076ff7cf73a038ba5b6230598f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"af9a98dfb9d5e8f7fb993b20c07e8f75690992ede3784d76dcb18d6c2f67e91a\""
Apr 30 03:26:20.798351 containerd[1457]: time="2025-04-30T03:26:20.797795352Z" level=info msg="StartContainer for \"af9a98dfb9d5e8f7fb993b20c07e8f75690992ede3784d76dcb18d6c2f67e91a\""
Apr 30 03:26:20.852445 systemd[1]: Started cri-containerd-af9a98dfb9d5e8f7fb993b20c07e8f75690992ede3784d76dcb18d6c2f67e91a.scope - libcontainer container af9a98dfb9d5e8f7fb993b20c07e8f75690992ede3784d76dcb18d6c2f67e91a.
Apr 30 03:26:20.889758 containerd[1457]: time="2025-04-30T03:26:20.889701827Z" level=info msg="StartContainer for \"af9a98dfb9d5e8f7fb993b20c07e8f75690992ede3784d76dcb18d6c2f67e91a\" returns successfully"
Apr 30 03:26:21.418977 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 03:26:21.653194 systemd[1]: run-containerd-runc-k8s.io-af9a98dfb9d5e8f7fb993b20c07e8f75690992ede3784d76dcb18d6c2f67e91a-runc.I9udTJ.mount: Deactivated successfully.
Apr 30 03:26:21.757912 kubelet[2472]: E0430 03:26:21.757764 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:21.783699 kubelet[2472]: I0430 03:26:21.782722 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zqklw" podStartSLOduration=5.782696033 podStartE2EDuration="5.782696033s" podCreationTimestamp="2025-04-30 03:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:26:21.781669213 +0000 UTC m=+108.673317621" watchObservedRunningTime="2025-04-30 03:26:21.782696033 +0000 UTC m=+108.674344440"
Apr 30 03:26:22.760811 kubelet[2472]: E0430 03:26:22.760699 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:23.500638 systemd[1]: run-containerd-runc-k8s.io-af9a98dfb9d5e8f7fb993b20c07e8f75690992ede3784d76dcb18d6c2f67e91a-runc.k4PWtD.mount: Deactivated successfully.
Apr 30 03:26:24.290719 kubelet[2472]: E0430 03:26:24.290602 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:24.957614 systemd-networkd[1365]: lxc_health: Link UP
Apr 30 03:26:25.000365 systemd-networkd[1365]: lxc_health: Gained carrier
Apr 30 03:26:25.686772 systemd[1]: run-containerd-runc-k8s.io-af9a98dfb9d5e8f7fb993b20c07e8f75690992ede3784d76dcb18d6c2f67e91a-runc.oz5bKK.mount: Deactivated successfully.
Apr 30 03:26:26.137156 systemd-networkd[1365]: lxc_health: Gained IPv6LL
Apr 30 03:26:26.751868 kubelet[2472]: E0430 03:26:26.751794 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:26.769959 kubelet[2472]: E0430 03:26:26.769652 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:27.771426 kubelet[2472]: E0430 03:26:27.771378 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 03:26:32.533840 sshd[4282]: pam_unix(sshd:session): session closed for user core
Apr 30 03:26:32.540589 systemd[1]: sshd@29-209.38.154.103:22-139.178.89.65:58496.service: Deactivated successfully.
Apr 30 03:26:32.544258 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 03:26:32.546857 systemd-logind[1438]: Session 28 logged out. Waiting for processes to exit.
Apr 30 03:26:32.549216 systemd-logind[1438]: Removed session 28.
Apr 30 03:26:33.288776 containerd[1457]: time="2025-04-30T03:26:33.288447369Z" level=info msg="StopPodSandbox for \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\""
Apr 30 03:26:33.288776 containerd[1457]: time="2025-04-30T03:26:33.288565817Z" level=info msg="TearDown network for sandbox \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\" successfully"
Apr 30 03:26:33.288776 containerd[1457]: time="2025-04-30T03:26:33.288577077Z" level=info msg="StopPodSandbox for \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\" returns successfully"
Apr 30 03:26:33.291656 containerd[1457]: time="2025-04-30T03:26:33.290126178Z" level=info msg="RemovePodSandbox for \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\""
Apr 30 03:26:33.291656 containerd[1457]: time="2025-04-30T03:26:33.290182637Z" level=info msg="Forcibly stopping sandbox \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\""
Apr 30 03:26:33.291656 containerd[1457]: time="2025-04-30T03:26:33.290257105Z" level=info msg="TearDown network for sandbox \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\" successfully"
Apr 30 03:26:33.293690 containerd[1457]: time="2025-04-30T03:26:33.293633402Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:26:33.294096 containerd[1457]: time="2025-04-30T03:26:33.294069554Z" level=info msg="RemovePodSandbox \"f8e0543135dd4e9b42af2000df7f21d21ce18102ef469a0575a5fc007622e8d8\" returns successfully"
Apr 30 03:26:33.294913 containerd[1457]: time="2025-04-30T03:26:33.294858598Z" level=info msg="StopPodSandbox for \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\""
Apr 30 03:26:33.295055 containerd[1457]: time="2025-04-30T03:26:33.294976170Z" level=info msg="TearDown network for sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" successfully"
Apr 30 03:26:33.295055 containerd[1457]: time="2025-04-30T03:26:33.294988212Z" level=info msg="StopPodSandbox for \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" returns successfully"
Apr 30 03:26:33.296022 containerd[1457]: time="2025-04-30T03:26:33.295584960Z" level=info msg="RemovePodSandbox for \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\""
Apr 30 03:26:33.296022 containerd[1457]: time="2025-04-30T03:26:33.295615389Z" level=info msg="Forcibly stopping sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\""
Apr 30 03:26:33.296022 containerd[1457]: time="2025-04-30T03:26:33.295673042Z" level=info msg="TearDown network for sandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" successfully"
Apr 30 03:26:33.298527 containerd[1457]: time="2025-04-30T03:26:33.298466817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:26:33.298722 containerd[1457]: time="2025-04-30T03:26:33.298536816Z" level=info msg="RemovePodSandbox \"91be602106b62eb4432f1e0e62ba07a67a2f506ec535a3b40a8d3dcef706d550\" returns successfully"