Apr 30 03:28:33.994784 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:28:33.994826 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:33.994840 kernel: BIOS-provided physical RAM map:
Apr 30 03:28:33.994847 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 03:28:33.994854 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 03:28:33.994864 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 03:28:33.994877 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Apr 30 03:28:33.994889 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Apr 30 03:28:33.994900 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 03:28:33.994912 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 03:28:33.994919 kernel: NX (Execute Disable) protection: active
Apr 30 03:28:33.994942 kernel: APIC: Static calls initialized
Apr 30 03:28:33.994960 kernel: SMBIOS 2.8 present.
Apr 30 03:28:33.994971 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Apr 30 03:28:33.994983 kernel: Hypervisor detected: KVM
Apr 30 03:28:33.994999 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:28:33.995025 kernel: kvm-clock: using sched offset of 3232154049 cycles
Apr 30 03:28:33.995037 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:28:33.995065 kernel: tsc: Detected 1999.999 MHz processor
Apr 30 03:28:33.995078 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:28:33.995091 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:28:33.995102 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Apr 30 03:28:33.995113 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 03:28:33.995125 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:28:33.995143 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:28:33.995168 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Apr 30 03:28:33.995178 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:28:33.995189 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:28:33.995200 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:28:33.995211 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 30 03:28:33.995223 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:28:33.995236 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:28:33.995248 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:28:33.995265 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:28:33.995278 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Apr 30 03:28:33.995290 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Apr 30 03:28:33.995303 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 30 03:28:33.995314 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Apr 30 03:28:33.995321 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Apr 30 03:28:33.995329 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Apr 30 03:28:33.995339 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Apr 30 03:28:33.995350 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:28:33.995357 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:28:33.995365 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 30 03:28:33.995372 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Apr 30 03:28:33.995384 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Apr 30 03:28:33.995392 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Apr 30 03:28:33.995403 kernel: Zone ranges:
Apr 30 03:28:33.995411 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:28:33.995418 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Apr 30 03:28:33.995425 kernel: Normal empty
Apr 30 03:28:33.995432 kernel: Movable zone start for each node
Apr 30 03:28:33.995440 kernel: Early memory node ranges
Apr 30 03:28:33.995447 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 03:28:33.995454 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Apr 30 03:28:33.995461 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Apr 30 03:28:33.995472 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:28:33.995480 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 03:28:33.995491 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Apr 30 03:28:33.995498 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:28:33.995505 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:28:33.995512 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:28:33.995520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:28:33.995527 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:28:33.995535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:28:33.995545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:28:33.995552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:28:33.995560 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:28:33.995567 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:28:33.995575 kernel: TSC deadline timer available
Apr 30 03:28:33.995582 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:28:33.995589 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:28:33.995596 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Apr 30 03:28:33.995607 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:28:33.995615 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:28:33.995625 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:28:33.995633 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:28:33.995640 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:28:33.995647 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:28:33.995655 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 03:28:33.995664 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:28:33.995672 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:28:33.995679 kernel: random: crng init done
Apr 30 03:28:33.995689 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:28:33.995699 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:28:33.995710 kernel: Fallback order for Node 0: 0
Apr 30 03:28:33.995720 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Apr 30 03:28:33.995731 kernel: Policy zone: DMA32
Apr 30 03:28:33.995741 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:28:33.995752 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 125148K reserved, 0K cma-reserved)
Apr 30 03:28:33.995765 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:28:33.995781 kernel: Kernel/User page tables isolation: enabled
Apr 30 03:28:33.995795 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:28:33.995807 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:28:33.995818 kernel: Dynamic Preempt: voluntary
Apr 30 03:28:33.995830 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:28:33.995841 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:28:33.995853 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:28:33.995865 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:28:33.995876 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:28:33.995888 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:28:33.995903 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:28:33.995915 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:28:33.995925 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:28:33.995937 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:28:33.995956 kernel: Console: colour VGA+ 80x25
Apr 30 03:28:33.995969 kernel: printk: console [tty0] enabled
Apr 30 03:28:33.995982 kernel: printk: console [ttyS0] enabled
Apr 30 03:28:33.995990 kernel: ACPI: Core revision 20230628
Apr 30 03:28:33.995997 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:28:33.996009 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:28:33.996022 kernel: x2apic enabled
Apr 30 03:28:33.996035 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:28:33.996697 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:28:33.996720 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Apr 30 03:28:33.996734 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Apr 30 03:28:33.996748 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 30 03:28:33.996761 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 30 03:28:33.996805 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:28:33.996819 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:28:33.996833 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:28:33.996849 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:28:33.996861 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 30 03:28:33.996873 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:28:33.996885 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:28:33.996898 kernel: MDS: Mitigation: Clear CPU buffers
Apr 30 03:28:33.996910 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:28:33.996933 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:28:33.996947 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:28:33.996959 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:28:33.996972 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:28:33.996986 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 30 03:28:33.996998 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:28:33.997009 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:28:33.997021 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:28:33.997037 kernel: landlock: Up and running.
Apr 30 03:28:33.997081 kernel: SELinux: Initializing.
Apr 30 03:28:33.997094 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:28:33.997106 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:28:33.997118 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Apr 30 03:28:33.997131 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:33.997144 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:33.997157 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:28:33.997168 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Apr 30 03:28:33.997185 kernel: signal: max sigframe size: 1776
Apr 30 03:28:33.997199 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:28:33.997212 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:28:33.997224 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:28:33.997236 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:28:33.997248 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:28:33.997259 kernel: .... node #0, CPUs: #1
Apr 30 03:28:33.997272 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:28:33.997294 kernel: smpboot: Max logical packages: 1
Apr 30 03:28:33.997309 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Apr 30 03:28:33.997321 kernel: devtmpfs: initialized
Apr 30 03:28:33.997335 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:28:33.997347 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:28:33.997359 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:28:33.997372 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:28:33.997385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:28:33.997397 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:28:33.997410 kernel: audit: type=2000 audit(1745983712.950:1): state=initialized audit_enabled=0 res=1
Apr 30 03:28:33.997426 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:28:33.997438 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:28:33.997451 kernel: cpuidle: using governor menu
Apr 30 03:28:33.997463 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:28:33.997475 kernel: dca service started, version 1.12.1
Apr 30 03:28:33.997486 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:28:33.997500 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:28:33.997511 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:28:33.997523 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:28:33.997540 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:28:33.997552 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:28:33.997564 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:28:33.997578 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:28:33.997591 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:28:33.997602 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:28:33.997615 kernel: ACPI: Interpreter enabled
Apr 30 03:28:33.997627 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:28:33.997641 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:28:33.997658 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:28:33.997671 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:28:33.997683 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 03:28:33.997697 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:28:33.998027 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:28:33.998251 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 03:28:33.998382 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 03:28:33.998400 kernel: acpiphp: Slot [3] registered
Apr 30 03:28:33.998410 kernel: acpiphp: Slot [4] registered
Apr 30 03:28:33.998424 kernel: acpiphp: Slot [5] registered
Apr 30 03:28:33.998437 kernel: acpiphp: Slot [6] registered
Apr 30 03:28:33.998452 kernel: acpiphp: Slot [7] registered
Apr 30 03:28:33.998461 kernel: acpiphp: Slot [8] registered
Apr 30 03:28:33.998469 kernel: acpiphp: Slot [9] registered
Apr 30 03:28:33.998477 kernel: acpiphp: Slot [10] registered
Apr 30 03:28:33.998486 kernel: acpiphp: Slot [11] registered
Apr 30 03:28:33.998496 kernel: acpiphp: Slot [12] registered
Apr 30 03:28:33.998510 kernel: acpiphp: Slot [13] registered
Apr 30 03:28:33.998524 kernel: acpiphp: Slot [14] registered
Apr 30 03:28:33.998538 kernel: acpiphp: Slot [15] registered
Apr 30 03:28:33.998550 kernel: acpiphp: Slot [16] registered
Apr 30 03:28:33.998558 kernel: acpiphp: Slot [17] registered
Apr 30 03:28:33.998566 kernel: acpiphp: Slot [18] registered
Apr 30 03:28:33.998574 kernel: acpiphp: Slot [19] registered
Apr 30 03:28:33.998583 kernel: acpiphp: Slot [20] registered
Apr 30 03:28:33.998597 kernel: acpiphp: Slot [21] registered
Apr 30 03:28:33.998614 kernel: acpiphp: Slot [22] registered
Apr 30 03:28:33.998628 kernel: acpiphp: Slot [23] registered
Apr 30 03:28:33.998642 kernel: acpiphp: Slot [24] registered
Apr 30 03:28:33.998654 kernel: acpiphp: Slot [25] registered
Apr 30 03:28:33.998662 kernel: acpiphp: Slot [26] registered
Apr 30 03:28:33.998676 kernel: acpiphp: Slot [27] registered
Apr 30 03:28:33.998690 kernel: acpiphp: Slot [28] registered
Apr 30 03:28:33.998704 kernel: acpiphp: Slot [29] registered
Apr 30 03:28:33.998718 kernel: acpiphp: Slot [30] registered
Apr 30 03:28:33.998734 kernel: acpiphp: Slot [31] registered
Apr 30 03:28:33.998748 kernel: PCI host bridge to bus 0000:00
Apr 30 03:28:33.998921 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:28:33.999042 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:28:33.999148 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:28:33.999243 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 03:28:33.999328 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Apr 30 03:28:33.999410 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:28:33.999567 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 03:28:33.999721 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 03:28:33.999871 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Apr 30 03:28:34.000010 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Apr 30 03:28:34.000200 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Apr 30 03:28:34.000340 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Apr 30 03:28:34.000453 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Apr 30 03:28:34.000546 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Apr 30 03:28:34.000646 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Apr 30 03:28:34.000740 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Apr 30 03:28:34.000886 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 30 03:28:34.000980 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Apr 30 03:28:34.001105 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Apr 30 03:28:34.001261 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Apr 30 03:28:34.001383 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Apr 30 03:28:34.001484 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Apr 30 03:28:34.001613 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Apr 30 03:28:34.001722 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Apr 30 03:28:34.001814 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:28:34.001976 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:28:34.002139 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Apr 30 03:28:34.002274 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Apr 30 03:28:34.002413 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Apr 30 03:28:34.002541 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:28:34.002636 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Apr 30 03:28:34.002730 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Apr 30 03:28:34.002870 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Apr 30 03:28:34.002988 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Apr 30 03:28:34.004622 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Apr 30 03:28:34.004750 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Apr 30 03:28:34.004874 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Apr 30 03:28:34.005011 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:28:34.006262 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Apr 30 03:28:34.006466 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Apr 30 03:28:34.006576 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Apr 30 03:28:34.006738 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:28:34.006870 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Apr 30 03:28:34.007045 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Apr 30 03:28:34.008289 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Apr 30 03:28:34.008471 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Apr 30 03:28:34.008632 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Apr 30 03:28:34.008748 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Apr 30 03:28:34.008759 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:28:34.008786 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:28:34.008800 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:28:34.008812 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:28:34.008825 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 03:28:34.008852 kernel: iommu: Default domain type: Translated
Apr 30 03:28:34.008866 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:28:34.008879 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:28:34.008892 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:28:34.008904 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 03:28:34.008916 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Apr 30 03:28:34.009046 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Apr 30 03:28:34.009194 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Apr 30 03:28:34.009336 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:28:34.009353 kernel: vgaarb: loaded
Apr 30 03:28:34.009365 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:28:34.009378 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:28:34.009390 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:28:34.009401 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:28:34.009414 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:28:34.009426 kernel: pnp: PnP ACPI init
Apr 30 03:28:34.009437 kernel: pnp: PnP ACPI: found 4 devices
Apr 30 03:28:34.009466 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:28:34.009479 kernel: NET: Registered PF_INET protocol family
Apr 30 03:28:34.009494 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:28:34.009507 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:28:34.011193 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:28:34.011210 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:28:34.011224 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:28:34.011238 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:28:34.011251 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:28:34.011277 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:28:34.011290 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:28:34.011304 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:28:34.011509 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:28:34.011703 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:28:34.011849 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:28:34.011987 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 03:28:34.013340 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Apr 30 03:28:34.013540 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Apr 30 03:28:34.013690 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 03:28:34.013711 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 30 03:28:34.013861 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 34725 usecs
Apr 30 03:28:34.013881 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:28:34.013895 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:28:34.013910 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Apr 30 03:28:34.013925 kernel: Initialise system trusted keyrings
Apr 30 03:28:34.013956 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:28:34.013970 kernel: Key type asymmetric registered
Apr 30 03:28:34.013984 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:28:34.013998 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:28:34.014013 kernel: io scheduler mq-deadline registered
Apr 30 03:28:34.014028 kernel: io scheduler kyber registered
Apr 30 03:28:34.014043 kernel: io scheduler bfq registered
Apr 30 03:28:34.015153 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:28:34.015166 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Apr 30 03:28:34.015175 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 30 03:28:34.015196 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 30 03:28:34.015204 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:28:34.015213 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:28:34.015221 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:28:34.015229 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:28:34.015238 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:28:34.015246 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:28:34.015451 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 30 03:28:34.015575 kernel: rtc_cmos 00:03: registered as rtc0
Apr 30 03:28:34.015662 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T03:28:33 UTC (1745983713)
Apr 30 03:28:34.015746 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Apr 30 03:28:34.015757 kernel: intel_pstate: CPU model not supported
Apr 30 03:28:34.015765 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:28:34.015775 kernel: Segment Routing with IPv6
Apr 30 03:28:34.015789 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:28:34.015801 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:28:34.015824 kernel: Key type dns_resolver registered
Apr 30 03:28:34.015838 kernel: IPI shorthand broadcast: enabled
Apr 30 03:28:34.015851 kernel: sched_clock: Marking stable (1057008518, 148825844)->(1328990584, -123156222)
Apr 30 03:28:34.015864 kernel: registered taskstats version 1
Apr 30 03:28:34.015878 kernel: Loading compiled-in X.509 certificates
Apr 30 03:28:34.015892 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:28:34.015901 kernel: Key type .fscrypt registered
Apr 30 03:28:34.015909 kernel: Key type fscrypt-provisioning registered
Apr 30 03:28:34.015919 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:28:34.015943 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:28:34.015957 kernel: ima: No architecture policies found
Apr 30 03:28:34.015969 kernel: clk: Disabling unused clocks
Apr 30 03:28:34.015977 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:28:34.015986 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:28:34.016045 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:28:34.016619 kernel: Run /init as init process
Apr 30 03:28:34.016629 kernel: with arguments:
Apr 30 03:28:34.016638 kernel: /init
Apr 30 03:28:34.016661 kernel: with environment:
Apr 30 03:28:34.016669 kernel: HOME=/
Apr 30 03:28:34.016678 kernel: TERM=linux
Apr 30 03:28:34.016692 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:28:34.016708 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:28:34.016734 systemd[1]: Detected virtualization kvm.
Apr 30 03:28:34.016743 systemd[1]: Detected architecture x86-64.
Apr 30 03:28:34.016752 systemd[1]: Running in initrd.
Apr 30 03:28:34.016768 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:28:34.016795 systemd[1]: Hostname set to .
Apr 30 03:28:34.016805 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:28:34.016813 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:28:34.016823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:28:34.016832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:28:34.016843 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:28:34.016852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:28:34.016875 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:28:34.016895 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:28:34.016907 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:28:34.016934 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:28:34.016949 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:28:34.016966 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:28:34.016985 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:28:34.016994 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:28:34.017003 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:28:34.017018 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:28:34.017027 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:28:34.017036 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:28:34.017074 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:28:34.017083 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:28:34.017092 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:28:34.017106 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:28:34.017117 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:28:34.017132 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:28:34.017147 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:28:34.017157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:28:34.017173 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:28:34.017182 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:28:34.017191 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:28:34.017200 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:28:34.017209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:34.017252 systemd-journald[183]: Collecting audit messages is disabled.
Apr 30 03:28:34.017282 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:28:34.017291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:28:34.017300 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:28:34.017311 systemd-journald[183]: Journal started
Apr 30 03:28:34.017338 systemd-journald[183]: Runtime Journal (/run/log/journal/2c9c2a6f5dcf46fab0afdc2c2366cf29) is 4.9M, max 39.3M, 34.4M free.
Apr 30 03:28:34.021086 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:28:34.022120 systemd-modules-load[184]: Inserted module 'overlay'
Apr 30 03:28:34.029346 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:28:34.043272 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:28:34.087897 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:28:34.087952 kernel: Bridge firewalling registered
Apr 30 03:28:34.079678 systemd-modules-load[184]: Inserted module 'br_netfilter'
Apr 30 03:28:34.095012 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:28:34.096133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:34.100573 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:28:34.111470 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:28:34.114292 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:28:34.118413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:28:34.121855 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:28:34.138182 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:34.150443 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:28:34.151746 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:28:34.155086 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:28:34.167251 dracut-cmdline[214]: dracut-dracut-053
Apr 30 03:28:34.167421 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:28:34.173714 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:34.211685 systemd-resolved[222]: Positive Trust Anchors: Apr 30 03:28:34.211711 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:28:34.211756 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:28:34.216296 systemd-resolved[222]: Defaulting to hostname 'linux'. Apr 30 03:28:34.218162 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:28:34.218872 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:34.295152 kernel: SCSI subsystem initialized Apr 30 03:28:34.308127 kernel: Loading iSCSI transport class v2.0-870. Apr 30 03:28:34.323301 kernel: iscsi: registered transport (tcp) Apr 30 03:28:34.354423 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:28:34.354536 kernel: QLogic iSCSI HBA Driver Apr 30 03:28:34.415699 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 30 03:28:34.423480 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:28:34.460647 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:28:34.460867 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:28:34.460938 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:28:34.516129 kernel: raid6: avx2x4 gen() 24943 MB/s Apr 30 03:28:34.533138 kernel: raid6: avx2x2 gen() 25048 MB/s Apr 30 03:28:34.550314 kernel: raid6: avx2x1 gen() 14876 MB/s Apr 30 03:28:34.550460 kernel: raid6: using algorithm avx2x2 gen() 25048 MB/s Apr 30 03:28:34.569144 kernel: raid6: .... xor() 14177 MB/s, rmw enabled Apr 30 03:28:34.569261 kernel: raid6: using avx2x2 recovery algorithm Apr 30 03:28:34.602127 kernel: xor: automatically using best checksumming function avx Apr 30 03:28:34.793127 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:28:34.810130 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:34.818425 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:34.844896 systemd-udevd[402]: Using default interface naming scheme 'v255'. Apr 30 03:28:34.851433 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:34.859573 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:28:34.882150 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Apr 30 03:28:34.924391 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:34.937413 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:28:35.003249 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:35.010344 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 30 03:28:35.039854 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:35.043712 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:35.045346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:35.048514 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:28:35.058554 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:28:35.094947 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:35.117092 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Apr 30 03:28:35.168598 kernel: scsi host0: Virtio SCSI HBA Apr 30 03:28:35.173121 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:28:35.173173 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Apr 30 03:28:35.173501 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:28:35.173524 kernel: GPT:9289727 != 125829119 Apr 30 03:28:35.173540 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:28:35.175356 kernel: GPT:9289727 != 125829119 Apr 30 03:28:35.175793 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 03:28:35.175818 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:28:35.175836 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Apr 30 03:28:35.177285 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Apr 30 03:28:35.182767 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:35.183032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:35.186793 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:35.187645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 30 03:28:35.188809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:35.190449 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:35.201643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:35.211117 kernel: ACPI: bus type USB registered Apr 30 03:28:35.211259 kernel: usbcore: registered new interface driver usbfs Apr 30 03:28:35.214163 kernel: libata version 3.00 loaded. Apr 30 03:28:35.224438 kernel: usbcore: registered new interface driver hub Apr 30 03:28:35.224558 kernel: usbcore: registered new device driver usb Apr 30 03:28:35.236141 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:28:35.242717 kernel: ata_piix 0000:00:01.1: version 2.13 Apr 30 03:28:35.256638 kernel: AES CTR mode by8 optimization enabled Apr 30 03:28:35.256672 kernel: scsi host1: ata_piix Apr 30 03:28:35.256986 kernel: scsi host2: ata_piix Apr 30 03:28:35.257550 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Apr 30 03:28:35.257576 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Apr 30 03:28:35.327118 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Apr 30 03:28:35.330333 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 03:28:35.334227 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:35.343177 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (462) Apr 30 03:28:35.351096 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 03:28:35.356968 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Apr 30 03:28:35.361855 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 03:28:35.362640 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 03:28:35.370361 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:28:35.372188 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:35.382005 disk-uuid[533]: Primary Header is updated. Apr 30 03:28:35.382005 disk-uuid[533]: Secondary Entries is updated. Apr 30 03:28:35.382005 disk-uuid[533]: Secondary Header is updated. Apr 30 03:28:35.388143 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:28:35.398108 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:28:35.406988 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:35.493712 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Apr 30 03:28:35.504308 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Apr 30 03:28:35.504531 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Apr 30 03:28:35.504725 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Apr 30 03:28:35.504912 kernel: hub 1-0:1.0: USB hub found Apr 30 03:28:35.506301 kernel: hub 1-0:1.0: 2 ports detected Apr 30 03:28:36.400103 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:28:36.401475 disk-uuid[534]: The operation has completed successfully. Apr 30 03:28:36.451825 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:28:36.451942 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:28:36.466394 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 30 03:28:36.471705 sh[562]: Success Apr 30 03:28:36.489338 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:28:36.575926 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:28:36.594300 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:28:36.596875 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:28:36.633393 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:28:36.638120 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:36.638228 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:28:36.638246 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:28:36.640471 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:28:36.650216 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:28:36.651922 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:28:36.661497 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:28:36.667430 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:28:36.678407 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:36.678561 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:36.680680 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:28:36.686088 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:28:36.705633 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:36.705195 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Apr 30 03:28:36.714893 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:28:36.722457 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:28:36.889933 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:36.901588 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:28:36.912147 ignition[640]: Ignition 2.19.0 Apr 30 03:28:36.912161 ignition[640]: Stage: fetch-offline Apr 30 03:28:36.918017 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:36.912226 ignition[640]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:36.912243 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:28:36.912405 ignition[640]: parsed url from cmdline: "" Apr 30 03:28:36.912411 ignition[640]: no config URL provided Apr 30 03:28:36.912420 ignition[640]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:28:36.912433 ignition[640]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:28:36.912442 ignition[640]: failed to fetch config: resource requires networking Apr 30 03:28:36.912738 ignition[640]: Ignition finished successfully Apr 30 03:28:36.944314 systemd-networkd[749]: lo: Link UP Apr 30 03:28:36.944333 systemd-networkd[749]: lo: Gained carrier Apr 30 03:28:36.947585 systemd-networkd[749]: Enumeration completed Apr 30 03:28:36.947787 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:28:36.949022 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Apr 30 03:28:36.949028 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Apr 30 03:28:36.950173 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:36.950178 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:36.950933 systemd-networkd[749]: eth0: Link UP Apr 30 03:28:36.950939 systemd-networkd[749]: eth0: Gained carrier Apr 30 03:28:36.950953 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Apr 30 03:28:36.952559 systemd[1]: Reached target network.target - Network. Apr 30 03:28:36.956530 systemd-networkd[749]: eth1: Link UP Apr 30 03:28:36.956535 systemd-networkd[749]: eth1: Gained carrier Apr 30 03:28:36.956552 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:36.960035 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 03:28:36.972893 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.27/20 acquired from 169.254.169.253 Apr 30 03:28:36.977268 systemd-networkd[749]: eth0: DHCPv4 address 164.92.87.160/20, gateway 164.92.80.1 acquired from 169.254.169.253 Apr 30 03:28:36.997204 ignition[752]: Ignition 2.19.0 Apr 30 03:28:36.997220 ignition[752]: Stage: fetch Apr 30 03:28:36.997444 ignition[752]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:36.997458 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:28:36.997565 ignition[752]: parsed url from cmdline: "" Apr 30 03:28:36.997569 ignition[752]: no config URL provided Apr 30 03:28:36.997576 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:28:36.997585 ignition[752]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:28:36.997608 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Apr 30 03:28:37.016709 ignition[752]: GET result: OK Apr 30 03:28:37.017132 ignition[752]: parsing config with SHA512: 8fcbd0fa7143bed9bbb2512e34cfd8a6a87aa23778ee44b835f2ce054ff4e24c45169f4ce4d87e64d54ab740992fed2239dad454dd519f62ac11510f7a89778d Apr 30 03:28:37.022786 unknown[752]: fetched base config from "system" Apr 30 03:28:37.022801 unknown[752]: fetched base config from "system" Apr 30 03:28:37.023335 ignition[752]: fetch: fetch complete Apr 30 03:28:37.022808 unknown[752]: fetched user config from "digitalocean" Apr 30 03:28:37.023341 ignition[752]: fetch: fetch passed Apr 30 03:28:37.025312 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 03:28:37.023392 ignition[752]: Ignition finished successfully Apr 30 03:28:37.034419 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 03:28:37.055790 ignition[760]: Ignition 2.19.0 Apr 30 03:28:37.055808 ignition[760]: Stage: kargs Apr 30 03:28:37.056240 ignition[760]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:37.056259 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:28:37.061166 ignition[760]: kargs: kargs passed Apr 30 03:28:37.061258 ignition[760]: Ignition finished successfully Apr 30 03:28:37.063017 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:28:37.073344 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:28:37.098458 ignition[766]: Ignition 2.19.0 Apr 30 03:28:37.098472 ignition[766]: Stage: disks Apr 30 03:28:37.098733 ignition[766]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:37.098746 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:28:37.101804 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:28:37.100073 ignition[766]: disks: disks passed Apr 30 03:28:37.103335 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:37.100140 ignition[766]: Ignition finished successfully Apr 30 03:28:37.108327 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:28:37.109658 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:37.110755 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:28:37.112129 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:28:37.123477 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:28:37.140705 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 03:28:37.144395 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:28:37.168493 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 30 03:28:37.287113 kernel: EXT4-fs (vda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:28:37.287396 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:28:37.288623 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:28:37.296272 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:37.303428 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:28:37.308432 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Apr 30 03:28:37.312921 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 03:28:37.314972 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:28:37.315203 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:37.325847 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (784) Apr 30 03:28:37.330736 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:37.330811 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:37.330824 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:28:37.335382 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:28:37.344925 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 03:28:37.360087 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:28:37.375575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 03:28:37.408990 coreos-metadata[787]: Apr 30 03:28:37.408 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:28:37.421134 coreos-metadata[786]: Apr 30 03:28:37.420 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Apr 30 03:28:37.425078 coreos-metadata[787]: Apr 30 03:28:37.423 INFO Fetch successful Apr 30 03:28:37.428856 coreos-metadata[787]: Apr 30 03:28:37.428 INFO wrote hostname ci-4081.3.3-a-32b52f0300 to /sysroot/etc/hostname Apr 30 03:28:37.430715 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 03:28:37.433106 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:28:37.435953 coreos-metadata[786]: Apr 30 03:28:37.435 INFO Fetch successful Apr 30 03:28:37.439814 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:28:37.443985 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Apr 30 03:28:37.444159 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Apr 30 03:28:37.451363 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:28:37.457349 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:28:37.568671 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:37.574276 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:28:37.577267 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:28:37.592100 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:37.624949 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:28:37.634126 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 30 03:28:37.637909 ignition[906]: INFO : Ignition 2.19.0 Apr 30 03:28:37.637909 ignition[906]: INFO : Stage: mount Apr 30 03:28:37.639840 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:37.639840 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:28:37.642144 ignition[906]: INFO : mount: mount passed Apr 30 03:28:37.642144 ignition[906]: INFO : Ignition finished successfully Apr 30 03:28:37.642227 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:28:37.650265 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:28:37.669364 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:37.679281 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918) Apr 30 03:28:37.679380 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:37.681839 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:37.681907 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:28:37.688107 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:28:37.689147 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 03:28:37.722953 ignition[935]: INFO : Ignition 2.19.0 Apr 30 03:28:37.724101 ignition[935]: INFO : Stage: files Apr 30 03:28:37.724101 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:37.725415 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 03:28:37.726267 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:28:37.727555 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:28:37.727555 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:28:37.731728 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:28:37.733267 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:28:37.733267 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:28:37.732290 unknown[935]: wrote ssh authorized keys file for user: core Apr 30 03:28:37.736021 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:37.736021 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:28:37.776210 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:28:37.888178 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:37.888178 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 03:28:37.890495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 03:28:38.547368 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 03:28:38.633829 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 03:28:38.633829 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:38.636407 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 03:28:38.820466 systemd-networkd[749]: eth1: Gained IPv6LL
Apr 30 03:28:38.948717 systemd-networkd[749]: eth0: Gained IPv6LL
Apr 30 03:28:39.108194 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 03:28:39.499092 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:28:39.499092 ignition[935]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 03:28:39.503363 ignition[935]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:28:39.503363 ignition[935]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:28:39.503363 ignition[935]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 03:28:39.503363 ignition[935]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:28:39.510682 ignition[935]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:28:39.510682 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:28:39.510682 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:28:39.510682 ignition[935]: INFO : files: files passed
Apr 30 03:28:39.510682 ignition[935]: INFO : Ignition finished successfully
Apr 30 03:28:39.506097 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:28:39.516462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:28:39.522250 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:28:39.527965 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:28:39.528265 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:28:39.551156 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:39.551156 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:39.555919 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:28:39.558071 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:28:39.561569 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:28:39.572565 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:28:39.619894 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:28:39.620092 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:28:39.622610 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:28:39.624378 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:28:39.626186 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:28:39.638328 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:28:39.657207 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:28:39.661407 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:28:39.686364 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:28:39.688429 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:28:39.690459 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:28:39.691580 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:28:39.691793 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:28:39.693528 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:28:39.694735 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:28:39.696256 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:28:39.698022 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:28:39.699340 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:28:39.700867 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:28:39.702324 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:28:39.704114 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:28:39.705808 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:28:39.707304 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:28:39.708498 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:28:39.708662 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:28:39.710537 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:28:39.711854 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:28:39.713258 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:28:39.713389 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:28:39.714959 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:28:39.715170 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:28:39.717001 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:28:39.717231 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:28:39.718920 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:28:39.719044 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:28:39.720313 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:28:39.720546 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:28:39.732565 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:28:39.736027 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:28:39.736417 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:28:39.739380 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:28:39.741465 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:28:39.742383 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:28:39.744795 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:28:39.747182 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:28:39.759496 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:28:39.762156 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:28:39.777149 ignition[987]: INFO : Ignition 2.19.0
Apr 30 03:28:39.777149 ignition[987]: INFO : Stage: umount
Apr 30 03:28:39.793548 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:28:39.793548 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 03:28:39.793548 ignition[987]: INFO : umount: umount passed
Apr 30 03:28:39.793548 ignition[987]: INFO : Ignition finished successfully
Apr 30 03:28:39.786328 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:28:39.786496 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:28:39.788005 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:28:39.788144 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:28:39.788999 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:28:39.789316 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:28:39.796507 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:28:39.796601 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:28:39.797871 systemd[1]: Stopped target network.target - Network.
Apr 30 03:28:39.798621 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:28:39.798741 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:28:39.800471 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:28:39.801984 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:28:39.802119 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:28:39.803567 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:28:39.805031 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:28:39.806377 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:28:39.806444 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:28:39.808117 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:28:39.808186 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:28:39.809583 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:28:39.809683 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:28:39.810941 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:28:39.811021 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:28:39.812599 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:28:39.814236 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:28:39.817171 systemd-networkd[749]: eth1: DHCPv6 lease lost
Apr 30 03:28:39.817861 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:28:39.819968 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:28:39.820164 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:28:39.821191 systemd-networkd[749]: eth0: DHCPv6 lease lost
Apr 30 03:28:39.823335 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:28:39.823482 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:28:39.825732 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:28:39.825922 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:28:39.830607 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:28:39.831130 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:28:39.834777 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:28:39.834864 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:28:39.843271 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:28:39.844629 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:28:39.844766 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:28:39.846412 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:28:39.846508 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:28:39.848871 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:28:39.848949 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:28:39.850317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:28:39.850399 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:28:39.852245 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:28:39.871537 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:28:39.873370 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:28:39.875222 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:28:39.875443 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:28:39.877752 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:28:39.877857 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:28:39.879349 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:28:39.879403 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:28:39.880880 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:28:39.880960 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:28:39.883225 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:28:39.883302 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:28:39.885579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:28:39.885667 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:28:39.894348 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:28:39.895121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:28:39.895201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:28:39.901726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:39.901827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:39.911612 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:28:39.911852 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:28:39.913642 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:28:39.920442 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:28:39.936476 systemd[1]: Switching root.
Apr 30 03:28:40.021464 systemd-journald[183]: Journal stopped
Apr 30 03:28:41.732428 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:28:41.732533 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:28:41.732559 kernel: SELinux: policy capability open_perms=1
Apr 30 03:28:41.732583 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:28:41.732604 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:28:41.732631 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:28:41.732648 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:28:41.732675 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:28:41.732697 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:28:41.732728 kernel: audit: type=1403 audit(1745983720.368:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:28:41.732748 systemd[1]: Successfully loaded SELinux policy in 74.835ms.
Apr 30 03:28:41.732783 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.878ms.
Apr 30 03:28:41.732799 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:28:41.732816 systemd[1]: Detected virtualization kvm.
Apr 30 03:28:41.732829 systemd[1]: Detected architecture x86-64.
Apr 30 03:28:41.732851 systemd[1]: Detected first boot.
Apr 30 03:28:41.732868 systemd[1]: Hostname set to .
Apr 30 03:28:41.732886 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:28:41.732902 zram_generator::config[1029]: No configuration found.
Apr 30 03:28:41.732928 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:28:41.732945 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:28:41.732967 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:28:41.732983 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:28:41.733003 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:28:41.733028 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:28:41.733044 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:28:41.733174 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:28:41.733200 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:28:41.733219 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:28:41.733244 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:28:41.733264 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:28:41.733283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:28:41.733303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:28:41.733315 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:28:41.733326 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:28:41.733339 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:28:41.733351 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:28:41.733363 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:28:41.733379 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:28:41.733391 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:28:41.733403 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:28:41.733415 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:28:41.733428 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:28:41.733441 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:28:41.733453 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:28:41.733468 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:28:41.733481 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:28:41.733493 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:28:41.733511 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:28:41.733529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:28:41.733550 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:28:41.733563 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:28:41.733575 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:28:41.733586 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:28:41.733601 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:28:41.733613 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:28:41.733626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:41.733646 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:28:41.733667 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:28:41.733683 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:28:41.733702 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:28:41.733726 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:28:41.733754 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:28:41.733774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:28:41.733793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:28:41.733812 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:28:41.733831 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:28:41.733848 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:28:41.733866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:28:41.733884 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:28:41.733899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:28:41.733917 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:28:41.733930 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:28:41.733942 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:28:41.733953 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:28:41.733966 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:28:41.733978 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:28:41.733989 kernel: fuse: init (API version 7.39)
Apr 30 03:28:41.734003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:28:41.734015 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:28:41.734105 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:28:41.734118 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:28:41.734131 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:28:41.734143 systemd[1]: Stopped verity-setup.service.
Apr 30 03:28:41.734155 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:41.734167 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:28:41.734178 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:28:41.734195 kernel: ACPI: bus type drm_connector registered
Apr 30 03:28:41.734214 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:28:41.734226 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:28:41.734237 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:28:41.734250 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:28:41.734262 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:28:41.734279 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:28:41.734291 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:28:41.734302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:28:41.734317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:28:41.734332 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:28:41.734355 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:28:41.734367 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:28:41.734380 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:28:41.734392 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:28:41.734404 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:28:41.734415 kernel: loop: module loaded
Apr 30 03:28:41.734428 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:28:41.734440 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:28:41.734497 systemd-journald[1102]: Collecting audit messages is disabled.
Apr 30 03:28:41.734531 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:28:41.734544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:28:41.734558 systemd-journald[1102]: Journal started
Apr 30 03:28:41.734587 systemd-journald[1102]: Runtime Journal (/run/log/journal/2c9c2a6f5dcf46fab0afdc2c2366cf29) is 4.9M, max 39.3M, 34.4M free.
Apr 30 03:28:41.243205 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:28:41.267097 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 03:28:41.267953 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:28:41.741098 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:28:41.743144 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:28:41.770879 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:28:41.783301 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:28:41.800328 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:28:41.801403 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:28:41.801482 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:28:41.805581 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:28:41.814507 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:28:41.820510 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:28:41.822927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:28:41.836791 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:28:41.847379 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:28:41.848369 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:28:41.852318 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:28:41.853312 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:28:41.856118 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:28:41.865436 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:28:41.870226 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:28:41.872540 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:28:41.873943 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:28:41.897395 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:28:41.948188 systemd-journald[1102]: Time spent on flushing to /var/log/journal/2c9c2a6f5dcf46fab0afdc2c2366cf29 is 109.474ms for 989 entries.
Apr 30 03:28:41.948188 systemd-journald[1102]: System Journal (/var/log/journal/2c9c2a6f5dcf46fab0afdc2c2366cf29) is 8.0M, max 195.6M, 187.6M free.
Apr 30 03:28:42.089492 systemd-journald[1102]: Received client request to flush runtime journal.
Apr 30 03:28:42.089606 kernel: loop0: detected capacity change from 0 to 8
Apr 30 03:28:42.089650 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:28:42.089679 kernel: loop1: detected capacity change from 0 to 210664
Apr 30 03:28:41.946466 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:28:41.981501 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:28:41.984950 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:28:42.003452 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:28:42.082405 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:28:42.090437 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:28:42.104510 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:28:42.106083 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:28:42.112006 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:28:42.115815 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:28:42.120290 kernel: loop2: detected capacity change from 0 to 140768
Apr 30 03:28:42.156530 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:28:42.168182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:28:42.184203 kernel: loop3: detected capacity change from 0 to 142488
Apr 30 03:28:42.186944 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 03:28:42.252545 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Apr 30 03:28:42.252575 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Apr 30 03:28:42.276553 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:28:42.308092 kernel: loop4: detected capacity change from 0 to 8
Apr 30 03:28:42.315131 kernel: loop5: detected capacity change from 0 to 210664
Apr 30 03:28:42.338295 kernel: loop6: detected capacity change from 0 to 140768
Apr 30 03:28:42.376332 kernel: loop7: detected capacity change from 0 to 142488
Apr 30 03:28:42.381169 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Apr 30 03:28:42.384135 (sd-merge)[1175]: Merged extensions into '/usr'.
Apr 30 03:28:42.394185 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:28:42.394211 systemd[1]: Reloading...
Apr 30 03:28:42.639152 zram_generator::config[1201]: No configuration found.
Apr 30 03:28:42.722843 ldconfig[1140]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:28:42.877263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:28:42.968152 systemd[1]: Reloading finished in 573 ms.
Apr 30 03:28:42.995161 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 03:28:42.996627 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:28:43.008369 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:28:43.014028 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:28:43.030905 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:28:43.030924 systemd[1]: Reloading...
Apr 30 03:28:43.092577 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:28:43.096353 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:28:43.098904 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:28:43.100797 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Apr 30 03:28:43.100911 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Apr 30 03:28:43.111377 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:28:43.111391 systemd-tmpfiles[1245]: Skipping /boot
Apr 30 03:28:43.146480 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:28:43.149138 systemd-tmpfiles[1245]: Skipping /boot
Apr 30 03:28:43.219120 zram_generator::config[1279]: No configuration found.
Apr 30 03:28:43.377447 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:28:43.437014 systemd[1]: Reloading finished in 405 ms.
Apr 30 03:28:43.459943 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:28:43.467010 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:28:43.481369 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:28:43.486306 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 03:28:43.490765 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 03:28:43.499302 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:28:43.502017 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:28:43.505424 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 03:28:43.518866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:43.521308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:28:43.530734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:28:43.534695 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:28:43.542497 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:28:43.543570 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:28:43.543803 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:43.557520 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 03:28:43.561373 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:43.561685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:28:43.561950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:28:43.563155 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:43.572997 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:43.573505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:28:43.582529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:28:43.583665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:28:43.583912 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:43.586645 systemd[1]: Finished ensure-sysext.service.
Apr 30 03:28:43.598534 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 03:28:43.599782 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 03:28:43.622079 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 03:28:43.624683 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:28:43.624965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:28:43.638335 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 03:28:43.639778 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:28:43.640886 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:28:43.642775 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:28:43.643921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:28:43.647967 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:28:43.665379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:28:43.667237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:28:43.668521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:28:43.669638 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Apr 30 03:28:43.687561 augenrules[1352]: No rules
Apr 30 03:28:43.691149 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:28:43.697011 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 03:28:43.704239 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 03:28:43.707415 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:28:43.710192 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 03:28:43.720161 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:28:43.729299 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:28:43.810599 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 03:28:43.811759 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 03:28:43.879249 systemd-resolved[1322]: Positive Trust Anchors:
Apr 30 03:28:43.880074 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:28:43.880237 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:28:43.888724 systemd-resolved[1322]: Using system hostname 'ci-4081.3.3-a-32b52f0300'.
Apr 30 03:28:43.889627 systemd-networkd[1368]: lo: Link UP
Apr 30 03:28:43.889640 systemd-networkd[1368]: lo: Gained carrier
Apr 30 03:28:43.891638 systemd-networkd[1368]: Enumeration completed
Apr 30 03:28:43.891775 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:28:43.894273 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:28:43.894924 systemd[1]: Reached target network.target - Network.
Apr 30 03:28:43.895443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:28:43.906919 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 03:28:43.908461 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 03:28:43.924923 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Apr 30 03:28:43.925922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:43.926124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:28:43.935094 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1369)
Apr 30 03:28:43.938390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:28:43.950293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:28:43.954250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:28:43.956297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:28:43.956368 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:28:43.956398 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:28:43.984460 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:28:43.985176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:28:44.001636 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:28:44.001845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:28:44.006269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:28:44.006500 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:28:44.012080 kernel: ISO 9660 Extensions: RRIP_1991A
Apr 30 03:28:44.015770 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Apr 30 03:28:44.018165 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:28:44.018281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:28:44.097154 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 30 03:28:44.107112 kernel: ACPI: button: Power Button [PWRF]
Apr 30 03:28:44.123090 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Apr 30 03:28:44.143597 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 03:28:44.147096 systemd-networkd[1368]: eth1: Configuring with /run/systemd/network/10-6e:d1:8a:fd:fe:48.network.
Apr 30 03:28:44.148561 systemd-networkd[1368]: eth1: Link UP
Apr 30 03:28:44.148569 systemd-networkd[1368]: eth1: Gained carrier
Apr 30 03:28:44.150418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 03:28:44.154259 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection.
Apr 30 03:28:44.180666 systemd-networkd[1368]: eth0: Configuring with /run/systemd/network/10-d6:40:98:0f:05:52.network.
Apr 30 03:28:44.182207 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 30 03:28:44.182355 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection.
Apr 30 03:28:44.182723 systemd-networkd[1368]: eth0: Link UP
Apr 30 03:28:44.182732 systemd-networkd[1368]: eth0: Gained carrier
Apr 30 03:28:44.186798 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection.
Apr 30 03:28:44.190564 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection.
Apr 30 03:28:44.210104 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 03:28:44.275253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:44.280164 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Apr 30 03:28:44.280279 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 03:28:44.284116 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Apr 30 03:28:44.296101 kernel: Console: switching to colour dummy device 80x25
Apr 30 03:28:44.303445 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 30 03:28:44.303553 kernel: [drm] features: -context_init
Apr 30 03:28:44.313185 kernel: [drm] number of scanouts: 1
Apr 30 03:28:44.318094 kernel: [drm] number of cap sets: 0
Apr 30 03:28:44.320608 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:44.320844 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:44.322269 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Apr 30 03:28:44.331364 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 30 03:28:44.331476 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 03:28:44.332432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:44.344083 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 30 03:28:44.389409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:28:44.389881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:44.442035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:28:44.531544 kernel: EDAC MC: Ver: 3.0.0
Apr 30 03:28:44.544908 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:28:44.556790 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 03:28:44.574482 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 03:28:44.589329 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:28:44.623509 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 03:28:44.624437 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:28:44.624587 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:28:44.624929 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 03:28:44.625242 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 03:28:44.627905 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 03:28:44.628747 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 03:28:44.628917 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 03:28:44.629003 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 03:28:44.629042 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:28:44.630417 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:28:44.631971 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 03:28:44.634233 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 03:28:44.640399 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 03:28:44.644151 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 03:28:44.647831 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 03:28:44.648574 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:28:44.651283 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:28:44.651793 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:28:44.651828 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:28:44.663304 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 03:28:44.666416 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 03:28:44.674165 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:28:44.675471 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 03:28:44.680319 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 03:28:44.694156 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 03:28:44.696148 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 03:28:44.701653 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 03:28:44.712374 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 03:28:44.715414 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 03:28:44.723162 jq[1436]: false
Apr 30 03:28:44.726476 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 03:28:44.734855 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 03:28:44.737244 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 03:28:44.737911 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 03:28:44.745999 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 03:28:44.753263 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 03:28:44.759323 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 03:28:44.766210 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 03:28:44.766421 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 03:28:44.779021 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 03:28:44.779726 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 03:28:44.804011 dbus-daemon[1435]: [system] SELinux support is enabled
Apr 30 03:28:44.808118 extend-filesystems[1437]: Found loop4
Apr 30 03:28:44.806187 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found loop5
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found loop6
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found loop7
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found vda
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found vda1
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found vda2
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found vda3
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found usr
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found vda4
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found vda6
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found vda7
Apr 30 03:28:44.812041 extend-filesystems[1437]: Found vda9
Apr 30 03:28:44.812041 extend-filesystems[1437]: Checking size of /dev/vda9
Apr 30 03:28:44.814123 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 03:28:44.870208 update_engine[1445]: I20250430 03:28:44.859977 1445 main.cc:92] Flatcar Update Engine starting
Apr 30 03:28:44.870525 jq[1446]: true
Apr 30 03:28:44.870599 coreos-metadata[1434]: Apr 30 03:28:44.828 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 03:28:44.870599 coreos-metadata[1434]: Apr 30 03:28:44.845 INFO Fetch successful
Apr 30 03:28:44.814168 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 03:28:44.823331 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 03:28:44.879437 tar[1449]: linux-amd64/helm
Apr 30 03:28:44.885728 update_engine[1445]: I20250430 03:28:44.877661 1445 update_check_scheduler.cc:74] Next update check in 7m42s
Apr 30 03:28:44.823439 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Apr 30 03:28:44.823468 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 03:28:44.877154 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 03:28:44.881572 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 03:28:44.891354 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 03:28:44.921593 extend-filesystems[1437]: Resized partition /dev/vda9
Apr 30 03:28:44.929204 extend-filesystems[1476]: resize2fs 1.47.1 (20-May-2024)
Apr 30 03:28:44.932364 jq[1460]: true
Apr 30 03:28:44.935598 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Apr 30 03:28:44.947839 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 03:28:44.948285 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 03:28:44.986412 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1369)
Apr 30 03:28:45.036203 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 03:28:45.038648 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 03:28:45.108465 systemd-logind[1444]: New seat seat0.
Apr 30 03:28:45.116622 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Apr 30 03:28:45.136082 extend-filesystems[1476]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 30 03:28:45.136082 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 8
Apr 30 03:28:45.136082 extend-filesystems[1476]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Apr 30 03:28:45.146883 extend-filesystems[1437]: Resized filesystem in /dev/vda9
Apr 30 03:28:45.146883 extend-filesystems[1437]: Found vdb
Apr 30 03:28:45.136380 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 30 03:28:45.136404 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 03:28:45.137813 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 03:28:45.142238 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 03:28:45.142453 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 03:28:45.207519 bash[1501]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 03:28:45.200116 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 03:28:45.218957 systemd[1]: Starting sshkeys.service...
Apr 30 03:28:45.225504 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 03:28:45.273932 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 03:28:45.285977 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 03:28:45.376104 coreos-metadata[1507]: Apr 30 03:28:45.371 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 03:28:45.388545 coreos-metadata[1507]: Apr 30 03:28:45.388 INFO Fetch successful
Apr 30 03:28:45.404424 unknown[1507]: wrote ssh authorized keys file for user: core
Apr 30 03:28:45.452863 update-ssh-keys[1514]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 03:28:45.454526 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 03:28:45.460600 systemd[1]: Finished sshkeys.service.
Apr 30 03:28:45.502250 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 03:28:45.517333 containerd[1459]: time="2025-04-30T03:28:45.517227525Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 03:28:45.578552 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 03:28:45.589613 containerd[1459]: time="2025-04-30T03:28:45.589204191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:28:45.590588 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 03:28:45.598411 containerd[1459]: time="2025-04-30T03:28:45.598255278Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:28:45.598411 containerd[1459]: time="2025-04-30T03:28:45.598311766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 03:28:45.598411 containerd[1459]: time="2025-04-30T03:28:45.598332259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 03:28:45.598610 containerd[1459]: time="2025-04-30T03:28:45.598522691Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 03:28:45.598610 containerd[1459]: time="2025-04-30T03:28:45.598551512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 03:28:45.598648 containerd[1459]: time="2025-04-30T03:28:45.598616135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:28:45.598648 containerd[1459]: time="2025-04-30T03:28:45.598629098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:28:45.600942 containerd[1459]: time="2025-04-30T03:28:45.599751518Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:28:45.600942 containerd[1459]: time="2025-04-30T03:28:45.599789350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 03:28:45.600942 containerd[1459]: time="2025-04-30T03:28:45.599806097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:28:45.600942 containerd[1459]: time="2025-04-30T03:28:45.599816612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 03:28:45.600942 containerd[1459]: time="2025-04-30T03:28:45.599940585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:28:45.600942 containerd[1459]: time="2025-04-30T03:28:45.600224142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:28:45.600942 containerd[1459]: time="2025-04-30T03:28:45.600840625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:28:45.600942 containerd[1459]: time="2025-04-30T03:28:45.600881632Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 03:28:45.601185 containerd[1459]: time="2025-04-30T03:28:45.600996538Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 03:28:45.601272 containerd[1459]: time="2025-04-30T03:28:45.601246760Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 03:28:45.611084 containerd[1459]: time="2025-04-30T03:28:45.609351776Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 03:28:45.611084 containerd[1459]: time="2025-04-30T03:28:45.609455908Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 03:28:45.611084 containerd[1459]: time="2025-04-30T03:28:45.609806955Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 03:28:45.611084 containerd[1459]: time="2025-04-30T03:28:45.609863707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 03:28:45.611084 containerd[1459]: time="2025-04-30T03:28:45.609894497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 03:28:45.611084 containerd[1459]: time="2025-04-30T03:28:45.610108170Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 03:28:45.611340 containerd[1459]: time="2025-04-30T03:28:45.611245035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 03:28:45.611478 containerd[1459]: time="2025-04-30T03:28:45.611455014Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 03:28:45.611510 containerd[1459]: time="2025-04-30T03:28:45.611484166Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 03:28:45.611510 containerd[1459]: time="2025-04-30T03:28:45.611498560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 03:28:45.611570 containerd[1459]: time="2025-04-30T03:28:45.611512033Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 03:28:45.611570 containerd[1459]: time="2025-04-30T03:28:45.611525137Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 03:28:45.611570 containerd[1459]: time="2025-04-30T03:28:45.611539492Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 03:28:45.611570 containerd[1459]: time="2025-04-30T03:28:45.611554882Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 03:28:45.611570 containerd[1459]: time="2025-04-30T03:28:45.611569614Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 03:28:45.611688 containerd[1459]: time="2025-04-30T03:28:45.611582506Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 03:28:45.611688 containerd[1459]: time="2025-04-30T03:28:45.611594988Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 03:28:45.611688 containerd[1459]: time="2025-04-30T03:28:45.611610139Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 03:28:45.611688 containerd[1459]: time="2025-04-30T03:28:45.611638202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611688 containerd[1459]: time="2025-04-30T03:28:45.611652690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611688 containerd[1459]: time="2025-04-30T03:28:45.611665198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611688 containerd[1459]: time="2025-04-30T03:28:45.611678870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611690310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611702730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611722557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611737912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611751146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611765576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611778200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611789184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611818154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611836016Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 03:28:45.611868 containerd[1459]: time="2025-04-30T03:28:45.611857258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.612543 containerd[1459]: time="2025-04-30T03:28:45.612513279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.612591 containerd[1459]: time="2025-04-30T03:28:45.612542873Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 03:28:45.613156 containerd[1459]: time="2025-04-30T03:28:45.613045171Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 03:28:45.613247 containerd[1459]: time="2025-04-30T03:28:45.613186207Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 03:28:45.613247 containerd[1459]: time="2025-04-30T03:28:45.613204481Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 03:28:45.613247 containerd[1459]: time="2025-04-30T03:28:45.613218555Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 03:28:45.613247 containerd[1459]: time="2025-04-30T03:28:45.613229487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 03:28:45.613247 containerd[1459]: time="2025-04-30T03:28:45.613242792Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 03:28:45.613340 containerd[1459]: time="2025-04-30T03:28:45.613253776Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 03:28:45.613340 containerd[1459]: time="2025-04-30T03:28:45.613264960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Apr 30 03:28:45.613674 containerd[1459]: time="2025-04-30T03:28:45.613604771Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:28:45.613674 containerd[1459]: time="2025-04-30T03:28:45.613673495Z" level=info msg="Connect containerd service" Apr 30 03:28:45.613855 containerd[1459]: time="2025-04-30T03:28:45.613717541Z" level=info msg="using legacy CRI server" Apr 30 03:28:45.613855 containerd[1459]: time="2025-04-30T03:28:45.613725011Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:28:45.613929 containerd[1459]: time="2025-04-30T03:28:45.613905937Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:28:45.616520 containerd[1459]: time="2025-04-30T03:28:45.616473477Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:28:45.616659 containerd[1459]: time="2025-04-30T03:28:45.616617883Z" level=info msg="Start subscribing containerd event" Apr 30 03:28:45.616718 containerd[1459]: time="2025-04-30T03:28:45.616696664Z" level=info msg="Start recovering state" Apr 30 03:28:45.616970 containerd[1459]: time="2025-04-30T03:28:45.616793191Z" level=info msg="Start event monitor" Apr 30 03:28:45.616970 containerd[1459]: time="2025-04-30T03:28:45.616827995Z" level=info msg="Start 
snapshots syncer" Apr 30 03:28:45.616970 containerd[1459]: time="2025-04-30T03:28:45.616842720Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:28:45.616970 containerd[1459]: time="2025-04-30T03:28:45.616853653Z" level=info msg="Start streaming server" Apr 30 03:28:45.619131 containerd[1459]: time="2025-04-30T03:28:45.618905421Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:28:45.619131 containerd[1459]: time="2025-04-30T03:28:45.618990481Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:28:45.620132 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:28:45.620536 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:28:45.623702 containerd[1459]: time="2025-04-30T03:28:45.621151617Z" level=info msg="containerd successfully booted in 0.104985s" Apr 30 03:28:45.622378 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:28:45.633467 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:28:45.654528 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:28:45.663555 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:28:45.668401 systemd-networkd[1368]: eth0: Gained IPv6LL Apr 30 03:28:45.670309 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Apr 30 03:28:45.670534 systemd-networkd[1368]: eth1: Gained IPv6LL Apr 30 03:28:45.673218 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:28:45.673468 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Apr 30 03:28:45.676629 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:28:45.678469 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:28:45.681848 systemd[1]: Reached target network-online.target - Network is Online. 
Apr 30 03:28:45.693602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:28:45.697545 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 03:28:45.737199 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 03:28:45.983928 tar[1449]: linux-amd64/LICENSE
Apr 30 03:28:45.983928 tar[1449]: linux-amd64/README.md
Apr 30 03:28:45.998693 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 03:28:46.801783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:28:46.803285 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 03:28:46.805392 systemd[1]: Startup finished in 1.225s (kernel) + 6.588s (initrd) + 6.509s (userspace) = 14.323s.
Apr 30 03:28:46.814707 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:28:47.610504 kubelet[1557]: E0430 03:28:47.610382 1557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:28:47.613754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:28:47.613992 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:28:47.614364 systemd[1]: kubelet.service: Consumed 1.519s CPU time.
Apr 30 03:28:47.636472 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 03:28:47.651648 systemd[1]: Started sshd@0-164.92.87.160:22-139.178.89.65:52804.service - OpenSSH per-connection server daemon (139.178.89.65:52804).
Apr 30 03:28:47.737075 sshd[1571]: Accepted publickey for core from 139.178.89.65 port 52804 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:28:47.740615 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:47.754865 systemd-logind[1444]: New session 1 of user core.
Apr 30 03:28:47.756923 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 03:28:47.772877 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 03:28:47.794382 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 03:28:47.803148 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 03:28:47.812199 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 03:28:47.986469 systemd[1575]: Queued start job for default target default.target.
Apr 30 03:28:47.998014 systemd[1575]: Created slice app.slice - User Application Slice.
Apr 30 03:28:47.998593 systemd[1575]: Reached target paths.target - Paths.
Apr 30 03:28:47.998619 systemd[1575]: Reached target timers.target - Timers.
Apr 30 03:28:48.000844 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 03:28:48.024513 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 03:28:48.024720 systemd[1575]: Reached target sockets.target - Sockets.
Apr 30 03:28:48.024744 systemd[1575]: Reached target basic.target - Basic System.
Apr 30 03:28:48.024804 systemd[1575]: Reached target default.target - Main User Target.
Apr 30 03:28:48.024848 systemd[1575]: Startup finished in 202ms.
Apr 30 03:28:48.025361 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 03:28:48.034746 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 03:28:48.111215 systemd[1]: Started sshd@1-164.92.87.160:22-139.178.89.65:52814.service - OpenSSH per-connection server daemon (139.178.89.65:52814).
Apr 30 03:28:48.168327 sshd[1586]: Accepted publickey for core from 139.178.89.65 port 52814 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:28:48.170446 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:48.177338 systemd-logind[1444]: New session 2 of user core.
Apr 30 03:28:48.188866 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 03:28:48.251931 sshd[1586]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:48.261275 systemd[1]: sshd@1-164.92.87.160:22-139.178.89.65:52814.service: Deactivated successfully.
Apr 30 03:28:48.263535 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 03:28:48.265675 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
Apr 30 03:28:48.277611 systemd[1]: Started sshd@2-164.92.87.160:22-139.178.89.65:52816.service - OpenSSH per-connection server daemon (139.178.89.65:52816).
Apr 30 03:28:48.280518 systemd-logind[1444]: Removed session 2.
Apr 30 03:28:48.325971 sshd[1593]: Accepted publickey for core from 139.178.89.65 port 52816 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:28:48.327970 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:48.334335 systemd-logind[1444]: New session 3 of user core.
Apr 30 03:28:48.351427 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 03:28:48.410462 sshd[1593]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:48.422333 systemd[1]: sshd@2-164.92.87.160:22-139.178.89.65:52816.service: Deactivated successfully.
Apr 30 03:28:48.424174 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 03:28:48.426277 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit.
Apr 30 03:28:48.430532 systemd[1]: Started sshd@3-164.92.87.160:22-139.178.89.65:52822.service - OpenSSH per-connection server daemon (139.178.89.65:52822).
Apr 30 03:28:48.432467 systemd-logind[1444]: Removed session 3.
Apr 30 03:28:48.492984 sshd[1600]: Accepted publickey for core from 139.178.89.65 port 52822 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:28:48.495244 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:48.501827 systemd-logind[1444]: New session 4 of user core.
Apr 30 03:28:48.509641 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 03:28:48.578098 sshd[1600]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:48.590556 systemd[1]: sshd@3-164.92.87.160:22-139.178.89.65:52822.service: Deactivated successfully.
Apr 30 03:28:48.592506 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 03:28:48.595332 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit.
Apr 30 03:28:48.601520 systemd[1]: Started sshd@4-164.92.87.160:22-139.178.89.65:52838.service - OpenSSH per-connection server daemon (139.178.89.65:52838).
Apr 30 03:28:48.603833 systemd-logind[1444]: Removed session 4.
Apr 30 03:28:48.649552 sshd[1607]: Accepted publickey for core from 139.178.89.65 port 52838 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:28:48.651754 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:48.661366 systemd-logind[1444]: New session 5 of user core.
Apr 30 03:28:48.664804 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 03:28:48.739675 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 03:28:48.740232 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:48.754175 sudo[1610]: pam_unix(sudo:session): session closed for user root
Apr 30 03:28:48.758578 sshd[1607]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:48.772558 systemd[1]: sshd@4-164.92.87.160:22-139.178.89.65:52838.service: Deactivated successfully.
Apr 30 03:28:48.775327 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 03:28:48.777232 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit.
Apr 30 03:28:48.781569 systemd[1]: Started sshd@5-164.92.87.160:22-139.178.89.65:52840.service - OpenSSH per-connection server daemon (139.178.89.65:52840).
Apr 30 03:28:48.783548 systemd-logind[1444]: Removed session 5.
Apr 30 03:28:48.836422 sshd[1615]: Accepted publickey for core from 139.178.89.65 port 52840 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:28:48.838681 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:48.844117 systemd-logind[1444]: New session 6 of user core.
Apr 30 03:28:48.855442 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 03:28:48.918647 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 03:28:48.919425 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:48.924041 sudo[1619]: pam_unix(sudo:session): session closed for user root
Apr 30 03:28:48.931414 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 03:28:48.931799 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:48.952532 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 03:28:48.955276 auditctl[1622]: No rules
Apr 30 03:28:48.956701 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 03:28:48.957011 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 03:28:48.959381 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:28:49.000145 augenrules[1640]: No rules
Apr 30 03:28:49.002514 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:28:49.004584 sudo[1618]: pam_unix(sudo:session): session closed for user root
Apr 30 03:28:49.010376 sshd[1615]: pam_unix(sshd:session): session closed for user core
Apr 30 03:28:49.020603 systemd[1]: sshd@5-164.92.87.160:22-139.178.89.65:52840.service: Deactivated successfully.
Apr 30 03:28:49.023631 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 03:28:49.026924 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
Apr 30 03:28:49.036532 systemd[1]: Started sshd@6-164.92.87.160:22-139.178.89.65:52842.service - OpenSSH per-connection server daemon (139.178.89.65:52842).
Apr 30 03:28:49.037979 systemd-logind[1444]: Removed session 6.
Apr 30 03:28:49.089217 sshd[1648]: Accepted publickey for core from 139.178.89.65 port 52842 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY
Apr 30 03:28:49.091043 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:28:49.098493 systemd-logind[1444]: New session 7 of user core.
Apr 30 03:28:49.109438 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 03:28:49.172347 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 03:28:49.172721 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:28:49.704819 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 03:28:49.705381 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 03:28:50.331185 dockerd[1667]: time="2025-04-30T03:28:50.331099448Z" level=info msg="Starting up"
Apr 30 03:28:50.605882 dockerd[1667]: time="2025-04-30T03:28:50.605396925Z" level=info msg="Loading containers: start."
Apr 30 03:28:50.758527 kernel: Initializing XFRM netlink socket
Apr 30 03:28:50.795229 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection.
Apr 30 03:28:50.874959 systemd-networkd[1368]: docker0: Link UP
Apr 30 03:28:50.876611 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection.
Apr 30 03:28:50.902725 dockerd[1667]: time="2025-04-30T03:28:50.902565997Z" level=info msg="Loading containers: done."
Apr 30 03:28:50.927671 dockerd[1667]: time="2025-04-30T03:28:50.927556282Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 03:28:50.927935 dockerd[1667]: time="2025-04-30T03:28:50.927786136Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 03:28:50.928098 dockerd[1667]: time="2025-04-30T03:28:50.928016794Z" level=info msg="Daemon has completed initialization"
Apr 30 03:28:50.987497 dockerd[1667]: time="2025-04-30T03:28:50.987313988Z" level=info msg="API listen on /run/docker.sock"
Apr 30 03:28:50.988032 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 03:28:52.055250 containerd[1459]: time="2025-04-30T03:28:52.055144164Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 03:28:52.761761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987191385.mount: Deactivated successfully.
Apr 30 03:28:54.351886 containerd[1459]: time="2025-04-30T03:28:54.351825675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.354921 containerd[1459]: time="2025-04-30T03:28:54.354787041Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
Apr 30 03:28:54.358201 containerd[1459]: time="2025-04-30T03:28:54.358139055Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.360301 containerd[1459]: time="2025-04-30T03:28:54.360238631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:54.361701 containerd[1459]: time="2025-04-30T03:28:54.361658672Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.306462858s"
Apr 30 03:28:54.361811 containerd[1459]: time="2025-04-30T03:28:54.361734718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
Apr 30 03:28:54.394532 containerd[1459]: time="2025-04-30T03:28:54.394479322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 03:28:56.164033 containerd[1459]: time="2025-04-30T03:28:56.163951063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:56.165973 containerd[1459]: time="2025-04-30T03:28:56.165897906Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
Apr 30 03:28:56.167265 containerd[1459]: time="2025-04-30T03:28:56.166505606Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:56.172798 containerd[1459]: time="2025-04-30T03:28:56.172725083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:56.175014 containerd[1459]: time="2025-04-30T03:28:56.174951212Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.780418244s"
Apr 30 03:28:56.175255 containerd[1459]: time="2025-04-30T03:28:56.175230031Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
Apr 30 03:28:56.225359 containerd[1459]: time="2025-04-30T03:28:56.225297224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 03:28:57.477095 containerd[1459]: time="2025-04-30T03:28:57.475081795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:57.477095 containerd[1459]: time="2025-04-30T03:28:57.476446694Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
Apr 30 03:28:57.477095 containerd[1459]: time="2025-04-30T03:28:57.476908705Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:57.481665 containerd[1459]: time="2025-04-30T03:28:57.481601089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:57.483868 containerd[1459]: time="2025-04-30T03:28:57.483800372Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.258030044s"
Apr 30 03:28:57.484083 containerd[1459]: time="2025-04-30T03:28:57.484043671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
Apr 30 03:28:57.522657 containerd[1459]: time="2025-04-30T03:28:57.522609005Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 03:28:57.658405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:28:57.669571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:28:57.810236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:28:57.827732 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:28:57.899779 kubelet[1903]: E0430 03:28:57.899685 1903 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:28:57.904809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:28:57.905023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:28:58.735800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826847016.mount: Deactivated successfully.
Apr 30 03:28:59.324577 containerd[1459]: time="2025-04-30T03:28:59.324450890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:59.325908 containerd[1459]: time="2025-04-30T03:28:59.325833809Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
Apr 30 03:28:59.326871 containerd[1459]: time="2025-04-30T03:28:59.326798051Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:59.329512 containerd[1459]: time="2025-04-30T03:28:59.329137716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:28:59.329989 containerd[1459]: time="2025-04-30T03:28:59.329959565Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.807015878s"
Apr 30 03:28:59.330067 containerd[1459]: time="2025-04-30T03:28:59.329993974Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
Apr 30 03:28:59.360867 containerd[1459]: time="2025-04-30T03:28:59.359894987Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 03:28:59.363609 systemd-resolved[1322]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Apr 30 03:28:59.954596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912430233.mount: Deactivated successfully.
Apr 30 03:29:01.063576 containerd[1459]: time="2025-04-30T03:29:01.063494762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:01.065610 containerd[1459]: time="2025-04-30T03:29:01.065489475Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Apr 30 03:29:01.067105 containerd[1459]: time="2025-04-30T03:29:01.066473172Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:01.072036 containerd[1459]: time="2025-04-30T03:29:01.071965441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:01.073956 containerd[1459]: time="2025-04-30T03:29:01.073872088Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.712711217s"
Apr 30 03:29:01.073956 containerd[1459]: time="2025-04-30T03:29:01.073941420Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 03:29:01.120627 containerd[1459]: time="2025-04-30T03:29:01.120486460Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 03:29:01.641520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611781285.mount: Deactivated successfully.
Apr 30 03:29:01.650225 containerd[1459]: time="2025-04-30T03:29:01.650140070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:01.651665 containerd[1459]: time="2025-04-30T03:29:01.651578036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Apr 30 03:29:01.655094 containerd[1459]: time="2025-04-30T03:29:01.652977877Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:01.660373 containerd[1459]: time="2025-04-30T03:29:01.660290535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:01.667866 containerd[1459]: time="2025-04-30T03:29:01.667797395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 547.248007ms"
Apr 30 03:29:01.668188 containerd[1459]: time="2025-04-30T03:29:01.668161834Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Apr 30 03:29:01.718966 containerd[1459]: time="2025-04-30T03:29:01.718826590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 03:29:02.408372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4193926549.mount: Deactivated successfully.
Apr 30 03:29:02.436373 systemd-resolved[1322]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Apr 30 03:29:05.625870 containerd[1459]: time="2025-04-30T03:29:05.623940474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:05.627567 containerd[1459]: time="2025-04-30T03:29:05.627322040Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Apr 30 03:29:05.630092 containerd[1459]: time="2025-04-30T03:29:05.628813247Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:05.634198 containerd[1459]: time="2025-04-30T03:29:05.634124670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:29:05.636125 containerd[1459]: time="2025-04-30T03:29:05.636035157Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id 
\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.917156134s" Apr 30 03:29:05.636418 containerd[1459]: time="2025-04-30T03:29:05.636381144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:29:07.908360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 03:29:07.918244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:08.096464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:08.105129 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:29:08.183633 kubelet[2093]: E0430 03:29:08.183034 2093 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:29:08.186810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:29:08.187026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:29:09.340100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:09.348692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:09.381682 systemd[1]: Reloading requested from client PID 2107 ('systemctl') (unit session-7.scope)... Apr 30 03:29:09.381920 systemd[1]: Reloading... Apr 30 03:29:09.540121 zram_generator::config[2146]: No configuration found. 
Apr 30 03:29:09.690010 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:29:09.773103 systemd[1]: Reloading finished in 390 ms.
Apr 30 03:29:09.838677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:09.843829 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:29:09.848092 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 03:29:09.848836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:09.855706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:29:10.002117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:29:10.014012 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:29:10.071142 kubelet[2202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:29:10.071586 kubelet[2202]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 03:29:10.071637 kubelet[2202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:29:10.074160 kubelet[2202]: I0430 03:29:10.074027 2202 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 03:29:10.509582 kubelet[2202]: I0430 03:29:10.509515 2202 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 03:29:10.509582 kubelet[2202]: I0430 03:29:10.509562 2202 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 03:29:10.509860 kubelet[2202]: I0430 03:29:10.509835 2202 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 03:29:10.534862 kubelet[2202]: E0430 03:29:10.534298 2202 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.92.87.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:10.534862 kubelet[2202]: I0430 03:29:10.534401 2202 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:29:10.554964 kubelet[2202]: I0430 03:29:10.554914 2202 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 03:29:10.555647 kubelet[2202]: I0430 03:29:10.555598 2202 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 03:29:10.556013 kubelet[2202]: I0430 03:29:10.555758 2202 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-32b52f0300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 03:29:10.557090 kubelet[2202]: I0430 03:29:10.556977 2202 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 03:29:10.557090 kubelet[2202]: I0430 03:29:10.557007 2202 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 03:29:10.557532 kubelet[2202]: I0430 03:29:10.557369 2202 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:29:10.558471 kubelet[2202]: I0430 03:29:10.558434 2202 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 03:29:10.558869 kubelet[2202]: I0430 03:29:10.558713 2202 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 03:29:10.558869 kubelet[2202]: I0430 03:29:10.558746 2202 kubelet.go:312] "Adding apiserver pod source"
Apr 30 03:29:10.558869 kubelet[2202]: I0430 03:29:10.558770 2202 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 03:29:10.560229 kubelet[2202]: W0430 03:29:10.560119 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.87.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-32b52f0300&limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:10.560229 kubelet[2202]: E0430 03:29:10.560202 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.87.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-32b52f0300&limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:10.564234 kubelet[2202]: W0430 03:29:10.563994 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.87.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:10.564234 kubelet[2202]: E0430 03:29:10.564087 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.87.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:10.565231 kubelet[2202]: I0430 03:29:10.564967 2202 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 03:29:10.567615 kubelet[2202]: I0430 03:29:10.566929 2202 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 03:29:10.567615 kubelet[2202]: W0430 03:29:10.567036 2202 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 03:29:10.567900 kubelet[2202]: I0430 03:29:10.567885 2202 server.go:1264] "Started kubelet"
Apr 30 03:29:10.573043 kubelet[2202]: I0430 03:29:10.572387 2202 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 03:29:10.576642 kubelet[2202]: I0430 03:29:10.574907 2202 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 03:29:10.576642 kubelet[2202]: I0430 03:29:10.576110 2202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 03:29:10.576642 kubelet[2202]: I0430 03:29:10.576518 2202 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 03:29:10.579017 kubelet[2202]: I0430 03:29:10.578571 2202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 03:29:10.580706 kubelet[2202]: E0430 03:29:10.577910 2202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.87.160:6443/api/v1/namespaces/default/events\": dial tcp 164.92.87.160:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-32b52f0300.183afaf874338795 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-32b52f0300,UID:ci-4081.3.3-a-32b52f0300,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-32b52f0300,},FirstTimestamp:2025-04-30 03:29:10.567856021 +0000 UTC m=+0.548048315,LastTimestamp:2025-04-30 03:29:10.567856021 +0000 UTC m=+0.548048315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-32b52f0300,}"
Apr 30 03:29:10.588400 kubelet[2202]: E0430 03:29:10.588342 2202 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-32b52f0300\" not found"
Apr 30 03:29:10.590192 kubelet[2202]: I0430 03:29:10.590164 2202 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 03:29:10.590442 kubelet[2202]: I0430 03:29:10.590429 2202 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 03:29:10.590502 kubelet[2202]: E0430 03:29:10.590255 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.87.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-32b52f0300?timeout=10s\": dial tcp 164.92.87.160:6443: connect: connection refused" interval="200ms"
Apr 30 03:29:10.590609 kubelet[2202]: I0430 03:29:10.590596 2202 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 03:29:10.590710 kubelet[2202]: W0430 03:29:10.590658 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.87.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:10.590757 kubelet[2202]: E0430 03:29:10.590720 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.87.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:10.591613 kubelet[2202]: I0430 03:29:10.591590 2202 factory.go:221] Registration of the systemd container factory successfully
Apr 30 03:29:10.591702 kubelet[2202]: I0430 03:29:10.591691 2202 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 03:29:10.592821 kubelet[2202]: E0430 03:29:10.592376 2202 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 03:29:10.593622 kubelet[2202]: I0430 03:29:10.593600 2202 factory.go:221] Registration of the containerd container factory successfully
Apr 30 03:29:10.622326 kubelet[2202]: I0430 03:29:10.621772 2202 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 03:29:10.622326 kubelet[2202]: I0430 03:29:10.621799 2202 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 03:29:10.622326 kubelet[2202]: I0430 03:29:10.621827 2202 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:29:10.629014 kubelet[2202]: I0430 03:29:10.628901 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 03:29:10.630893 kubelet[2202]: I0430 03:29:10.630848 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 03:29:10.630893 kubelet[2202]: I0430 03:29:10.630903 2202 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 03:29:10.631166 kubelet[2202]: I0430 03:29:10.630939 2202 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 03:29:10.631166 kubelet[2202]: E0430 03:29:10.631033 2202 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 03:29:10.632079 kubelet[2202]: I0430 03:29:10.631318 2202 policy_none.go:49] "None policy: Start"
Apr 30 03:29:10.639316 kubelet[2202]: W0430 03:29:10.639231 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.87.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:10.639316 kubelet[2202]: E0430 03:29:10.639322 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.87.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:10.641162 kubelet[2202]: I0430 03:29:10.640081 2202 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 03:29:10.641162 kubelet[2202]: I0430 03:29:10.640119 2202 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 03:29:10.652288 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 03:29:10.666355 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 30 03:29:10.670268 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 30 03:29:10.683825 kubelet[2202]: I0430 03:29:10.682555 2202 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 03:29:10.683825 kubelet[2202]: I0430 03:29:10.682792 2202 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 03:29:10.683825 kubelet[2202]: I0430 03:29:10.682931 2202 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 03:29:10.687163 kubelet[2202]: E0430 03:29:10.687124 2202 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-a-32b52f0300\" not found"
Apr 30 03:29:10.691856 kubelet[2202]: I0430 03:29:10.691821 2202 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.692817 kubelet[2202]: E0430 03:29:10.692764 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.87.160:6443/api/v1/nodes\": dial tcp 164.92.87.160:6443: connect: connection refused" node="ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.732169 kubelet[2202]: I0430 03:29:10.732026 2202 topology_manager.go:215] "Topology Admit Handler" podUID="8cc4a85a0fa87ae1cad23b58cbda0244" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.733695 kubelet[2202]: I0430 03:29:10.733437 2202 topology_manager.go:215] "Topology Admit Handler" podUID="435ef49768032014a72d38adcdf993de" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.735742 kubelet[2202]: I0430 03:29:10.735229 2202 topology_manager.go:215] "Topology Admit Handler" podUID="0b63497f7deb34b109c4496d822a99ec" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.743173 systemd[1]: Created slice kubepods-burstable-pod8cc4a85a0fa87ae1cad23b58cbda0244.slice - libcontainer container kubepods-burstable-pod8cc4a85a0fa87ae1cad23b58cbda0244.slice.
Apr 30 03:29:10.761153 systemd[1]: Created slice kubepods-burstable-pod435ef49768032014a72d38adcdf993de.slice - libcontainer container kubepods-burstable-pod435ef49768032014a72d38adcdf993de.slice.
Apr 30 03:29:10.767763 systemd[1]: Created slice kubepods-burstable-pod0b63497f7deb34b109c4496d822a99ec.slice - libcontainer container kubepods-burstable-pod0b63497f7deb34b109c4496d822a99ec.slice.
Apr 30 03:29:10.791932 kubelet[2202]: E0430 03:29:10.791856 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.87.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-32b52f0300?timeout=10s\": dial tcp 164.92.87.160:6443: connect: connection refused" interval="400ms"
Apr 30 03:29:10.891520 kubelet[2202]: I0430 03:29:10.891417 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8cc4a85a0fa87ae1cad23b58cbda0244-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-32b52f0300\" (UID: \"8cc4a85a0fa87ae1cad23b58cbda0244\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.891520 kubelet[2202]: I0430 03:29:10.891484 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/435ef49768032014a72d38adcdf993de-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-32b52f0300\" (UID: \"435ef49768032014a72d38adcdf993de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.891520 kubelet[2202]: I0430 03:29:10.891506 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b63497f7deb34b109c4496d822a99ec-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-32b52f0300\" (UID: \"0b63497f7deb34b109c4496d822a99ec\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.891520 kubelet[2202]: I0430 03:29:10.891522 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8cc4a85a0fa87ae1cad23b58cbda0244-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-32b52f0300\" (UID: \"8cc4a85a0fa87ae1cad23b58cbda0244\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.891520 kubelet[2202]: I0430 03:29:10.891540 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8cc4a85a0fa87ae1cad23b58cbda0244-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-32b52f0300\" (UID: \"8cc4a85a0fa87ae1cad23b58cbda0244\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.892028 kubelet[2202]: I0430 03:29:10.891559 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/435ef49768032014a72d38adcdf993de-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-32b52f0300\" (UID: \"435ef49768032014a72d38adcdf993de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.892028 kubelet[2202]: I0430 03:29:10.891586 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/435ef49768032014a72d38adcdf993de-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-32b52f0300\" (UID: \"435ef49768032014a72d38adcdf993de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.892028 kubelet[2202]: I0430 03:29:10.891625 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/435ef49768032014a72d38adcdf993de-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-32b52f0300\" (UID: \"435ef49768032014a72d38adcdf993de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.892028 kubelet[2202]: I0430 03:29:10.891653 2202 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/435ef49768032014a72d38adcdf993de-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-32b52f0300\" (UID: \"435ef49768032014a72d38adcdf993de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.895365 kubelet[2202]: I0430 03:29:10.895327 2202 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:10.896112 kubelet[2202]: E0430 03:29:10.896074 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.87.160:6443/api/v1/nodes\": dial tcp 164.92.87.160:6443: connect: connection refused" node="ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:11.053613 kubelet[2202]: E0430 03:29:11.053458 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:29:11.054410 containerd[1459]: time="2025-04-30T03:29:11.054322838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-32b52f0300,Uid:8cc4a85a0fa87ae1cad23b58cbda0244,Namespace:kube-system,Attempt:0,}"
Apr 30 03:29:11.057008 systemd-resolved[1322]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Apr 30 03:29:11.065362 kubelet[2202]: E0430 03:29:11.065318 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:29:11.070376 containerd[1459]: time="2025-04-30T03:29:11.070071056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-32b52f0300,Uid:435ef49768032014a72d38adcdf993de,Namespace:kube-system,Attempt:0,}"
Apr 30 03:29:11.070667 kubelet[2202]: E0430 03:29:11.070427 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 03:29:11.071511 containerd[1459]: time="2025-04-30T03:29:11.071230403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-32b52f0300,Uid:0b63497f7deb34b109c4496d822a99ec,Namespace:kube-system,Attempt:0,}"
Apr 30 03:29:11.194090 kubelet[2202]: E0430 03:29:11.193503 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.87.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-32b52f0300?timeout=10s\": dial tcp 164.92.87.160:6443: connect: connection refused" interval="800ms"
Apr 30 03:29:11.297639 kubelet[2202]: I0430 03:29:11.297539 2202 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:11.298123 kubelet[2202]: E0430 03:29:11.298087 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.87.160:6443/api/v1/nodes\": dial tcp 164.92.87.160:6443: connect: connection refused" node="ci-4081.3.3-a-32b52f0300"
Apr 30 03:29:11.401811 kubelet[2202]: W0430 03:29:11.401601 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.87.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:11.401811 kubelet[2202]: E0430 03:29:11.401697 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.87.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:11.573625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294603245.mount: Deactivated successfully.
Apr 30 03:29:11.581836 containerd[1459]: time="2025-04-30T03:29:11.581750730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:29:11.584605 containerd[1459]: time="2025-04-30T03:29:11.584513751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 03:29:11.585866 containerd[1459]: time="2025-04-30T03:29:11.585787783Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:29:11.588499 containerd[1459]: time="2025-04-30T03:29:11.587521782Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:29:11.589523 containerd[1459]: time="2025-04-30T03:29:11.589395107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 03:29:11.590412 containerd[1459]: time="2025-04-30T03:29:11.590174606Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:29:11.593631 containerd[1459]: time="2025-04-30T03:29:11.593426009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:29:11.593631 containerd[1459]: time="2025-04-30T03:29:11.593574627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Apr 30 03:29:11.595089 containerd[1459]: time="2025-04-30T03:29:11.594345224Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.17579ms"
Apr 30 03:29:11.598271 containerd[1459]: time="2025-04-30T03:29:11.598224912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.933662ms"
Apr 30 03:29:11.600393 containerd[1459]: time="2025-04-30T03:29:11.600348122Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 545.918981ms"
Apr 30 03:29:11.759468 containerd[1459]: time="2025-04-30T03:29:11.759301723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:11.759468 containerd[1459]: time="2025-04-30T03:29:11.759365552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:11.759468 containerd[1459]: time="2025-04-30T03:29:11.759376635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:11.760379 containerd[1459]: time="2025-04-30T03:29:11.759567229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:11.762017 containerd[1459]: time="2025-04-30T03:29:11.761782125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:11.762017 containerd[1459]: time="2025-04-30T03:29:11.761929938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:11.762017 containerd[1459]: time="2025-04-30T03:29:11.761982568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:11.762399 containerd[1459]: time="2025-04-30T03:29:11.762203552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:11.764996 containerd[1459]: time="2025-04-30T03:29:11.764866942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:29:11.766656 containerd[1459]: time="2025-04-30T03:29:11.766216459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:29:11.766656 containerd[1459]: time="2025-04-30T03:29:11.766244798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:11.766656 containerd[1459]: time="2025-04-30T03:29:11.766341255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:29:11.787598 kubelet[2202]: W0430 03:29:11.787500 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.87.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-32b52f0300&limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:11.787598 kubelet[2202]: E0430 03:29:11.787569 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.87.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-32b52f0300&limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused
Apr 30 03:29:11.803391 systemd[1]: Started cri-containerd-421563ca17c9a93b2bef69ed4654b5343a4aa72ddeea611741e07315096095a2.scope - libcontainer container 421563ca17c9a93b2bef69ed4654b5343a4aa72ddeea611741e07315096095a2.
Apr 30 03:29:11.805529 systemd[1]: Started cri-containerd-9d8317f99dbd9a3066124556b7935ef008ee3592b8d204b44a0ae512e62a34fe.scope - libcontainer container 9d8317f99dbd9a3066124556b7935ef008ee3592b8d204b44a0ae512e62a34fe.
Apr 30 03:29:11.810815 systemd[1]: Started cri-containerd-0514a3bbbb558ee738ac98200eeb53d06118b75268f57c36d8dc5aa1a33184e0.scope - libcontainer container 0514a3bbbb558ee738ac98200eeb53d06118b75268f57c36d8dc5aa1a33184e0.
Apr 30 03:29:11.905092 containerd[1459]: time="2025-04-30T03:29:11.900627647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-32b52f0300,Uid:8cc4a85a0fa87ae1cad23b58cbda0244,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d8317f99dbd9a3066124556b7935ef008ee3592b8d204b44a0ae512e62a34fe\"" Apr 30 03:29:11.905303 kubelet[2202]: E0430 03:29:11.902689 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:11.915118 containerd[1459]: time="2025-04-30T03:29:11.914988493Z" level=info msg="CreateContainer within sandbox \"9d8317f99dbd9a3066124556b7935ef008ee3592b8d204b44a0ae512e62a34fe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:29:11.928498 containerd[1459]: time="2025-04-30T03:29:11.928429877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-32b52f0300,Uid:435ef49768032014a72d38adcdf993de,Namespace:kube-system,Attempt:0,} returns sandbox id \"421563ca17c9a93b2bef69ed4654b5343a4aa72ddeea611741e07315096095a2\"" Apr 30 03:29:11.930136 kubelet[2202]: E0430 03:29:11.930094 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:11.934127 containerd[1459]: time="2025-04-30T03:29:11.934045700Z" level=info msg="CreateContainer within sandbox \"421563ca17c9a93b2bef69ed4654b5343a4aa72ddeea611741e07315096095a2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:29:11.941866 containerd[1459]: time="2025-04-30T03:29:11.941658706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-32b52f0300,Uid:0b63497f7deb34b109c4496d822a99ec,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"0514a3bbbb558ee738ac98200eeb53d06118b75268f57c36d8dc5aa1a33184e0\"" Apr 30 03:29:11.943615 kubelet[2202]: E0430 03:29:11.943319 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:11.945472 containerd[1459]: time="2025-04-30T03:29:11.945415816Z" level=info msg="CreateContainer within sandbox \"9d8317f99dbd9a3066124556b7935ef008ee3592b8d204b44a0ae512e62a34fe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"66692c6c19b012a2c10366742178a1c7bb0ccb5aba87b398642d9b59ee0e0a1b\"" Apr 30 03:29:11.946650 containerd[1459]: time="2025-04-30T03:29:11.946486036Z" level=info msg="StartContainer for \"66692c6c19b012a2c10366742178a1c7bb0ccb5aba87b398642d9b59ee0e0a1b\"" Apr 30 03:29:11.946986 containerd[1459]: time="2025-04-30T03:29:11.946944244Z" level=info msg="CreateContainer within sandbox \"0514a3bbbb558ee738ac98200eeb53d06118b75268f57c36d8dc5aa1a33184e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:29:11.958848 containerd[1459]: time="2025-04-30T03:29:11.958525812Z" level=info msg="CreateContainer within sandbox \"421563ca17c9a93b2bef69ed4654b5343a4aa72ddeea611741e07315096095a2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40005a4e0e5f3366c3c6442460d9bf152f1632cbbc5118c13d61c884cc00da2f\"" Apr 30 03:29:11.960086 containerd[1459]: time="2025-04-30T03:29:11.959892779Z" level=info msg="StartContainer for \"40005a4e0e5f3366c3c6442460d9bf152f1632cbbc5118c13d61c884cc00da2f\"" Apr 30 03:29:11.970612 containerd[1459]: time="2025-04-30T03:29:11.970152401Z" level=info msg="CreateContainer within sandbox \"0514a3bbbb558ee738ac98200eeb53d06118b75268f57c36d8dc5aa1a33184e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"41f6edce6f62488e47d47e77b822e94b9d1d22c4a6a26b1854e6fa88dcf314ba\"" Apr 30 
03:29:11.970804 containerd[1459]: time="2025-04-30T03:29:11.970748838Z" level=info msg="StartContainer for \"41f6edce6f62488e47d47e77b822e94b9d1d22c4a6a26b1854e6fa88dcf314ba\"" Apr 30 03:29:11.995758 kubelet[2202]: E0430 03:29:11.994722 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.87.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-32b52f0300?timeout=10s\": dial tcp 164.92.87.160:6443: connect: connection refused" interval="1.6s" Apr 30 03:29:12.002305 systemd[1]: Started cri-containerd-66692c6c19b012a2c10366742178a1c7bb0ccb5aba87b398642d9b59ee0e0a1b.scope - libcontainer container 66692c6c19b012a2c10366742178a1c7bb0ccb5aba87b398642d9b59ee0e0a1b. Apr 30 03:29:12.035266 kubelet[2202]: W0430 03:29:12.035030 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.87.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused Apr 30 03:29:12.035698 kubelet[2202]: E0430 03:29:12.035665 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.87.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused Apr 30 03:29:12.036680 systemd[1]: Started cri-containerd-40005a4e0e5f3366c3c6442460d9bf152f1632cbbc5118c13d61c884cc00da2f.scope - libcontainer container 40005a4e0e5f3366c3c6442460d9bf152f1632cbbc5118c13d61c884cc00da2f. Apr 30 03:29:12.050785 systemd[1]: Started cri-containerd-41f6edce6f62488e47d47e77b822e94b9d1d22c4a6a26b1854e6fa88dcf314ba.scope - libcontainer container 41f6edce6f62488e47d47e77b822e94b9d1d22c4a6a26b1854e6fa88dcf314ba. 
Apr 30 03:29:12.101242 containerd[1459]: time="2025-04-30T03:29:12.100698173Z" level=info msg="StartContainer for \"66692c6c19b012a2c10366742178a1c7bb0ccb5aba87b398642d9b59ee0e0a1b\" returns successfully" Apr 30 03:29:12.105864 kubelet[2202]: I0430 03:29:12.105307 2202 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-32b52f0300" Apr 30 03:29:12.106395 kubelet[2202]: E0430 03:29:12.106280 2202 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.87.160:6443/api/v1/nodes\": dial tcp 164.92.87.160:6443: connect: connection refused" node="ci-4081.3.3-a-32b52f0300" Apr 30 03:29:12.133638 containerd[1459]: time="2025-04-30T03:29:12.133574130Z" level=info msg="StartContainer for \"40005a4e0e5f3366c3c6442460d9bf152f1632cbbc5118c13d61c884cc00da2f\" returns successfully" Apr 30 03:29:12.163335 kubelet[2202]: W0430 03:29:12.163226 2202 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.87.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused Apr 30 03:29:12.163335 kubelet[2202]: E0430 03:29:12.163295 2202 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.87.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.87.160:6443: connect: connection refused Apr 30 03:29:12.187954 containerd[1459]: time="2025-04-30T03:29:12.187536646Z" level=info msg="StartContainer for \"41f6edce6f62488e47d47e77b822e94b9d1d22c4a6a26b1854e6fa88dcf314ba\" returns successfully" Apr 30 03:29:12.666177 kubelet[2202]: E0430 03:29:12.666130 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:12.672377 kubelet[2202]: 
E0430 03:29:12.672033 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:12.675640 kubelet[2202]: E0430 03:29:12.675531 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:13.679799 kubelet[2202]: E0430 03:29:13.679740 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:13.680696 kubelet[2202]: E0430 03:29:13.680400 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:13.709012 kubelet[2202]: I0430 03:29:13.708340 2202 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-32b52f0300" Apr 30 03:29:14.204250 kubelet[2202]: E0430 03:29:14.204185 2202 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-a-32b52f0300\" not found" node="ci-4081.3.3-a-32b52f0300" Apr 30 03:29:14.348564 kubelet[2202]: I0430 03:29:14.348314 2202 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-32b52f0300" Apr 30 03:29:14.373420 kubelet[2202]: E0430 03:29:14.373296 2202 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-32b52f0300\" not found" Apr 30 03:29:14.474107 kubelet[2202]: E0430 03:29:14.473909 2202 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-32b52f0300\" not found" Apr 30 03:29:14.574405 kubelet[2202]: E0430 03:29:14.574330 2202 kubelet_node_status.go:462] "Error getting the current 
node from lister" err="node \"ci-4081.3.3-a-32b52f0300\" not found" Apr 30 03:29:14.674558 kubelet[2202]: E0430 03:29:14.674487 2202 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-32b52f0300\" not found" Apr 30 03:29:14.774846 kubelet[2202]: E0430 03:29:14.774650 2202 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-32b52f0300\" not found" Apr 30 03:29:15.562251 kubelet[2202]: I0430 03:29:15.562202 2202 apiserver.go:52] "Watching apiserver" Apr 30 03:29:15.591537 kubelet[2202]: I0430 03:29:15.591487 2202 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:29:15.937423 kubelet[2202]: W0430 03:29:15.937295 2202 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:15.937875 kubelet[2202]: E0430 03:29:15.937815 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:16.454945 systemd[1]: Reloading requested from client PID 2475 ('systemctl') (unit session-7.scope)... Apr 30 03:29:16.454962 systemd[1]: Reloading... Apr 30 03:29:16.555128 zram_generator::config[2511]: No configuration found. Apr 30 03:29:16.684805 kubelet[2202]: E0430 03:29:16.684761 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:16.695988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:29:16.789027 systemd[1]: Reloading finished in 333 ms. 
Apr 30 03:29:16.845783 kubelet[2202]: I0430 03:29:16.845741 2202 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:29:16.846154 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:16.861276 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:29:16.861784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:16.861939 systemd[1]: kubelet.service: Consumed 1.013s CPU time, 110.1M memory peak, 0B memory swap peak. Apr 30 03:29:16.869814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:29:17.038443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:29:17.041401 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:29:17.136535 kubelet[2565]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:29:17.136535 kubelet[2565]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:29:17.136535 kubelet[2565]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:29:17.136988 kubelet[2565]: I0430 03:29:17.136559 2565 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:29:17.143099 kubelet[2565]: I0430 03:29:17.142971 2565 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:29:17.143099 kubelet[2565]: I0430 03:29:17.143004 2565 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:29:17.143344 kubelet[2565]: I0430 03:29:17.143245 2565 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:29:17.145037 kubelet[2565]: I0430 03:29:17.144973 2565 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:29:17.148108 kubelet[2565]: I0430 03:29:17.146741 2565 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:29:17.159947 kubelet[2565]: I0430 03:29:17.159884 2565 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:29:17.160392 kubelet[2565]: I0430 03:29:17.160349 2565 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:29:17.160763 kubelet[2565]: I0430 03:29:17.160436 2565 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-32b52f0300","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:29:17.160870 kubelet[2565]: I0430 03:29:17.160828 2565 topology_manager.go:138] "Creating topology manager with none policy" Apr 
30 03:29:17.160870 kubelet[2565]: I0430 03:29:17.160851 2565 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:29:17.160981 kubelet[2565]: I0430 03:29:17.160925 2565 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:17.161158 kubelet[2565]: I0430 03:29:17.161136 2565 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:29:17.161934 kubelet[2565]: I0430 03:29:17.161895 2565 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:29:17.162020 kubelet[2565]: I0430 03:29:17.162013 2565 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:29:17.162180 kubelet[2565]: I0430 03:29:17.162038 2565 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:29:17.166409 kubelet[2565]: I0430 03:29:17.166329 2565 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:29:17.171002 kubelet[2565]: I0430 03:29:17.170907 2565 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:29:17.174015 kubelet[2565]: I0430 03:29:17.173938 2565 server.go:1264] "Started kubelet" Apr 30 03:29:17.180710 kubelet[2565]: I0430 03:29:17.180630 2565 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:29:17.195592 kubelet[2565]: I0430 03:29:17.191903 2565 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:29:17.195592 kubelet[2565]: I0430 03:29:17.193230 2565 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:29:17.196380 kubelet[2565]: I0430 03:29:17.196270 2565 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:29:17.197441 kubelet[2565]: I0430 03:29:17.197424 2565 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:29:17.201723 kubelet[2565]: I0430 03:29:17.201688 2565 
volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:29:17.203809 kubelet[2565]: I0430 03:29:17.202919 2565 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:29:17.204007 kubelet[2565]: I0430 03:29:17.203993 2565 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:29:17.204755 kubelet[2565]: I0430 03:29:17.204730 2565 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:29:17.204899 kubelet[2565]: I0430 03:29:17.204875 2565 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:29:17.218764 kubelet[2565]: I0430 03:29:17.218727 2565 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:29:17.227668 kubelet[2565]: I0430 03:29:17.227598 2565 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:29:17.229318 kubelet[2565]: I0430 03:29:17.229206 2565 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:29:17.229318 kubelet[2565]: I0430 03:29:17.229308 2565 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:29:17.229543 kubelet[2565]: I0430 03:29:17.229344 2565 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:29:17.229543 kubelet[2565]: E0430 03:29:17.229396 2565 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:29:17.240106 kubelet[2565]: E0430 03:29:17.240044 2565 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:29:17.285273 kubelet[2565]: I0430 03:29:17.285229 2565 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:29:17.285273 kubelet[2565]: I0430 03:29:17.285260 2565 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:29:17.285273 kubelet[2565]: I0430 03:29:17.285285 2565 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:29:17.286018 kubelet[2565]: I0430 03:29:17.285462 2565 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:29:17.286018 kubelet[2565]: I0430 03:29:17.285472 2565 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:29:17.286018 kubelet[2565]: I0430 03:29:17.285494 2565 policy_none.go:49] "None policy: Start" Apr 30 03:29:17.287210 kubelet[2565]: I0430 03:29:17.286261 2565 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:29:17.287210 kubelet[2565]: I0430 03:29:17.286291 2565 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:29:17.287210 kubelet[2565]: I0430 03:29:17.286554 2565 state_mem.go:75] "Updated machine memory state" Apr 30 03:29:17.294360 kubelet[2565]: I0430 03:29:17.293077 2565 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:29:17.295221 kubelet[2565]: I0430 03:29:17.295046 2565 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:29:17.296150 kubelet[2565]: I0430 03:29:17.296108 2565 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:29:17.305752 kubelet[2565]: I0430 03:29:17.305712 2565 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.320517 kubelet[2565]: I0430 03:29:17.320344 2565 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.321064 
kubelet[2565]: I0430 03:29:17.320604 2565 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.329631 kubelet[2565]: I0430 03:29:17.329516 2565 topology_manager.go:215] "Topology Admit Handler" podUID="8cc4a85a0fa87ae1cad23b58cbda0244" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.331649 kubelet[2565]: I0430 03:29:17.331210 2565 topology_manager.go:215] "Topology Admit Handler" podUID="435ef49768032014a72d38adcdf993de" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.331649 kubelet[2565]: I0430 03:29:17.331539 2565 topology_manager.go:215] "Topology Admit Handler" podUID="0b63497f7deb34b109c4496d822a99ec" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.342747 kubelet[2565]: W0430 03:29:17.342696 2565 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:17.342945 kubelet[2565]: E0430 03:29:17.342781 2565 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-a-32b52f0300\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.347451 kubelet[2565]: W0430 03:29:17.346579 2565 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:17.348288 kubelet[2565]: W0430 03:29:17.348222 2565 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:17.404829 kubelet[2565]: I0430 03:29:17.404765 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/435ef49768032014a72d38adcdf993de-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-32b52f0300\" (UID: \"435ef49768032014a72d38adcdf993de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.404829 kubelet[2565]: I0430 03:29:17.404814 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/435ef49768032014a72d38adcdf993de-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-32b52f0300\" (UID: \"435ef49768032014a72d38adcdf993de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.405139 kubelet[2565]: I0430 03:29:17.404849 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8cc4a85a0fa87ae1cad23b58cbda0244-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-32b52f0300\" (UID: \"8cc4a85a0fa87ae1cad23b58cbda0244\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.405139 kubelet[2565]: I0430 03:29:17.404882 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8cc4a85a0fa87ae1cad23b58cbda0244-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-32b52f0300\" (UID: \"8cc4a85a0fa87ae1cad23b58cbda0244\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.405139 kubelet[2565]: I0430 03:29:17.404900 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8cc4a85a0fa87ae1cad23b58cbda0244-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-32b52f0300\" (UID: \"8cc4a85a0fa87ae1cad23b58cbda0244\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.405139 kubelet[2565]: I0430 03:29:17.404916 2565 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/435ef49768032014a72d38adcdf993de-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-32b52f0300\" (UID: \"435ef49768032014a72d38adcdf993de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.405139 kubelet[2565]: I0430 03:29:17.404933 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/435ef49768032014a72d38adcdf993de-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-32b52f0300\" (UID: \"435ef49768032014a72d38adcdf993de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.405386 kubelet[2565]: I0430 03:29:17.404949 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/435ef49768032014a72d38adcdf993de-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-32b52f0300\" (UID: \"435ef49768032014a72d38adcdf993de\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.405386 kubelet[2565]: I0430 03:29:17.404966 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b63497f7deb34b109c4496d822a99ec-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-32b52f0300\" (UID: \"0b63497f7deb34b109c4496d822a99ec\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-32b52f0300" Apr 30 03:29:17.465154 sudo[2598]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 03:29:17.466485 sudo[2598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 03:29:17.646865 kubelet[2565]: E0430 03:29:17.645578 2565 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:17.648190 kubelet[2565]: E0430 03:29:17.648018 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:17.652304 kubelet[2565]: E0430 03:29:17.650447 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:18.183331 kubelet[2565]: I0430 03:29:18.182824 2565 apiserver.go:52] "Watching apiserver" Apr 30 03:29:18.204702 kubelet[2565]: I0430 03:29:18.204642 2565 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:29:18.206873 sudo[2598]: pam_unix(sudo:session): session closed for user root Apr 30 03:29:18.272735 kubelet[2565]: E0430 03:29:18.270528 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:18.272735 kubelet[2565]: E0430 03:29:18.270676 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:18.290657 kubelet[2565]: W0430 03:29:18.289990 2565 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:29:18.291590 kubelet[2565]: E0430 03:29:18.290756 2565 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-a-32b52f0300\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-32b52f0300" Apr 30 
03:29:18.293696 kubelet[2565]: E0430 03:29:18.293039 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:18.339363 kubelet[2565]: I0430 03:29:18.339277 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-32b52f0300" podStartSLOduration=1.339253748 podStartE2EDuration="1.339253748s" podCreationTimestamp="2025-04-30 03:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:18.325214107 +0000 UTC m=+1.275718891" watchObservedRunningTime="2025-04-30 03:29:18.339253748 +0000 UTC m=+1.289758526" Apr 30 03:29:18.339616 kubelet[2565]: I0430 03:29:18.339422 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-a-32b52f0300" podStartSLOduration=1.339413951 podStartE2EDuration="1.339413951s" podCreationTimestamp="2025-04-30 03:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:18.339201632 +0000 UTC m=+1.289706410" watchObservedRunningTime="2025-04-30 03:29:18.339413951 +0000 UTC m=+1.289918736" Apr 30 03:29:18.359373 kubelet[2565]: I0430 03:29:18.358889 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-a-32b52f0300" podStartSLOduration=3.358859428 podStartE2EDuration="3.358859428s" podCreationTimestamp="2025-04-30 03:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:18.358853925 +0000 UTC m=+1.309358700" watchObservedRunningTime="2025-04-30 03:29:18.358859428 +0000 UTC m=+1.309364209" Apr 30 03:29:19.272135 
kubelet[2565]: E0430 03:29:19.272093 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:19.818261 sudo[1651]: pam_unix(sudo:session): session closed for user root Apr 30 03:29:19.824630 sshd[1648]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:19.828233 systemd[1]: sshd@6-164.92.87.160:22-139.178.89.65:52842.service: Deactivated successfully. Apr 30 03:29:19.831484 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:29:19.831768 systemd[1]: session-7.scope: Consumed 6.491s CPU time, 189.2M memory peak, 0B memory swap peak. Apr 30 03:29:19.834793 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:29:19.836707 systemd-logind[1444]: Removed session 7. Apr 30 03:29:20.280489 kubelet[2565]: E0430 03:29:20.279822 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:21.631359 systemd-timesyncd[1340]: Contacted time server 23.186.168.127:123 (2.flatcar.pool.ntp.org). Apr 30 03:29:21.631432 systemd-timesyncd[1340]: Initial clock synchronization to Wed 2025-04-30 03:29:21.631053 UTC. Apr 30 03:29:21.631548 systemd-resolved[1322]: Clock change detected. Flushing caches. 
Apr 30 03:29:23.150126 kubelet[2565]: E0430 03:29:23.150077 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:23.893932 kubelet[2565]: E0430 03:29:23.893866 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:23.914825 kubelet[2565]: E0430 03:29:23.912991 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:23.914825 kubelet[2565]: E0430 03:29:23.913010 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:24.913252 kubelet[2565]: E0430 03:29:24.912838 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:28.143378 kubelet[2565]: E0430 03:29:28.143312 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:28.921986 kubelet[2565]: E0430 03:29:28.921634 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:31.115260 update_engine[1445]: I20250430 03:29:31.115111 1445 update_attempter.cc:509] Updating boot flags... 
Apr 30 03:29:31.158890 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2641) Apr 30 03:29:31.241839 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2642) Apr 30 03:29:31.301854 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2642) Apr 30 03:29:31.823721 kubelet[2565]: I0430 03:29:31.823551 2565 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:29:31.825083 kubelet[2565]: I0430 03:29:31.824490 2565 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:29:31.825170 containerd[1459]: time="2025-04-30T03:29:31.824148664Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:29:32.656830 kubelet[2565]: I0430 03:29:32.654827 2565 topology_manager.go:215] "Topology Admit Handler" podUID="d42debee-e202-4c16-aaf8-0ec8fd2be920" podNamespace="kube-system" podName="kube-proxy-7254q" Apr 30 03:29:32.668857 systemd[1]: Created slice kubepods-besteffort-podd42debee_e202_4c16_aaf8_0ec8fd2be920.slice - libcontainer container kubepods-besteffort-podd42debee_e202_4c16_aaf8_0ec8fd2be920.slice. Apr 30 03:29:32.675273 kubelet[2565]: I0430 03:29:32.674445 2565 topology_manager.go:215] "Topology Admit Handler" podUID="76e9132f-d854-4a4d-ab40-398170125691" podNamespace="kube-system" podName="cilium-th48c" Apr 30 03:29:32.687469 systemd[1]: Created slice kubepods-burstable-pod76e9132f_d854_4a4d_ab40_398170125691.slice - libcontainer container kubepods-burstable-pod76e9132f_d854_4a4d_ab40_398170125691.slice. 
Apr 30 03:29:32.850981 kubelet[2565]: I0430 03:29:32.850409 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cilium-cgroup\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.850981 kubelet[2565]: I0430 03:29:32.850476 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cilium-run\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.850981 kubelet[2565]: I0430 03:29:32.850504 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-hostproc\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.850981 kubelet[2565]: I0430 03:29:32.850527 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d42debee-e202-4c16-aaf8-0ec8fd2be920-lib-modules\") pod \"kube-proxy-7254q\" (UID: \"d42debee-e202-4c16-aaf8-0ec8fd2be920\") " pod="kube-system/kube-proxy-7254q" Apr 30 03:29:32.850981 kubelet[2565]: I0430 03:29:32.850554 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn28l\" (UniqueName: \"kubernetes.io/projected/d42debee-e202-4c16-aaf8-0ec8fd2be920-kube-api-access-fn28l\") pod \"kube-proxy-7254q\" (UID: \"d42debee-e202-4c16-aaf8-0ec8fd2be920\") " pod="kube-system/kube-proxy-7254q" Apr 30 03:29:32.850981 kubelet[2565]: I0430 03:29:32.850586 2565 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-xtables-lock\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.851834 kubelet[2565]: I0430 03:29:32.850616 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-etc-cni-netd\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.851834 kubelet[2565]: I0430 03:29:32.850633 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d42debee-e202-4c16-aaf8-0ec8fd2be920-kube-proxy\") pod \"kube-proxy-7254q\" (UID: \"d42debee-e202-4c16-aaf8-0ec8fd2be920\") " pod="kube-system/kube-proxy-7254q" Apr 30 03:29:32.851834 kubelet[2565]: I0430 03:29:32.850649 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d42debee-e202-4c16-aaf8-0ec8fd2be920-xtables-lock\") pod \"kube-proxy-7254q\" (UID: \"d42debee-e202-4c16-aaf8-0ec8fd2be920\") " pod="kube-system/kube-proxy-7254q" Apr 30 03:29:32.851834 kubelet[2565]: I0430 03:29:32.850664 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-host-proc-sys-kernel\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.851834 kubelet[2565]: I0430 03:29:32.850679 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pck\" (UniqueName: 
\"kubernetes.io/projected/76e9132f-d854-4a4d-ab40-398170125691-kube-api-access-q4pck\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.851834 kubelet[2565]: I0430 03:29:32.850694 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cni-path\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.852235 kubelet[2565]: I0430 03:29:32.850711 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76e9132f-d854-4a4d-ab40-398170125691-cilium-config-path\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.852235 kubelet[2565]: I0430 03:29:32.850725 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-bpf-maps\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.852235 kubelet[2565]: I0430 03:29:32.850741 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-lib-modules\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.852235 kubelet[2565]: I0430 03:29:32.850759 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76e9132f-d854-4a4d-ab40-398170125691-clustermesh-secrets\") pod \"cilium-th48c\" (UID: 
\"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.852235 kubelet[2565]: I0430 03:29:32.850773 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76e9132f-d854-4a4d-ab40-398170125691-hubble-tls\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.852235 kubelet[2565]: I0430 03:29:32.850808 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-host-proc-sys-net\") pod \"cilium-th48c\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " pod="kube-system/cilium-th48c" Apr 30 03:29:32.918857 kubelet[2565]: I0430 03:29:32.918633 2565 topology_manager.go:215] "Topology Admit Handler" podUID="4c8e27c5-1eba-488e-97d5-3b54b80364e2" podNamespace="kube-system" podName="cilium-operator-599987898-zn857" Apr 30 03:29:32.933361 systemd[1]: Created slice kubepods-besteffort-pod4c8e27c5_1eba_488e_97d5_3b54b80364e2.slice - libcontainer container kubepods-besteffort-pod4c8e27c5_1eba_488e_97d5_3b54b80364e2.slice. 
Apr 30 03:29:33.084212 kubelet[2565]: I0430 03:29:33.083866 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-587j7\" (UniqueName: \"kubernetes.io/projected/4c8e27c5-1eba-488e-97d5-3b54b80364e2-kube-api-access-587j7\") pod \"cilium-operator-599987898-zn857\" (UID: \"4c8e27c5-1eba-488e-97d5-3b54b80364e2\") " pod="kube-system/cilium-operator-599987898-zn857" Apr 30 03:29:33.084212 kubelet[2565]: I0430 03:29:33.083951 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c8e27c5-1eba-488e-97d5-3b54b80364e2-cilium-config-path\") pod \"cilium-operator-599987898-zn857\" (UID: \"4c8e27c5-1eba-488e-97d5-3b54b80364e2\") " pod="kube-system/cilium-operator-599987898-zn857" Apr 30 03:29:33.242266 kubelet[2565]: E0430 03:29:33.241996 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:33.244321 containerd[1459]: time="2025-04-30T03:29:33.243607848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zn857,Uid:4c8e27c5-1eba-488e-97d5-3b54b80364e2,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:33.283645 kubelet[2565]: E0430 03:29:33.283244 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:33.286396 containerd[1459]: time="2025-04-30T03:29:33.286329103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7254q,Uid:d42debee-e202-4c16-aaf8-0ec8fd2be920,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:33.286973 containerd[1459]: time="2025-04-30T03:29:33.285997259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:33.286973 containerd[1459]: time="2025-04-30T03:29:33.286298838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:33.286973 containerd[1459]: time="2025-04-30T03:29:33.286363930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:33.286973 containerd[1459]: time="2025-04-30T03:29:33.286796328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:33.292416 kubelet[2565]: E0430 03:29:33.291553 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:33.294338 containerd[1459]: time="2025-04-30T03:29:33.293246471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-th48c,Uid:76e9132f-d854-4a4d-ab40-398170125691,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:33.326073 systemd[1]: Started cri-containerd-a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd.scope - libcontainer container a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd. Apr 30 03:29:33.355199 containerd[1459]: time="2025-04-30T03:29:33.353733862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:33.355199 containerd[1459]: time="2025-04-30T03:29:33.353843006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:33.355199 containerd[1459]: time="2025-04-30T03:29:33.353855298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:33.355199 containerd[1459]: time="2025-04-30T03:29:33.353951226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:33.379275 containerd[1459]: time="2025-04-30T03:29:33.377768189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:33.379275 containerd[1459]: time="2025-04-30T03:29:33.378013573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:33.379275 containerd[1459]: time="2025-04-30T03:29:33.378060522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:33.379275 containerd[1459]: time="2025-04-30T03:29:33.378244968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:33.391042 systemd[1]: Started cri-containerd-57a032bb1bc5c3c080717258a1115a663a27a1974661c3d179f7b024f423453e.scope - libcontainer container 57a032bb1bc5c3c080717258a1115a663a27a1974661c3d179f7b024f423453e. Apr 30 03:29:33.422043 systemd[1]: Started cri-containerd-f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b.scope - libcontainer container f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b. 
Apr 30 03:29:33.472409 containerd[1459]: time="2025-04-30T03:29:33.472178248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7254q,Uid:d42debee-e202-4c16-aaf8-0ec8fd2be920,Namespace:kube-system,Attempt:0,} returns sandbox id \"57a032bb1bc5c3c080717258a1115a663a27a1974661c3d179f7b024f423453e\"" Apr 30 03:29:33.475092 kubelet[2565]: E0430 03:29:33.474839 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:33.481819 containerd[1459]: time="2025-04-30T03:29:33.481448338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zn857,Uid:4c8e27c5-1eba-488e-97d5-3b54b80364e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\"" Apr 30 03:29:33.483049 containerd[1459]: time="2025-04-30T03:29:33.482643641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-th48c,Uid:76e9132f-d854-4a4d-ab40-398170125691,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\"" Apr 30 03:29:33.485018 kubelet[2565]: E0430 03:29:33.483851 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:33.487136 kubelet[2565]: E0430 03:29:33.486167 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:33.487295 containerd[1459]: time="2025-04-30T03:29:33.486921361Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 03:29:33.490393 containerd[1459]: 
time="2025-04-30T03:29:33.489928727Z" level=info msg="CreateContainer within sandbox \"57a032bb1bc5c3c080717258a1115a663a27a1974661c3d179f7b024f423453e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:29:33.519450 containerd[1459]: time="2025-04-30T03:29:33.519283071Z" level=info msg="CreateContainer within sandbox \"57a032bb1bc5c3c080717258a1115a663a27a1974661c3d179f7b024f423453e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c0cd1e03bfc020632ef109a5d1ea7c06b0dff28e0eb05b9ac036acd609310f61\"" Apr 30 03:29:33.528196 containerd[1459]: time="2025-04-30T03:29:33.521989264Z" level=info msg="StartContainer for \"c0cd1e03bfc020632ef109a5d1ea7c06b0dff28e0eb05b9ac036acd609310f61\"" Apr 30 03:29:33.558162 systemd[1]: Started cri-containerd-c0cd1e03bfc020632ef109a5d1ea7c06b0dff28e0eb05b9ac036acd609310f61.scope - libcontainer container c0cd1e03bfc020632ef109a5d1ea7c06b0dff28e0eb05b9ac036acd609310f61. Apr 30 03:29:33.601816 containerd[1459]: time="2025-04-30T03:29:33.601709034Z" level=info msg="StartContainer for \"c0cd1e03bfc020632ef109a5d1ea7c06b0dff28e0eb05b9ac036acd609310f61\" returns successfully" Apr 30 03:29:33.948135 kubelet[2565]: E0430 03:29:33.947624 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:34.940973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366587276.mount: Deactivated successfully. 
Apr 30 03:29:36.067526 containerd[1459]: time="2025-04-30T03:29:36.067415078Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:36.069819 containerd[1459]: time="2025-04-30T03:29:36.069750173Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:36.069929 containerd[1459]: time="2025-04-30T03:29:36.069836087Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 03:29:36.072813 containerd[1459]: time="2025-04-30T03:29:36.072725338Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.585751556s" Apr 30 03:29:36.072813 containerd[1459]: time="2025-04-30T03:29:36.072817439Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 03:29:36.095008 containerd[1459]: time="2025-04-30T03:29:36.093889275Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 03:29:36.127650 containerd[1459]: time="2025-04-30T03:29:36.127572873Z" level=info msg="CreateContainer within sandbox 
\"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 03:29:36.145697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2041481006.mount: Deactivated successfully. Apr 30 03:29:36.149937 containerd[1459]: time="2025-04-30T03:29:36.149678234Z" level=info msg="CreateContainer within sandbox \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\"" Apr 30 03:29:36.154105 containerd[1459]: time="2025-04-30T03:29:36.152560027Z" level=info msg="StartContainer for \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\"" Apr 30 03:29:36.199798 systemd[1]: run-containerd-runc-k8s.io-dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755-runc.ZNpv4s.mount: Deactivated successfully. Apr 30 03:29:36.210235 systemd[1]: Started cri-containerd-dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755.scope - libcontainer container dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755. 
Apr 30 03:29:36.266092 containerd[1459]: time="2025-04-30T03:29:36.265948091Z" level=info msg="StartContainer for \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\" returns successfully" Apr 30 03:29:36.966995 kubelet[2565]: E0430 03:29:36.966586 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:37.044047 kubelet[2565]: I0430 03:29:37.043641 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7254q" podStartSLOduration=5.04359096 podStartE2EDuration="5.04359096s" podCreationTimestamp="2025-04-30 03:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:33.965408286 +0000 UTC m=+16.292185509" watchObservedRunningTime="2025-04-30 03:29:37.04359096 +0000 UTC m=+19.370368196" Apr 30 03:29:37.890435 kubelet[2565]: I0430 03:29:37.889579 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-zn857" podStartSLOduration=3.2943037410000002 podStartE2EDuration="5.889467085s" podCreationTimestamp="2025-04-30 03:29:32 +0000 UTC" firstStartedPulling="2025-04-30 03:29:33.485571946 +0000 UTC m=+15.812349143" lastFinishedPulling="2025-04-30 03:29:36.080735272 +0000 UTC m=+18.407512487" observedRunningTime="2025-04-30 03:29:37.045906845 +0000 UTC m=+19.372684082" watchObservedRunningTime="2025-04-30 03:29:37.889467085 +0000 UTC m=+20.216244310" Apr 30 03:29:37.992491 kubelet[2565]: E0430 03:29:37.992387 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:41.603536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942911620.mount: Deactivated 
successfully. Apr 30 03:29:45.210858 containerd[1459]: time="2025-04-30T03:29:45.210718230Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.213062 containerd[1459]: time="2025-04-30T03:29:45.212854426Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 03:29:45.215763 containerd[1459]: time="2025-04-30T03:29:45.214075758Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:45.215763 containerd[1459]: time="2025-04-30T03:29:45.215594016Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.121606003s" Apr 30 03:29:45.215763 containerd[1459]: time="2025-04-30T03:29:45.215634484Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 03:29:45.221819 containerd[1459]: time="2025-04-30T03:29:45.221731741Z" level=info msg="CreateContainer within sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 03:29:45.342502 containerd[1459]: time="2025-04-30T03:29:45.342216439Z" level=info msg="CreateContainer within sandbox 
\"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\"" Apr 30 03:29:45.343714 containerd[1459]: time="2025-04-30T03:29:45.343318755Z" level=info msg="StartContainer for \"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\"" Apr 30 03:29:45.594134 systemd[1]: run-containerd-runc-k8s.io-bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac-runc.oZkWM4.mount: Deactivated successfully. Apr 30 03:29:45.603130 systemd[1]: Started cri-containerd-bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac.scope - libcontainer container bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac. Apr 30 03:29:45.657608 containerd[1459]: time="2025-04-30T03:29:45.656418601Z" level=info msg="StartContainer for \"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\" returns successfully" Apr 30 03:29:45.667621 systemd[1]: cri-containerd-bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac.scope: Deactivated successfully. 
Apr 30 03:29:45.768921 containerd[1459]: time="2025-04-30T03:29:45.748917658Z" level=info msg="shim disconnected" id=bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac namespace=k8s.io Apr 30 03:29:45.768921 containerd[1459]: time="2025-04-30T03:29:45.768909606Z" level=warning msg="cleaning up after shim disconnected" id=bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac namespace=k8s.io Apr 30 03:29:45.768921 containerd[1459]: time="2025-04-30T03:29:45.768936030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:45.786834 containerd[1459]: time="2025-04-30T03:29:45.786646797Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:29:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:29:46.027489 kubelet[2565]: E0430 03:29:46.027334 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:46.042035 containerd[1459]: time="2025-04-30T03:29:46.041907016Z" level=info msg="CreateContainer within sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 03:29:46.069266 containerd[1459]: time="2025-04-30T03:29:46.069069163Z" level=info msg="CreateContainer within sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\"" Apr 30 03:29:46.071422 containerd[1459]: time="2025-04-30T03:29:46.070153186Z" level=info msg="StartContainer for \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\"" Apr 30 03:29:46.107297 systemd[1]: Started 
cri-containerd-773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed.scope - libcontainer container 773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed. Apr 30 03:29:46.147842 containerd[1459]: time="2025-04-30T03:29:46.147695749Z" level=info msg="StartContainer for \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\" returns successfully" Apr 30 03:29:46.164696 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:29:46.164992 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:29:46.165088 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:29:46.173379 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:29:46.173657 systemd[1]: cri-containerd-773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed.scope: Deactivated successfully. Apr 30 03:29:46.217425 containerd[1459]: time="2025-04-30T03:29:46.217010995Z" level=info msg="shim disconnected" id=773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed namespace=k8s.io Apr 30 03:29:46.217425 containerd[1459]: time="2025-04-30T03:29:46.217159594Z" level=warning msg="cleaning up after shim disconnected" id=773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed namespace=k8s.io Apr 30 03:29:46.217425 containerd[1459]: time="2025-04-30T03:29:46.217174568Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:46.240802 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:29:46.335606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac-rootfs.mount: Deactivated successfully. 
Apr 30 03:29:47.036811 kubelet[2565]: E0430 03:29:47.035830 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:47.058942 containerd[1459]: time="2025-04-30T03:29:47.058410875Z" level=info msg="CreateContainer within sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 03:29:47.105207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248874355.mount: Deactivated successfully. Apr 30 03:29:47.114413 containerd[1459]: time="2025-04-30T03:29:47.114331818Z" level=info msg="CreateContainer within sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\"" Apr 30 03:29:47.116916 containerd[1459]: time="2025-04-30T03:29:47.115909025Z" level=info msg="StartContainer for \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\"" Apr 30 03:29:47.165106 systemd[1]: Started cri-containerd-dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7.scope - libcontainer container dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7. Apr 30 03:29:47.215344 containerd[1459]: time="2025-04-30T03:29:47.215251427Z" level=info msg="StartContainer for \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\" returns successfully" Apr 30 03:29:47.221046 systemd[1]: cri-containerd-dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7.scope: Deactivated successfully. 
Apr 30 03:29:47.259850 containerd[1459]: time="2025-04-30T03:29:47.259557937Z" level=info msg="shim disconnected" id=dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7 namespace=k8s.io Apr 30 03:29:47.259850 containerd[1459]: time="2025-04-30T03:29:47.259637912Z" level=warning msg="cleaning up after shim disconnected" id=dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7 namespace=k8s.io Apr 30 03:29:47.259850 containerd[1459]: time="2025-04-30T03:29:47.259651325Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:47.334159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7-rootfs.mount: Deactivated successfully. Apr 30 03:29:48.047385 kubelet[2565]: E0430 03:29:48.047329 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:48.057355 containerd[1459]: time="2025-04-30T03:29:48.057120408Z" level=info msg="CreateContainer within sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 03:29:48.090971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1862858216.mount: Deactivated successfully. 
Apr 30 03:29:48.110375 containerd[1459]: time="2025-04-30T03:29:48.110292178Z" level=info msg="CreateContainer within sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\"" Apr 30 03:29:48.113670 containerd[1459]: time="2025-04-30T03:29:48.113577080Z" level=info msg="StartContainer for \"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\"" Apr 30 03:29:48.168430 systemd[1]: Started cri-containerd-f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959.scope - libcontainer container f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959. Apr 30 03:29:48.213522 systemd[1]: cri-containerd-f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959.scope: Deactivated successfully. Apr 30 03:29:48.222595 containerd[1459]: time="2025-04-30T03:29:48.222514439Z" level=info msg="StartContainer for \"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\" returns successfully" Apr 30 03:29:48.264441 containerd[1459]: time="2025-04-30T03:29:48.264320591Z" level=info msg="shim disconnected" id=f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959 namespace=k8s.io Apr 30 03:29:48.264441 containerd[1459]: time="2025-04-30T03:29:48.264440723Z" level=warning msg="cleaning up after shim disconnected" id=f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959 namespace=k8s.io Apr 30 03:29:48.264441 containerd[1459]: time="2025-04-30T03:29:48.264455517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:48.337764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959-rootfs.mount: Deactivated successfully. 
Apr 30 03:29:49.061992 kubelet[2565]: E0430 03:29:49.061321 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:49.070062 containerd[1459]: time="2025-04-30T03:29:49.067829221Z" level=info msg="CreateContainer within sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 03:29:49.118818 containerd[1459]: time="2025-04-30T03:29:49.117078498Z" level=info msg="CreateContainer within sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\"" Apr 30 03:29:49.124820 containerd[1459]: time="2025-04-30T03:29:49.122409144Z" level=info msg="StartContainer for \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\"" Apr 30 03:29:49.196246 systemd[1]: Started cri-containerd-8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad.scope - libcontainer container 8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad. 
Apr 30 03:29:49.331944 containerd[1459]: time="2025-04-30T03:29:49.328929629Z" level=info msg="StartContainer for \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\" returns successfully" Apr 30 03:29:49.572541 kubelet[2565]: I0430 03:29:49.571804 2565 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:29:49.630259 kubelet[2565]: I0430 03:29:49.628930 2565 topology_manager.go:215] "Topology Admit Handler" podUID="c0081508-c8e3-490a-b57b-b113cee0b31d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-99zjh" Apr 30 03:29:49.634061 kubelet[2565]: I0430 03:29:49.633134 2565 topology_manager.go:215] "Topology Admit Handler" podUID="5812e052-e1c2-4134-b441-638c1f81e36b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-frmqn" Apr 30 03:29:49.675217 systemd[1]: Created slice kubepods-burstable-podc0081508_c8e3_490a_b57b_b113cee0b31d.slice - libcontainer container kubepods-burstable-podc0081508_c8e3_490a_b57b_b113cee0b31d.slice. Apr 30 03:29:49.702031 systemd[1]: Created slice kubepods-burstable-pod5812e052_e1c2_4134_b441_638c1f81e36b.slice - libcontainer container kubepods-burstable-pod5812e052_e1c2_4134_b441_638c1f81e36b.slice. 
Apr 30 03:29:49.827517 kubelet[2565]: I0430 03:29:49.827167 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-755rd\" (UniqueName: \"kubernetes.io/projected/c0081508-c8e3-490a-b57b-b113cee0b31d-kube-api-access-755rd\") pod \"coredns-7db6d8ff4d-99zjh\" (UID: \"c0081508-c8e3-490a-b57b-b113cee0b31d\") " pod="kube-system/coredns-7db6d8ff4d-99zjh" Apr 30 03:29:49.827517 kubelet[2565]: I0430 03:29:49.827313 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5812e052-e1c2-4134-b441-638c1f81e36b-config-volume\") pod \"coredns-7db6d8ff4d-frmqn\" (UID: \"5812e052-e1c2-4134-b441-638c1f81e36b\") " pod="kube-system/coredns-7db6d8ff4d-frmqn" Apr 30 03:29:49.827517 kubelet[2565]: I0430 03:29:49.827355 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0081508-c8e3-490a-b57b-b113cee0b31d-config-volume\") pod \"coredns-7db6d8ff4d-99zjh\" (UID: \"c0081508-c8e3-490a-b57b-b113cee0b31d\") " pod="kube-system/coredns-7db6d8ff4d-99zjh" Apr 30 03:29:49.827517 kubelet[2565]: I0430 03:29:49.827390 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f65pq\" (UniqueName: \"kubernetes.io/projected/5812e052-e1c2-4134-b441-638c1f81e36b-kube-api-access-f65pq\") pod \"coredns-7db6d8ff4d-frmqn\" (UID: \"5812e052-e1c2-4134-b441-638c1f81e36b\") " pod="kube-system/coredns-7db6d8ff4d-frmqn" Apr 30 03:29:49.993067 kubelet[2565]: E0430 03:29:49.991378 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:50.006195 containerd[1459]: time="2025-04-30T03:29:50.006118621Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-99zjh,Uid:c0081508-c8e3-490a-b57b-b113cee0b31d,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:50.013849 kubelet[2565]: E0430 03:29:50.012568 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:50.014102 containerd[1459]: time="2025-04-30T03:29:50.013278957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmqn,Uid:5812e052-e1c2-4134-b441-638c1f81e36b,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:50.116226 kubelet[2565]: E0430 03:29:50.115462 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:50.191110 kubelet[2565]: I0430 03:29:50.190096 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-th48c" podStartSLOduration=6.460364478 podStartE2EDuration="18.19006735s" podCreationTimestamp="2025-04-30 03:29:32 +0000 UTC" firstStartedPulling="2025-04-30 03:29:33.487724931 +0000 UTC m=+15.814502132" lastFinishedPulling="2025-04-30 03:29:45.217427788 +0000 UTC m=+27.544205004" observedRunningTime="2025-04-30 03:29:50.17914032 +0000 UTC m=+32.505917577" watchObservedRunningTime="2025-04-30 03:29:50.19006735 +0000 UTC m=+32.516844572" Apr 30 03:29:51.120319 kubelet[2565]: E0430 03:29:51.118350 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:51.942068 systemd-networkd[1368]: cilium_host: Link UP Apr 30 03:29:51.942713 systemd-networkd[1368]: cilium_net: Link UP Apr 30 03:29:51.942719 systemd-networkd[1368]: cilium_net: Gained carrier Apr 30 03:29:51.946054 systemd-networkd[1368]: cilium_host: Gained carrier Apr 
30 03:29:52.020939 systemd-networkd[1368]: cilium_host: Gained IPv6LL Apr 30 03:29:52.120298 systemd-networkd[1368]: cilium_vxlan: Link UP Apr 30 03:29:52.120306 systemd-networkd[1368]: cilium_vxlan: Gained carrier Apr 30 03:29:52.121150 kubelet[2565]: E0430 03:29:52.120958 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:52.523863 kernel: NET: Registered PF_ALG protocol family Apr 30 03:29:52.660585 systemd-networkd[1368]: cilium_net: Gained IPv6LL Apr 30 03:29:53.520695 systemd-networkd[1368]: lxc_health: Link UP Apr 30 03:29:53.521218 systemd-networkd[1368]: lxc_health: Gained carrier Apr 30 03:29:53.620705 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL Apr 30 03:29:54.133824 kernel: eth0: renamed from tmpe234e Apr 30 03:29:54.138150 systemd-networkd[1368]: lxc13c794eb6faf: Link UP Apr 30 03:29:54.140741 systemd-networkd[1368]: lxc13c794eb6faf: Gained carrier Apr 30 03:29:54.174286 systemd-networkd[1368]: lxcca59e91f3155: Link UP Apr 30 03:29:54.183151 kernel: eth0: renamed from tmpf0885 Apr 30 03:29:54.191109 systemd-networkd[1368]: lxcca59e91f3155: Gained carrier Apr 30 03:29:54.333049 kubelet[2565]: E0430 03:29:54.332997 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:54.644110 systemd-networkd[1368]: lxc_health: Gained IPv6LL Apr 30 03:29:55.294902 kubelet[2565]: E0430 03:29:55.294849 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:55.348016 systemd-networkd[1368]: lxc13c794eb6faf: Gained IPv6LL Apr 30 03:29:55.796004 systemd-networkd[1368]: lxcca59e91f3155: Gained IPv6LL Apr 30 03:29:56.130483 
kubelet[2565]: E0430 03:29:56.130322 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:59.426065 containerd[1459]: time="2025-04-30T03:29:59.420578928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:59.426065 containerd[1459]: time="2025-04-30T03:29:59.420694212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:59.426065 containerd[1459]: time="2025-04-30T03:29:59.420718414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:59.426065 containerd[1459]: time="2025-04-30T03:29:59.420881603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:59.443248 containerd[1459]: time="2025-04-30T03:29:59.442546579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:59.443248 containerd[1459]: time="2025-04-30T03:29:59.442649405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:59.443248 containerd[1459]: time="2025-04-30T03:29:59.442665930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:59.443248 containerd[1459]: time="2025-04-30T03:29:59.442833468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:59.482583 systemd[1]: run-containerd-runc-k8s.io-e234eabd3901020470b3630a326bf2cec7f875a34679a07fae3feb96b6b4d976-runc.Un91wj.mount: Deactivated successfully. Apr 30 03:29:59.495616 systemd[1]: Started cri-containerd-e234eabd3901020470b3630a326bf2cec7f875a34679a07fae3feb96b6b4d976.scope - libcontainer container e234eabd3901020470b3630a326bf2cec7f875a34679a07fae3feb96b6b4d976. Apr 30 03:29:59.520614 systemd[1]: Started cri-containerd-f08851c24d4c60319a17d16d1dc544ebf9ebe94b1ec89cd3570615f8df9cb585.scope - libcontainer container f08851c24d4c60319a17d16d1dc544ebf9ebe94b1ec89cd3570615f8df9cb585. Apr 30 03:29:59.649504 containerd[1459]: time="2025-04-30T03:29:59.649328907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-99zjh,Uid:c0081508-c8e3-490a-b57b-b113cee0b31d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f08851c24d4c60319a17d16d1dc544ebf9ebe94b1ec89cd3570615f8df9cb585\"" Apr 30 03:29:59.651992 kubelet[2565]: E0430 03:29:59.651757 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:59.657830 containerd[1459]: time="2025-04-30T03:29:59.657686384Z" level=info msg="CreateContainer within sandbox \"f08851c24d4c60319a17d16d1dc544ebf9ebe94b1ec89cd3570615f8df9cb585\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:59.668775 containerd[1459]: time="2025-04-30T03:29:59.668692739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmqn,Uid:5812e052-e1c2-4134-b441-638c1f81e36b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e234eabd3901020470b3630a326bf2cec7f875a34679a07fae3feb96b6b4d976\"" Apr 30 03:29:59.671527 kubelet[2565]: E0430 03:29:59.671250 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:29:59.678490 containerd[1459]: time="2025-04-30T03:29:59.677364227Z" level=info msg="CreateContainer within sandbox \"e234eabd3901020470b3630a326bf2cec7f875a34679a07fae3feb96b6b4d976\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:59.699518 containerd[1459]: time="2025-04-30T03:29:59.699451910Z" level=info msg="CreateContainer within sandbox \"f08851c24d4c60319a17d16d1dc544ebf9ebe94b1ec89cd3570615f8df9cb585\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb47f06e5fdca7b46f39b0caf0174be1f88afa76b2634b9e3219c5df7375d6cd\"" Apr 30 03:29:59.700692 containerd[1459]: time="2025-04-30T03:29:59.700540336Z" level=info msg="StartContainer for \"eb47f06e5fdca7b46f39b0caf0174be1f88afa76b2634b9e3219c5df7375d6cd\"" Apr 30 03:29:59.709233 containerd[1459]: time="2025-04-30T03:29:59.709142170Z" level=info msg="CreateContainer within sandbox \"e234eabd3901020470b3630a326bf2cec7f875a34679a07fae3feb96b6b4d976\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"786ac628131c2c6ccfbaab407d19a0a4675e7f0f549bb96929b53391fa4f0d17\"" Apr 30 03:29:59.715423 containerd[1459]: time="2025-04-30T03:29:59.714171795Z" level=info msg="StartContainer for \"786ac628131c2c6ccfbaab407d19a0a4675e7f0f549bb96929b53391fa4f0d17\"" Apr 30 03:29:59.755435 systemd[1]: Started cri-containerd-eb47f06e5fdca7b46f39b0caf0174be1f88afa76b2634b9e3219c5df7375d6cd.scope - libcontainer container eb47f06e5fdca7b46f39b0caf0174be1f88afa76b2634b9e3219c5df7375d6cd. Apr 30 03:29:59.767033 systemd[1]: Started cri-containerd-786ac628131c2c6ccfbaab407d19a0a4675e7f0f549bb96929b53391fa4f0d17.scope - libcontainer container 786ac628131c2c6ccfbaab407d19a0a4675e7f0f549bb96929b53391fa4f0d17. 
Apr 30 03:29:59.808929 containerd[1459]: time="2025-04-30T03:29:59.808745548Z" level=info msg="StartContainer for \"eb47f06e5fdca7b46f39b0caf0174be1f88afa76b2634b9e3219c5df7375d6cd\" returns successfully" Apr 30 03:29:59.835470 containerd[1459]: time="2025-04-30T03:29:59.834616330Z" level=info msg="StartContainer for \"786ac628131c2c6ccfbaab407d19a0a4675e7f0f549bb96929b53391fa4f0d17\" returns successfully" Apr 30 03:30:00.148806 kubelet[2565]: E0430 03:30:00.147142 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:00.159597 kubelet[2565]: E0430 03:30:00.159011 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:00.179144 kubelet[2565]: I0430 03:30:00.178562 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-frmqn" podStartSLOduration=28.178533624 podStartE2EDuration="28.178533624s" podCreationTimestamp="2025-04-30 03:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:00.176502566 +0000 UTC m=+42.503279769" watchObservedRunningTime="2025-04-30 03:30:00.178533624 +0000 UTC m=+42.505310856" Apr 30 03:30:00.252931 kubelet[2565]: I0430 03:30:00.252297 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-99zjh" podStartSLOduration=28.252268344 podStartE2EDuration="28.252268344s" podCreationTimestamp="2025-04-30 03:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:30:00.211575335 +0000 UTC m=+42.538352554" watchObservedRunningTime="2025-04-30 
03:30:00.252268344 +0000 UTC m=+42.579045569" Apr 30 03:30:01.161458 kubelet[2565]: E0430 03:30:01.161289 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:01.161458 kubelet[2565]: E0430 03:30:01.161302 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:02.167393 kubelet[2565]: E0430 03:30:02.165034 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:02.170514 kubelet[2565]: E0430 03:30:02.169311 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:02.580481 systemd[1]: Started sshd@7-164.92.87.160:22-139.178.89.65:55422.service - OpenSSH per-connection server daemon (139.178.89.65:55422). Apr 30 03:30:02.686484 sshd[3947]: Accepted publickey for core from 139.178.89.65 port 55422 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:02.691294 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:02.702289 systemd-logind[1444]: New session 8 of user core. Apr 30 03:30:02.712392 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:30:03.406758 sshd[3947]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:03.411679 systemd[1]: sshd@7-164.92.87.160:22-139.178.89.65:55422.service: Deactivated successfully. Apr 30 03:30:03.416982 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:30:03.421724 systemd-logind[1444]: Session 8 logged out. 
Waiting for processes to exit. Apr 30 03:30:03.424434 systemd-logind[1444]: Removed session 8. Apr 30 03:30:08.432358 systemd[1]: Started sshd@8-164.92.87.160:22-139.178.89.65:36370.service - OpenSSH per-connection server daemon (139.178.89.65:36370). Apr 30 03:30:08.481165 sshd[3965]: Accepted publickey for core from 139.178.89.65 port 36370 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:08.483102 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:08.488811 systemd-logind[1444]: New session 9 of user core. Apr 30 03:30:08.498093 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:30:08.642888 sshd[3965]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:08.649257 systemd[1]: sshd@8-164.92.87.160:22-139.178.89.65:36370.service: Deactivated successfully. Apr 30 03:30:08.653952 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:30:08.655724 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:30:08.657845 systemd-logind[1444]: Removed session 9. Apr 30 03:30:13.662861 systemd[1]: Started sshd@9-164.92.87.160:22-139.178.89.65:36374.service - OpenSSH per-connection server daemon (139.178.89.65:36374). Apr 30 03:30:13.726534 sshd[3979]: Accepted publickey for core from 139.178.89.65 port 36374 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:13.729205 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:13.737638 systemd-logind[1444]: New session 10 of user core. Apr 30 03:30:13.744302 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:30:13.915389 sshd[3979]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:13.922643 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. 
Apr 30 03:30:13.923032 systemd[1]: sshd@9-164.92.87.160:22-139.178.89.65:36374.service: Deactivated successfully. Apr 30 03:30:13.928235 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:30:13.932136 systemd-logind[1444]: Removed session 10. Apr 30 03:30:18.938434 systemd[1]: Started sshd@10-164.92.87.160:22-139.178.89.65:34416.service - OpenSSH per-connection server daemon (139.178.89.65:34416). Apr 30 03:30:18.993705 sshd[3995]: Accepted publickey for core from 139.178.89.65 port 34416 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:18.994591 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:19.001463 systemd-logind[1444]: New session 11 of user core. Apr 30 03:30:19.008182 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:30:19.142918 sshd[3995]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:19.149103 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:30:19.149451 systemd[1]: sshd@10-164.92.87.160:22-139.178.89.65:34416.service: Deactivated successfully. Apr 30 03:30:19.152301 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:30:19.154217 systemd-logind[1444]: Removed session 11. Apr 30 03:30:24.163441 systemd[1]: Started sshd@11-164.92.87.160:22-139.178.89.65:34422.service - OpenSSH per-connection server daemon (139.178.89.65:34422). Apr 30 03:30:24.214992 sshd[4008]: Accepted publickey for core from 139.178.89.65 port 34422 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:24.215853 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:24.224063 systemd-logind[1444]: New session 12 of user core. Apr 30 03:30:24.230461 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 30 03:30:24.385772 sshd[4008]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:24.393461 systemd[1]: sshd@11-164.92.87.160:22-139.178.89.65:34422.service: Deactivated successfully. Apr 30 03:30:24.396319 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:30:24.397684 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:30:24.399407 systemd-logind[1444]: Removed session 12. Apr 30 03:30:28.855017 kubelet[2565]: E0430 03:30:28.854875 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:29.408954 systemd[1]: Started sshd@12-164.92.87.160:22-139.178.89.65:35284.service - OpenSSH per-connection server daemon (139.178.89.65:35284). Apr 30 03:30:29.457981 sshd[4022]: Accepted publickey for core from 139.178.89.65 port 35284 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:29.460651 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:29.469080 systemd-logind[1444]: New session 13 of user core. Apr 30 03:30:29.481160 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:30:29.639914 sshd[4022]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:29.650399 systemd[1]: sshd@12-164.92.87.160:22-139.178.89.65:35284.service: Deactivated successfully. Apr 30 03:30:29.653965 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:30:29.657022 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:30:29.665252 systemd[1]: Started sshd@13-164.92.87.160:22-139.178.89.65:35300.service - OpenSSH per-connection server daemon (139.178.89.65:35300). Apr 30 03:30:29.667376 systemd-logind[1444]: Removed session 13. 
Apr 30 03:30:29.716613 sshd[4036]: Accepted publickey for core from 139.178.89.65 port 35300 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:29.718512 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:29.724971 systemd-logind[1444]: New session 14 of user core. Apr 30 03:30:29.732130 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:30:29.936457 sshd[4036]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:29.950141 systemd[1]: sshd@13-164.92.87.160:22-139.178.89.65:35300.service: Deactivated successfully. Apr 30 03:30:29.955651 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:30:29.957909 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:30:29.969239 systemd[1]: Started sshd@14-164.92.87.160:22-139.178.89.65:35302.service - OpenSSH per-connection server daemon (139.178.89.65:35302). Apr 30 03:30:29.977028 systemd-logind[1444]: Removed session 14. Apr 30 03:30:30.050452 sshd[4047]: Accepted publickey for core from 139.178.89.65 port 35302 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:30.055158 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:30.066670 systemd-logind[1444]: New session 15 of user core. Apr 30 03:30:30.071101 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:30:30.230922 sshd[4047]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:30.237572 systemd[1]: sshd@14-164.92.87.160:22-139.178.89.65:35302.service: Deactivated successfully. Apr 30 03:30:30.241661 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:30:30.245328 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:30:30.247394 systemd-logind[1444]: Removed session 15. 
Apr 30 03:30:35.256308 systemd[1]: Started sshd@15-164.92.87.160:22-139.178.89.65:35318.service - OpenSSH per-connection server daemon (139.178.89.65:35318). Apr 30 03:30:35.321192 sshd[4062]: Accepted publickey for core from 139.178.89.65 port 35318 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:35.324378 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:35.335170 systemd-logind[1444]: New session 16 of user core. Apr 30 03:30:35.348337 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:30:35.542322 sshd[4062]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:35.548964 systemd[1]: sshd@15-164.92.87.160:22-139.178.89.65:35318.service: Deactivated successfully. Apr 30 03:30:35.554483 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:30:35.556512 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:30:35.560986 systemd-logind[1444]: Removed session 16. Apr 30 03:30:40.564279 systemd[1]: Started sshd@16-164.92.87.160:22-139.178.89.65:53968.service - OpenSSH per-connection server daemon (139.178.89.65:53968). Apr 30 03:30:40.613650 sshd[4075]: Accepted publickey for core from 139.178.89.65 port 53968 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:40.615908 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:40.623238 systemd-logind[1444]: New session 17 of user core. Apr 30 03:30:40.631189 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:30:40.793140 sshd[4075]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:40.799561 systemd[1]: sshd@16-164.92.87.160:22-139.178.89.65:53968.service: Deactivated successfully. Apr 30 03:30:40.804392 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:30:40.806535 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. 
Apr 30 03:30:40.808656 systemd-logind[1444]: Removed session 17. Apr 30 03:30:45.814340 systemd[1]: Started sshd@17-164.92.87.160:22-139.178.89.65:53978.service - OpenSSH per-connection server daemon (139.178.89.65:53978). Apr 30 03:30:45.867896 sshd[4088]: Accepted publickey for core from 139.178.89.65 port 53978 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:45.869362 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:45.875376 systemd-logind[1444]: New session 18 of user core. Apr 30 03:30:45.881086 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:30:46.024541 sshd[4088]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:46.039291 systemd[1]: sshd@17-164.92.87.160:22-139.178.89.65:53978.service: Deactivated successfully. Apr 30 03:30:46.043412 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:30:46.046001 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:30:46.058533 systemd[1]: Started sshd@18-164.92.87.160:22-139.178.89.65:53984.service - OpenSSH per-connection server daemon (139.178.89.65:53984). Apr 30 03:30:46.059715 systemd-logind[1444]: Removed session 18. Apr 30 03:30:46.104424 sshd[4101]: Accepted publickey for core from 139.178.89.65 port 53984 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:46.107590 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:46.114246 systemd-logind[1444]: New session 19 of user core. Apr 30 03:30:46.126076 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:30:46.420117 sshd[4101]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:46.431519 systemd[1]: sshd@18-164.92.87.160:22-139.178.89.65:53984.service: Deactivated successfully. Apr 30 03:30:46.434044 systemd[1]: session-19.scope: Deactivated successfully. 
Apr 30 03:30:46.436025 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:30:46.442071 systemd[1]: Started sshd@19-164.92.87.160:22-139.178.89.65:54000.service - OpenSSH per-connection server daemon (139.178.89.65:54000). Apr 30 03:30:46.444474 systemd-logind[1444]: Removed session 19. Apr 30 03:30:46.523468 sshd[4111]: Accepted publickey for core from 139.178.89.65 port 54000 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:46.525610 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:46.532720 systemd-logind[1444]: New session 20 of user core. Apr 30 03:30:46.542192 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:30:48.643145 sshd[4111]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:48.667488 systemd[1]: Started sshd@20-164.92.87.160:22-139.178.89.65:47876.service - OpenSSH per-connection server daemon (139.178.89.65:47876). Apr 30 03:30:48.668202 systemd[1]: sshd@19-164.92.87.160:22-139.178.89.65:54000.service: Deactivated successfully. Apr 30 03:30:48.675436 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:30:48.681023 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:30:48.686741 systemd-logind[1444]: Removed session 20. Apr 30 03:30:48.741850 sshd[4127]: Accepted publickey for core from 139.178.89.65 port 47876 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:48.743181 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:48.748889 systemd-logind[1444]: New session 21 of user core. Apr 30 03:30:48.757320 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 03:30:49.098426 sshd[4127]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:49.115775 systemd[1]: sshd@20-164.92.87.160:22-139.178.89.65:47876.service: Deactivated successfully. 
Apr 30 03:30:49.121473 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:30:49.125378 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:30:49.132359 systemd[1]: Started sshd@21-164.92.87.160:22-139.178.89.65:47892.service - OpenSSH per-connection server daemon (139.178.89.65:47892). Apr 30 03:30:49.135284 systemd-logind[1444]: Removed session 21. Apr 30 03:30:49.192221 sshd[4140]: Accepted publickey for core from 139.178.89.65 port 47892 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:49.194363 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:49.202176 systemd-logind[1444]: New session 22 of user core. Apr 30 03:30:49.210069 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 03:30:49.368279 sshd[4140]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:49.374840 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Apr 30 03:30:49.375535 systemd[1]: sshd@21-164.92.87.160:22-139.178.89.65:47892.service: Deactivated successfully. Apr 30 03:30:49.378420 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 03:30:49.380303 systemd-logind[1444]: Removed session 22. 
Apr 30 03:30:50.855237 kubelet[2565]: E0430 03:30:50.855107 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:52.854351 kubelet[2565]: E0430 03:30:52.854227 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:53.856114 kubelet[2565]: E0430 03:30:53.855551 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:54.383202 systemd[1]: Started sshd@22-164.92.87.160:22-139.178.89.65:47904.service - OpenSSH per-connection server daemon (139.178.89.65:47904). Apr 30 03:30:54.440610 sshd[4153]: Accepted publickey for core from 139.178.89.65 port 47904 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:54.442647 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:54.449006 systemd-logind[1444]: New session 23 of user core. Apr 30 03:30:54.453063 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 03:30:54.600224 sshd[4153]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:54.606526 systemd[1]: sshd@22-164.92.87.160:22-139.178.89.65:47904.service: Deactivated successfully. Apr 30 03:30:54.611316 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 03:30:54.613039 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Apr 30 03:30:54.614471 systemd-logind[1444]: Removed session 23. 
Apr 30 03:30:57.855826 kubelet[2565]: E0430 03:30:57.855016 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:30:59.621318 systemd[1]: Started sshd@23-164.92.87.160:22-139.178.89.65:53432.service - OpenSSH per-connection server daemon (139.178.89.65:53432). Apr 30 03:30:59.677976 sshd[4170]: Accepted publickey for core from 139.178.89.65 port 53432 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:30:59.680153 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:59.687166 systemd-logind[1444]: New session 24 of user core. Apr 30 03:30:59.695131 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 03:30:59.840323 sshd[4170]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:59.846268 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit. Apr 30 03:30:59.847027 systemd[1]: sshd@23-164.92.87.160:22-139.178.89.65:53432.service: Deactivated successfully. Apr 30 03:30:59.851611 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 03:30:59.855242 systemd-logind[1444]: Removed session 24. Apr 30 03:30:59.858251 kubelet[2565]: E0430 03:30:59.857940 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:03.854865 kubelet[2565]: E0430 03:31:03.854726 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:04.860288 systemd[1]: Started sshd@24-164.92.87.160:22-139.178.89.65:53438.service - OpenSSH per-connection server daemon (139.178.89.65:53438). 
Apr 30 03:31:04.907344 sshd[4185]: Accepted publickey for core from 139.178.89.65 port 53438 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:31:04.908259 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:04.916248 systemd-logind[1444]: New session 25 of user core. Apr 30 03:31:04.926204 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 03:31:05.082224 sshd[4185]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:05.086887 systemd[1]: sshd@24-164.92.87.160:22-139.178.89.65:53438.service: Deactivated successfully. Apr 30 03:31:05.086934 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit. Apr 30 03:31:05.091469 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 03:31:05.094744 systemd-logind[1444]: Removed session 25. Apr 30 03:31:10.101291 systemd[1]: Started sshd@25-164.92.87.160:22-139.178.89.65:56782.service - OpenSSH per-connection server daemon (139.178.89.65:56782). Apr 30 03:31:10.182101 sshd[4198]: Accepted publickey for core from 139.178.89.65 port 56782 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:31:10.184605 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:10.191554 systemd-logind[1444]: New session 26 of user core. Apr 30 03:31:10.198409 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 03:31:10.356237 sshd[4198]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:10.367516 systemd[1]: sshd@25-164.92.87.160:22-139.178.89.65:56782.service: Deactivated successfully. Apr 30 03:31:10.370075 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 03:31:10.373291 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit. Apr 30 03:31:10.383304 systemd[1]: Started sshd@26-164.92.87.160:22-139.178.89.65:56786.service - OpenSSH per-connection server daemon (139.178.89.65:56786). 
Apr 30 03:31:10.384185 systemd-logind[1444]: Removed session 26. Apr 30 03:31:10.450854 sshd[4210]: Accepted publickey for core from 139.178.89.65 port 56786 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:31:10.452374 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:10.458808 systemd-logind[1444]: New session 27 of user core. Apr 30 03:31:10.466083 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 03:31:12.251880 containerd[1459]: time="2025-04-30T03:31:12.250946582Z" level=info msg="StopContainer for \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\" with timeout 30 (s)" Apr 30 03:31:12.254424 systemd[1]: run-containerd-runc-k8s.io-8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad-runc.qfCIwu.mount: Deactivated successfully. Apr 30 03:31:12.260933 containerd[1459]: time="2025-04-30T03:31:12.260226122Z" level=info msg="Stop container \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\" with signal terminated" Apr 30 03:31:12.273411 containerd[1459]: time="2025-04-30T03:31:12.273009826Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:31:12.283682 containerd[1459]: time="2025-04-30T03:31:12.283632693Z" level=info msg="StopContainer for \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\" with timeout 2 (s)" Apr 30 03:31:12.284098 containerd[1459]: time="2025-04-30T03:31:12.284073297Z" level=info msg="Stop container \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\" with signal terminated" Apr 30 03:31:12.288318 systemd[1]: cri-containerd-dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755.scope: Deactivated successfully. 
Apr 30 03:31:12.295622 systemd-networkd[1368]: lxc_health: Link DOWN Apr 30 03:31:12.295632 systemd-networkd[1368]: lxc_health: Lost carrier Apr 30 03:31:12.318671 systemd[1]: cri-containerd-8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad.scope: Deactivated successfully. Apr 30 03:31:12.319393 systemd[1]: cri-containerd-8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad.scope: Consumed 9.303s CPU time. Apr 30 03:31:12.348072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755-rootfs.mount: Deactivated successfully. Apr 30 03:31:12.362524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad-rootfs.mount: Deactivated successfully. Apr 30 03:31:12.363770 containerd[1459]: time="2025-04-30T03:31:12.363532298Z" level=info msg="shim disconnected" id=dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755 namespace=k8s.io Apr 30 03:31:12.364325 containerd[1459]: time="2025-04-30T03:31:12.364134485Z" level=warning msg="cleaning up after shim disconnected" id=dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755 namespace=k8s.io Apr 30 03:31:12.364325 containerd[1459]: time="2025-04-30T03:31:12.364160193Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:31:12.370486 containerd[1459]: time="2025-04-30T03:31:12.370220620Z" level=info msg="shim disconnected" id=8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad namespace=k8s.io Apr 30 03:31:12.370486 containerd[1459]: time="2025-04-30T03:31:12.370296641Z" level=warning msg="cleaning up after shim disconnected" id=8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad namespace=k8s.io Apr 30 03:31:12.370486 containerd[1459]: time="2025-04-30T03:31:12.370306392Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:31:12.394047 containerd[1459]: 
time="2025-04-30T03:31:12.393879904Z" level=info msg="StopContainer for \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\" returns successfully" Apr 30 03:31:12.396299 containerd[1459]: time="2025-04-30T03:31:12.395726486Z" level=info msg="StopPodSandbox for \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\"" Apr 30 03:31:12.396299 containerd[1459]: time="2025-04-30T03:31:12.395794132Z" level=info msg="Container to stop \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:31:12.398481 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd-shm.mount: Deactivated successfully. Apr 30 03:31:12.415529 containerd[1459]: time="2025-04-30T03:31:12.415466620Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:31:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:31:12.426518 systemd[1]: cri-containerd-a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd.scope: Deactivated successfully. 
Apr 30 03:31:12.429607 containerd[1459]: time="2025-04-30T03:31:12.429185369Z" level=info msg="StopContainer for \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\" returns successfully" Apr 30 03:31:12.430966 containerd[1459]: time="2025-04-30T03:31:12.430603381Z" level=info msg="StopPodSandbox for \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\"" Apr 30 03:31:12.430966 containerd[1459]: time="2025-04-30T03:31:12.430658477Z" level=info msg="Container to stop \"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:31:12.430966 containerd[1459]: time="2025-04-30T03:31:12.430670462Z" level=info msg="Container to stop \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:31:12.430966 containerd[1459]: time="2025-04-30T03:31:12.430680267Z" level=info msg="Container to stop \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:31:12.430966 containerd[1459]: time="2025-04-30T03:31:12.430689558Z" level=info msg="Container to stop \"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:31:12.430966 containerd[1459]: time="2025-04-30T03:31:12.430698689Z" level=info msg="Container to stop \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:31:12.441604 systemd[1]: cri-containerd-f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b.scope: Deactivated successfully. 
Apr 30 03:31:12.493503 containerd[1459]: time="2025-04-30T03:31:12.493244180Z" level=info msg="shim disconnected" id=f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b namespace=k8s.io Apr 30 03:31:12.493503 containerd[1459]: time="2025-04-30T03:31:12.493306949Z" level=warning msg="cleaning up after shim disconnected" id=f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b namespace=k8s.io Apr 30 03:31:12.493503 containerd[1459]: time="2025-04-30T03:31:12.493320862Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:31:12.493503 containerd[1459]: time="2025-04-30T03:31:12.493359235Z" level=info msg="shim disconnected" id=a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd namespace=k8s.io Apr 30 03:31:12.493906 containerd[1459]: time="2025-04-30T03:31:12.493473175Z" level=warning msg="cleaning up after shim disconnected" id=a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd namespace=k8s.io Apr 30 03:31:12.493906 containerd[1459]: time="2025-04-30T03:31:12.493624697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:31:12.525477 containerd[1459]: time="2025-04-30T03:31:12.524228602Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:31:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:31:12.534506 containerd[1459]: time="2025-04-30T03:31:12.533659971Z" level=info msg="TearDown network for sandbox \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\" successfully" Apr 30 03:31:12.534506 containerd[1459]: time="2025-04-30T03:31:12.533739468Z" level=info msg="StopPodSandbox for \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\" returns successfully" Apr 30 03:31:12.534818 containerd[1459]: time="2025-04-30T03:31:12.534741112Z" level=info msg="TearDown network for sandbox 
\"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" successfully" Apr 30 03:31:12.534891 containerd[1459]: time="2025-04-30T03:31:12.534878249Z" level=info msg="StopPodSandbox for \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" returns successfully" Apr 30 03:31:12.653996 kubelet[2565]: I0430 03:31:12.653899 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c8e27c5-1eba-488e-97d5-3b54b80364e2-cilium-config-path\") pod \"4c8e27c5-1eba-488e-97d5-3b54b80364e2\" (UID: \"4c8e27c5-1eba-488e-97d5-3b54b80364e2\") " Apr 30 03:31:12.654744 kubelet[2565]: I0430 03:31:12.654028 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-lib-modules\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.654744 kubelet[2565]: I0430 03:31:12.654063 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cni-path\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.654744 kubelet[2565]: I0430 03:31:12.654094 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76e9132f-d854-4a4d-ab40-398170125691-hubble-tls\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.654744 kubelet[2565]: I0430 03:31:12.654120 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-host-proc-sys-net\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: 
\"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.654744 kubelet[2565]: I0430 03:31:12.654143 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4pck\" (UniqueName: \"kubernetes.io/projected/76e9132f-d854-4a4d-ab40-398170125691-kube-api-access-q4pck\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.654744 kubelet[2565]: I0430 03:31:12.654161 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76e9132f-d854-4a4d-ab40-398170125691-clustermesh-secrets\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.655402 kubelet[2565]: I0430 03:31:12.654185 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cilium-run\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.655402 kubelet[2565]: I0430 03:31:12.654215 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76e9132f-d854-4a4d-ab40-398170125691-cilium-config-path\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.655402 kubelet[2565]: I0430 03:31:12.654238 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-bpf-maps\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.655402 kubelet[2565]: I0430 03:31:12.654258 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-587j7\" (UniqueName: 
\"kubernetes.io/projected/4c8e27c5-1eba-488e-97d5-3b54b80364e2-kube-api-access-587j7\") pod \"4c8e27c5-1eba-488e-97d5-3b54b80364e2\" (UID: \"4c8e27c5-1eba-488e-97d5-3b54b80364e2\") " Apr 30 03:31:12.655402 kubelet[2565]: I0430 03:31:12.654281 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cilium-cgroup\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.655402 kubelet[2565]: I0430 03:31:12.654305 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-hostproc\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.655642 kubelet[2565]: I0430 03:31:12.654325 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-host-proc-sys-kernel\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.655642 kubelet[2565]: I0430 03:31:12.654339 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-xtables-lock\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.655642 kubelet[2565]: I0430 03:31:12.654353 2565 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-etc-cni-netd\") pod \"76e9132f-d854-4a4d-ab40-398170125691\" (UID: \"76e9132f-d854-4a4d-ab40-398170125691\") " Apr 30 03:31:12.666620 kubelet[2565]: I0430 03:31:12.665495 2565 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:12.666620 kubelet[2565]: I0430 03:31:12.664031 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:12.671540 kubelet[2565]: I0430 03:31:12.671466 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8e27c5-1eba-488e-97d5-3b54b80364e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4c8e27c5-1eba-488e-97d5-3b54b80364e2" (UID: "4c8e27c5-1eba-488e-97d5-3b54b80364e2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:31:12.671832 kubelet[2565]: I0430 03:31:12.671810 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:12.671936 kubelet[2565]: I0430 03:31:12.671922 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cni-path" (OuterVolumeSpecName: "cni-path") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:12.672571 kubelet[2565]: I0430 03:31:12.672514 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e9132f-d854-4a4d-ab40-398170125691-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:31:12.672571 kubelet[2565]: I0430 03:31:12.672583 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:12.684993 kubelet[2565]: I0430 03:31:12.684931 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8e27c5-1eba-488e-97d5-3b54b80364e2-kube-api-access-587j7" (OuterVolumeSpecName: "kube-api-access-587j7") pod "4c8e27c5-1eba-488e-97d5-3b54b80364e2" (UID: "4c8e27c5-1eba-488e-97d5-3b54b80364e2"). InnerVolumeSpecName "kube-api-access-587j7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:31:12.685195 kubelet[2565]: I0430 03:31:12.685013 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e9132f-d854-4a4d-ab40-398170125691-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:31:12.685195 kubelet[2565]: I0430 03:31:12.685048 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:12.685195 kubelet[2565]: I0430 03:31:12.685097 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-hostproc" (OuterVolumeSpecName: "hostproc") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:12.685195 kubelet[2565]: I0430 03:31:12.685123 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:12.685195 kubelet[2565]: I0430 03:31:12.685146 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:12.685697 kubelet[2565]: I0430 03:31:12.684951 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:31:12.688456 kubelet[2565]: I0430 03:31:12.688371 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e9132f-d854-4a4d-ab40-398170125691-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:31:12.688731 kubelet[2565]: I0430 03:31:12.688691 2565 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e9132f-d854-4a4d-ab40-398170125691-kube-api-access-q4pck" (OuterVolumeSpecName: "kube-api-access-q4pck") pod "76e9132f-d854-4a4d-ab40-398170125691" (UID: "76e9132f-d854-4a4d-ab40-398170125691"). InnerVolumeSpecName "kube-api-access-q4pck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:31:12.755471 kubelet[2565]: I0430 03:31:12.755223 2565 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cilium-run\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.755471 kubelet[2565]: I0430 03:31:12.755281 2565 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76e9132f-d854-4a4d-ab40-398170125691-cilium-config-path\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.755471 kubelet[2565]: I0430 03:31:12.755293 2565 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-bpf-maps\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.755471 kubelet[2565]: I0430 03:31:12.755305 2565 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-587j7\" (UniqueName: \"kubernetes.io/projected/4c8e27c5-1eba-488e-97d5-3b54b80364e2-kube-api-access-587j7\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.755471 kubelet[2565]: I0430 03:31:12.755321 2565 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cilium-cgroup\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.755471 kubelet[2565]: I0430 03:31:12.755335 2565 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-hostproc\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.755471 kubelet[2565]: I0430 03:31:12.755355 2565 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-host-proc-sys-kernel\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.755471 kubelet[2565]: I0430 03:31:12.755364 2565 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-xtables-lock\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.756034 kubelet[2565]: I0430 03:31:12.755375 2565 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-etc-cni-netd\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.756034 kubelet[2565]: I0430 03:31:12.755384 2565 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c8e27c5-1eba-488e-97d5-3b54b80364e2-cilium-config-path\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.756034 kubelet[2565]: I0430 03:31:12.755392 2565 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-lib-modules\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.756034 kubelet[2565]: I0430 03:31:12.755403 2565 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-cni-path\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.756034 kubelet[2565]: I0430 03:31:12.755412 2565 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76e9132f-d854-4a4d-ab40-398170125691-hubble-tls\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.756034 kubelet[2565]: I0430 03:31:12.755421 2565 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/76e9132f-d854-4a4d-ab40-398170125691-host-proc-sys-net\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.756034 kubelet[2565]: I0430 03:31:12.755430 2565 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q4pck\" (UniqueName: \"kubernetes.io/projected/76e9132f-d854-4a4d-ab40-398170125691-kube-api-access-q4pck\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.756034 kubelet[2565]: I0430 03:31:12.755438 2565 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76e9132f-d854-4a4d-ab40-398170125691-clustermesh-secrets\") on node \"ci-4081.3.3-a-32b52f0300\" DevicePath \"\"" Apr 30 03:31:12.973236 kubelet[2565]: E0430 03:31:12.966288 2565 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 03:31:13.246949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b-rootfs.mount: Deactivated successfully. Apr 30 03:31:13.247091 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b-shm.mount: Deactivated successfully. Apr 30 03:31:13.247204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd-rootfs.mount: Deactivated successfully. Apr 30 03:31:13.247324 systemd[1]: var-lib-kubelet-pods-4c8e27c5\x2d1eba\x2d488e\x2d97d5\x2d3b54b80364e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d587j7.mount: Deactivated successfully. Apr 30 03:31:13.247419 systemd[1]: var-lib-kubelet-pods-76e9132f\x2dd854\x2d4a4d\x2dab40\x2d398170125691-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq4pck.mount: Deactivated successfully. 
Apr 30 03:31:13.247531 systemd[1]: var-lib-kubelet-pods-76e9132f\x2dd854\x2d4a4d\x2dab40\x2d398170125691-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 03:31:13.247615 systemd[1]: var-lib-kubelet-pods-76e9132f\x2dd854\x2d4a4d\x2dab40\x2d398170125691-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 03:31:13.394365 kubelet[2565]: I0430 03:31:13.393110 2565 scope.go:117] "RemoveContainer" containerID="dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755" Apr 30 03:31:13.396968 systemd[1]: Removed slice kubepods-besteffort-pod4c8e27c5_1eba_488e_97d5_3b54b80364e2.slice - libcontainer container kubepods-besteffort-pod4c8e27c5_1eba_488e_97d5_3b54b80364e2.slice. Apr 30 03:31:13.398165 containerd[1459]: time="2025-04-30T03:31:13.396848333Z" level=info msg="RemoveContainer for \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\"" Apr 30 03:31:13.414249 containerd[1459]: time="2025-04-30T03:31:13.413711058Z" level=info msg="RemoveContainer for \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\" returns successfully" Apr 30 03:31:13.414814 kubelet[2565]: I0430 03:31:13.414747 2565 scope.go:117] "RemoveContainer" containerID="dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755" Apr 30 03:31:13.424649 systemd[1]: Removed slice kubepods-burstable-pod76e9132f_d854_4a4d_ab40_398170125691.slice - libcontainer container kubepods-burstable-pod76e9132f_d854_4a4d_ab40_398170125691.slice. Apr 30 03:31:13.424828 systemd[1]: kubepods-burstable-pod76e9132f_d854_4a4d_ab40_398170125691.slice: Consumed 9.412s CPU time. 
Apr 30 03:31:13.429246 containerd[1459]: time="2025-04-30T03:31:13.417865611Z" level=error msg="ContainerStatus for \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\": not found" Apr 30 03:31:13.453870 kubelet[2565]: E0430 03:31:13.453807 2565 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\": not found" containerID="dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755" Apr 30 03:31:13.454151 kubelet[2565]: I0430 03:31:13.453887 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755"} err="failed to get container status \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\": rpc error: code = NotFound desc = an error occurred when try to find container \"dec8a2339334bf412a8ab4a44d37b2674a18e83a508657bc1e701ef068905755\": not found" Apr 30 03:31:13.454151 kubelet[2565]: I0430 03:31:13.453990 2565 scope.go:117] "RemoveContainer" containerID="8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad" Apr 30 03:31:13.458183 containerd[1459]: time="2025-04-30T03:31:13.458128409Z" level=info msg="RemoveContainer for \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\"" Apr 30 03:31:13.463987 containerd[1459]: time="2025-04-30T03:31:13.463872588Z" level=info msg="RemoveContainer for \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\" returns successfully" Apr 30 03:31:13.464391 kubelet[2565]: I0430 03:31:13.464312 2565 scope.go:117] "RemoveContainer" containerID="f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959" Apr 30 
03:31:13.465922 containerd[1459]: time="2025-04-30T03:31:13.465879977Z" level=info msg="RemoveContainer for \"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\"" Apr 30 03:31:13.469068 containerd[1459]: time="2025-04-30T03:31:13.469010103Z" level=info msg="RemoveContainer for \"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\" returns successfully" Apr 30 03:31:13.469455 kubelet[2565]: I0430 03:31:13.469303 2565 scope.go:117] "RemoveContainer" containerID="dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7" Apr 30 03:31:13.471477 containerd[1459]: time="2025-04-30T03:31:13.471434771Z" level=info msg="RemoveContainer for \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\"" Apr 30 03:31:13.474440 containerd[1459]: time="2025-04-30T03:31:13.474396601Z" level=info msg="RemoveContainer for \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\" returns successfully" Apr 30 03:31:13.475248 kubelet[2565]: I0430 03:31:13.474875 2565 scope.go:117] "RemoveContainer" containerID="773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed" Apr 30 03:31:13.476829 containerd[1459]: time="2025-04-30T03:31:13.476739080Z" level=info msg="RemoveContainer for \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\"" Apr 30 03:31:13.480534 containerd[1459]: time="2025-04-30T03:31:13.480452128Z" level=info msg="RemoveContainer for \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\" returns successfully" Apr 30 03:31:13.480853 kubelet[2565]: I0430 03:31:13.480818 2565 scope.go:117] "RemoveContainer" containerID="bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac" Apr 30 03:31:13.482416 containerd[1459]: time="2025-04-30T03:31:13.482327819Z" level=info msg="RemoveContainer for \"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\"" Apr 30 03:31:13.490862 containerd[1459]: time="2025-04-30T03:31:13.490796415Z" level=info msg="RemoveContainer for 
\"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\" returns successfully" Apr 30 03:31:13.491396 kubelet[2565]: I0430 03:31:13.491330 2565 scope.go:117] "RemoveContainer" containerID="8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad" Apr 30 03:31:13.492281 containerd[1459]: time="2025-04-30T03:31:13.491816106Z" level=error msg="ContainerStatus for \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\": not found" Apr 30 03:31:13.492395 kubelet[2565]: E0430 03:31:13.492084 2565 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\": not found" containerID="8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad" Apr 30 03:31:13.492395 kubelet[2565]: I0430 03:31:13.492127 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad"} err="failed to get container status \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ace91d3c088b2e5c1915c498c543491e93f97e5574f140d5bda71e9f2c20dad\": not found" Apr 30 03:31:13.492395 kubelet[2565]: I0430 03:31:13.492159 2565 scope.go:117] "RemoveContainer" containerID="f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959" Apr 30 03:31:13.492579 containerd[1459]: time="2025-04-30T03:31:13.492495103Z" level=error msg="ContainerStatus for \"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\": not found" Apr 30 03:31:13.492802 kubelet[2565]: E0430 03:31:13.492718 2565 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\": not found" containerID="f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959" Apr 30 03:31:13.492854 kubelet[2565]: I0430 03:31:13.492824 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959"} err="failed to get container status \"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\": rpc error: code = NotFound desc = an error occurred when try to find container \"f12530a9b1b7e21ecbd5e0e38bfd0f0ee8a198da5c15f162a7eaee48ef988959\": not found" Apr 30 03:31:13.492894 kubelet[2565]: I0430 03:31:13.492855 2565 scope.go:117] "RemoveContainer" containerID="dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7" Apr 30 03:31:13.493391 containerd[1459]: time="2025-04-30T03:31:13.493340518Z" level=error msg="ContainerStatus for \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\": not found" Apr 30 03:31:13.493548 kubelet[2565]: E0430 03:31:13.493520 2565 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\": not found" containerID="dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7" Apr 30 03:31:13.493601 kubelet[2565]: I0430 03:31:13.493556 2565 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7"} err="failed to get container status \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"dadd87215727932a4ae3e3bc61a04a957df348748871daec28a9d27223aff5e7\": not found" Apr 30 03:31:13.493601 kubelet[2565]: I0430 03:31:13.493582 2565 scope.go:117] "RemoveContainer" containerID="773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed" Apr 30 03:31:13.493983 containerd[1459]: time="2025-04-30T03:31:13.493937426Z" level=error msg="ContainerStatus for \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\": not found" Apr 30 03:31:13.494220 kubelet[2565]: E0430 03:31:13.494191 2565 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\": not found" containerID="773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed" Apr 30 03:31:13.494254 kubelet[2565]: I0430 03:31:13.494231 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed"} err="failed to get container status \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"773d47269383e075ef4c564a2ab2f9f19db145ef28b08d5760724ab0a19966ed\": not found" Apr 30 03:31:13.494300 kubelet[2565]: I0430 03:31:13.494257 2565 scope.go:117] "RemoveContainer" containerID="bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac" Apr 30 03:31:13.494599 
containerd[1459]: time="2025-04-30T03:31:13.494512689Z" level=error msg="ContainerStatus for \"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\": not found" Apr 30 03:31:13.494693 kubelet[2565]: E0430 03:31:13.494664 2565 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\": not found" containerID="bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac" Apr 30 03:31:13.494744 kubelet[2565]: I0430 03:31:13.494699 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac"} err="failed to get container status \"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb953672a16edac3a452bf85bf461afd1879aed9dbbad72dcfacc6b0b3c4eaac\": not found" Apr 30 03:31:13.857075 kubelet[2565]: I0430 03:31:13.857015 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8e27c5-1eba-488e-97d5-3b54b80364e2" path="/var/lib/kubelet/pods/4c8e27c5-1eba-488e-97d5-3b54b80364e2/volumes" Apr 30 03:31:13.857949 kubelet[2565]: I0430 03:31:13.857768 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e9132f-d854-4a4d-ab40-398170125691" path="/var/lib/kubelet/pods/76e9132f-d854-4a4d-ab40-398170125691/volumes" Apr 30 03:31:14.144506 sshd[4210]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:14.158237 systemd[1]: sshd@26-164.92.87.160:22-139.178.89.65:56786.service: Deactivated successfully. Apr 30 03:31:14.160633 systemd[1]: session-27.scope: Deactivated successfully. 
Apr 30 03:31:14.161029 systemd[1]: session-27.scope: Consumed 1.036s CPU time. Apr 30 03:31:14.163591 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit. Apr 30 03:31:14.170731 systemd[1]: Started sshd@27-164.92.87.160:22-139.178.89.65:56790.service - OpenSSH per-connection server daemon (139.178.89.65:56790). Apr 30 03:31:14.173178 systemd-logind[1444]: Removed session 27. Apr 30 03:31:14.237575 sshd[4369]: Accepted publickey for core from 139.178.89.65 port 56790 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:31:14.245237 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:14.256517 systemd-logind[1444]: New session 28 of user core. Apr 30 03:31:14.263186 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 03:31:14.854209 kubelet[2565]: E0430 03:31:14.854124 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-99zjh" podUID="c0081508-c8e3-490a-b57b-b113cee0b31d" Apr 30 03:31:15.074518 sshd[4369]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:15.088652 systemd[1]: sshd@27-164.92.87.160:22-139.178.89.65:56790.service: Deactivated successfully. Apr 30 03:31:15.091146 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 03:31:15.093866 systemd-logind[1444]: Session 28 logged out. Waiting for processes to exit. Apr 30 03:31:15.098619 systemd[1]: Started sshd@28-164.92.87.160:22-139.178.89.65:56794.service - OpenSSH per-connection server daemon (139.178.89.65:56794). Apr 30 03:31:15.103535 systemd-logind[1444]: Removed session 28. 
Apr 30 03:31:15.134242 kubelet[2565]: I0430 03:31:15.133911 2565 topology_manager.go:215] "Topology Admit Handler" podUID="21b8f901-b100-4f3c-9da2-4da07ea910a6" podNamespace="kube-system" podName="cilium-m6mmq" Apr 30 03:31:15.134242 kubelet[2565]: E0430 03:31:15.134011 2565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76e9132f-d854-4a4d-ab40-398170125691" containerName="apply-sysctl-overwrites" Apr 30 03:31:15.134242 kubelet[2565]: E0430 03:31:15.134035 2565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76e9132f-d854-4a4d-ab40-398170125691" containerName="mount-bpf-fs" Apr 30 03:31:15.134242 kubelet[2565]: E0430 03:31:15.134045 2565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76e9132f-d854-4a4d-ab40-398170125691" containerName="clean-cilium-state" Apr 30 03:31:15.134242 kubelet[2565]: E0430 03:31:15.134051 2565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76e9132f-d854-4a4d-ab40-398170125691" containerName="cilium-agent" Apr 30 03:31:15.134242 kubelet[2565]: E0430 03:31:15.134059 2565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c8e27c5-1eba-488e-97d5-3b54b80364e2" containerName="cilium-operator" Apr 30 03:31:15.134242 kubelet[2565]: E0430 03:31:15.134067 2565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76e9132f-d854-4a4d-ab40-398170125691" containerName="mount-cgroup" Apr 30 03:31:15.142818 kubelet[2565]: I0430 03:31:15.134090 2565 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8e27c5-1eba-488e-97d5-3b54b80364e2" containerName="cilium-operator" Apr 30 03:31:15.144503 kubelet[2565]: I0430 03:31:15.144234 2565 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e9132f-d854-4a4d-ab40-398170125691" containerName="cilium-agent" Apr 30 03:31:15.179827 sshd[4380]: Accepted publickey for core from 139.178.89.65 port 56794 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:31:15.181206 
sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:15.198955 systemd-logind[1444]: New session 29 of user core. Apr 30 03:31:15.209025 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 30 03:31:15.222609 systemd[1]: Created slice kubepods-burstable-pod21b8f901_b100_4f3c_9da2_4da07ea910a6.slice - libcontainer container kubepods-burstable-pod21b8f901_b100_4f3c_9da2_4da07ea910a6.slice. Apr 30 03:31:15.279580 kubelet[2565]: I0430 03:31:15.279511 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21b8f901-b100-4f3c-9da2-4da07ea910a6-etc-cni-netd\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.279580 kubelet[2565]: I0430 03:31:15.279579 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21b8f901-b100-4f3c-9da2-4da07ea910a6-xtables-lock\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.279890 kubelet[2565]: I0430 03:31:15.279618 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21b8f901-b100-4f3c-9da2-4da07ea910a6-bpf-maps\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.279890 kubelet[2565]: I0430 03:31:15.279646 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21b8f901-b100-4f3c-9da2-4da07ea910a6-cilium-config-path\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.279890 kubelet[2565]: I0430 
03:31:15.279680 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21b8f901-b100-4f3c-9da2-4da07ea910a6-cilium-run\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.279890 kubelet[2565]: I0430 03:31:15.279705 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21b8f901-b100-4f3c-9da2-4da07ea910a6-hostproc\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.279890 kubelet[2565]: I0430 03:31:15.279731 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21b8f901-b100-4f3c-9da2-4da07ea910a6-cilium-cgroup\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.279890 kubelet[2565]: I0430 03:31:15.279761 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21b8f901-b100-4f3c-9da2-4da07ea910a6-clustermesh-secrets\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.280080 kubelet[2565]: I0430 03:31:15.279810 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/21b8f901-b100-4f3c-9da2-4da07ea910a6-cilium-ipsec-secrets\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.280080 kubelet[2565]: I0430 03:31:15.279841 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gpld9\" (UniqueName: \"kubernetes.io/projected/21b8f901-b100-4f3c-9da2-4da07ea910a6-kube-api-access-gpld9\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.280080 kubelet[2565]: I0430 03:31:15.279873 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21b8f901-b100-4f3c-9da2-4da07ea910a6-host-proc-sys-net\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.280080 kubelet[2565]: I0430 03:31:15.279900 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21b8f901-b100-4f3c-9da2-4da07ea910a6-host-proc-sys-kernel\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.280080 kubelet[2565]: I0430 03:31:15.279961 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21b8f901-b100-4f3c-9da2-4da07ea910a6-cni-path\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.280224 kubelet[2565]: I0430 03:31:15.279992 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21b8f901-b100-4f3c-9da2-4da07ea910a6-lib-modules\") pod \"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.280224 kubelet[2565]: I0430 03:31:15.280018 2565 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21b8f901-b100-4f3c-9da2-4da07ea910a6-hubble-tls\") pod 
\"cilium-m6mmq\" (UID: \"21b8f901-b100-4f3c-9da2-4da07ea910a6\") " pod="kube-system/cilium-m6mmq" Apr 30 03:31:15.283866 sshd[4380]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:15.292334 systemd[1]: sshd@28-164.92.87.160:22-139.178.89.65:56794.service: Deactivated successfully. Apr 30 03:31:15.294771 systemd[1]: session-29.scope: Deactivated successfully. Apr 30 03:31:15.297398 systemd-logind[1444]: Session 29 logged out. Waiting for processes to exit. Apr 30 03:31:15.302413 systemd[1]: Started sshd@29-164.92.87.160:22-139.178.89.65:56796.service - OpenSSH per-connection server daemon (139.178.89.65:56796). Apr 30 03:31:15.304977 systemd-logind[1444]: Removed session 29. Apr 30 03:31:15.356831 sshd[4388]: Accepted publickey for core from 139.178.89.65 port 56796 ssh2: RSA SHA256:wGcKg1aesFw1D1AVs13gmVPWFYZ++Dswwfe11kkcINY Apr 30 03:31:15.358676 sshd[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:31:15.371117 systemd-logind[1444]: New session 30 of user core. Apr 30 03:31:15.380168 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 30 03:31:15.532028 kubelet[2565]: E0430 03:31:15.531570 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:15.532350 containerd[1459]: time="2025-04-30T03:31:15.532295933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6mmq,Uid:21b8f901-b100-4f3c-9da2-4da07ea910a6,Namespace:kube-system,Attempt:0,}" Apr 30 03:31:15.571902 containerd[1459]: time="2025-04-30T03:31:15.571695236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:15.572227 containerd[1459]: time="2025-04-30T03:31:15.571767337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:15.572227 containerd[1459]: time="2025-04-30T03:31:15.571892355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:15.572399 containerd[1459]: time="2025-04-30T03:31:15.572140473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:15.605164 systemd[1]: Started cri-containerd-b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe.scope - libcontainer container b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe. Apr 30 03:31:15.636915 containerd[1459]: time="2025-04-30T03:31:15.636753413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6mmq,Uid:21b8f901-b100-4f3c-9da2-4da07ea910a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\"" Apr 30 03:31:15.638007 kubelet[2565]: E0430 03:31:15.637877 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:15.644695 containerd[1459]: time="2025-04-30T03:31:15.644253093Z" level=info msg="CreateContainer within sandbox \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 03:31:15.661510 containerd[1459]: time="2025-04-30T03:31:15.661436649Z" level=info msg="CreateContainer within sandbox \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de2103909201f0d559a5135cb10c366ad3768c5cfa5406f586dd7d74c1454850\"" Apr 30 03:31:15.663056 containerd[1459]: time="2025-04-30T03:31:15.662557540Z" level=info msg="StartContainer for 
\"de2103909201f0d559a5135cb10c366ad3768c5cfa5406f586dd7d74c1454850\"" Apr 30 03:31:15.695102 systemd[1]: Started cri-containerd-de2103909201f0d559a5135cb10c366ad3768c5cfa5406f586dd7d74c1454850.scope - libcontainer container de2103909201f0d559a5135cb10c366ad3768c5cfa5406f586dd7d74c1454850. Apr 30 03:31:15.732093 containerd[1459]: time="2025-04-30T03:31:15.731817892Z" level=info msg="StartContainer for \"de2103909201f0d559a5135cb10c366ad3768c5cfa5406f586dd7d74c1454850\" returns successfully" Apr 30 03:31:15.741319 systemd[1]: cri-containerd-de2103909201f0d559a5135cb10c366ad3768c5cfa5406f586dd7d74c1454850.scope: Deactivated successfully. Apr 30 03:31:15.780212 containerd[1459]: time="2025-04-30T03:31:15.780124815Z" level=info msg="shim disconnected" id=de2103909201f0d559a5135cb10c366ad3768c5cfa5406f586dd7d74c1454850 namespace=k8s.io Apr 30 03:31:15.780212 containerd[1459]: time="2025-04-30T03:31:15.780188616Z" level=warning msg="cleaning up after shim disconnected" id=de2103909201f0d559a5135cb10c366ad3768c5cfa5406f586dd7d74c1454850 namespace=k8s.io Apr 30 03:31:15.780212 containerd[1459]: time="2025-04-30T03:31:15.780197936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:31:16.431426 kubelet[2565]: E0430 03:31:16.431377 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:16.435570 containerd[1459]: time="2025-04-30T03:31:16.435524704Z" level=info msg="CreateContainer within sandbox \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 03:31:16.462341 containerd[1459]: time="2025-04-30T03:31:16.462284084Z" level=info msg="CreateContainer within sandbox \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"5d245b32b12e10d05903bec8936f105d3193e9fb2d4066c0c288dbe5c21aa72c\"" Apr 30 03:31:16.463816 containerd[1459]: time="2025-04-30T03:31:16.463427101Z" level=info msg="StartContainer for \"5d245b32b12e10d05903bec8936f105d3193e9fb2d4066c0c288dbe5c21aa72c\"" Apr 30 03:31:16.524507 systemd[1]: Started cri-containerd-5d245b32b12e10d05903bec8936f105d3193e9fb2d4066c0c288dbe5c21aa72c.scope - libcontainer container 5d245b32b12e10d05903bec8936f105d3193e9fb2d4066c0c288dbe5c21aa72c. Apr 30 03:31:16.561766 containerd[1459]: time="2025-04-30T03:31:16.561704800Z" level=info msg="StartContainer for \"5d245b32b12e10d05903bec8936f105d3193e9fb2d4066c0c288dbe5c21aa72c\" returns successfully" Apr 30 03:31:16.569461 systemd[1]: cri-containerd-5d245b32b12e10d05903bec8936f105d3193e9fb2d4066c0c288dbe5c21aa72c.scope: Deactivated successfully. Apr 30 03:31:16.599484 containerd[1459]: time="2025-04-30T03:31:16.599279033Z" level=info msg="shim disconnected" id=5d245b32b12e10d05903bec8936f105d3193e9fb2d4066c0c288dbe5c21aa72c namespace=k8s.io Apr 30 03:31:16.599484 containerd[1459]: time="2025-04-30T03:31:16.599398622Z" level=warning msg="cleaning up after shim disconnected" id=5d245b32b12e10d05903bec8936f105d3193e9fb2d4066c0c288dbe5c21aa72c namespace=k8s.io Apr 30 03:31:16.599484 containerd[1459]: time="2025-04-30T03:31:16.599415522Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:31:16.616834 containerd[1459]: time="2025-04-30T03:31:16.616381325Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:31:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:31:16.855579 kubelet[2565]: E0430 03:31:16.853685 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="kube-system/coredns-7db6d8ff4d-99zjh" podUID="c0081508-c8e3-490a-b57b-b113cee0b31d" Apr 30 03:31:17.388393 systemd[1]: run-containerd-runc-k8s.io-5d245b32b12e10d05903bec8936f105d3193e9fb2d4066c0c288dbe5c21aa72c-runc.82oQlc.mount: Deactivated successfully. Apr 30 03:31:17.388532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d245b32b12e10d05903bec8936f105d3193e9fb2d4066c0c288dbe5c21aa72c-rootfs.mount: Deactivated successfully. Apr 30 03:31:17.437955 kubelet[2565]: E0430 03:31:17.437849 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:17.443723 containerd[1459]: time="2025-04-30T03:31:17.443291065Z" level=info msg="CreateContainer within sandbox \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 03:31:17.475606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089614499.mount: Deactivated successfully. Apr 30 03:31:17.479449 containerd[1459]: time="2025-04-30T03:31:17.479384211Z" level=info msg="CreateContainer within sandbox \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b4e74d2688b1cba44237e84e105480d270b1827d09f6ee6c9c6d01f3e9a45aa5\"" Apr 30 03:31:17.480553 containerd[1459]: time="2025-04-30T03:31:17.480477602Z" level=info msg="StartContainer for \"b4e74d2688b1cba44237e84e105480d270b1827d09f6ee6c9c6d01f3e9a45aa5\"" Apr 30 03:31:17.538184 systemd[1]: Started cri-containerd-b4e74d2688b1cba44237e84e105480d270b1827d09f6ee6c9c6d01f3e9a45aa5.scope - libcontainer container b4e74d2688b1cba44237e84e105480d270b1827d09f6ee6c9c6d01f3e9a45aa5. 
Apr 30 03:31:17.595370 containerd[1459]: time="2025-04-30T03:31:17.592320817Z" level=info msg="StartContainer for \"b4e74d2688b1cba44237e84e105480d270b1827d09f6ee6c9c6d01f3e9a45aa5\" returns successfully" Apr 30 03:31:17.603510 systemd[1]: cri-containerd-b4e74d2688b1cba44237e84e105480d270b1827d09f6ee6c9c6d01f3e9a45aa5.scope: Deactivated successfully. Apr 30 03:31:17.652174 containerd[1459]: time="2025-04-30T03:31:17.651636254Z" level=info msg="shim disconnected" id=b4e74d2688b1cba44237e84e105480d270b1827d09f6ee6c9c6d01f3e9a45aa5 namespace=k8s.io Apr 30 03:31:17.653103 containerd[1459]: time="2025-04-30T03:31:17.652719220Z" level=warning msg="cleaning up after shim disconnected" id=b4e74d2688b1cba44237e84e105480d270b1827d09f6ee6c9c6d01f3e9a45aa5 namespace=k8s.io Apr 30 03:31:17.653103 containerd[1459]: time="2025-04-30T03:31:17.652874225Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:31:17.868490 containerd[1459]: time="2025-04-30T03:31:17.868294203Z" level=info msg="StopPodSandbox for \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\"" Apr 30 03:31:17.869014 containerd[1459]: time="2025-04-30T03:31:17.868764174Z" level=info msg="TearDown network for sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" successfully" Apr 30 03:31:17.869089 containerd[1459]: time="2025-04-30T03:31:17.869013993Z" level=info msg="StopPodSandbox for \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" returns successfully" Apr 30 03:31:17.870865 containerd[1459]: time="2025-04-30T03:31:17.870362820Z" level=info msg="RemovePodSandbox for \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\"" Apr 30 03:31:17.870865 containerd[1459]: time="2025-04-30T03:31:17.870426069Z" level=info msg="Forcibly stopping sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\"" Apr 30 03:31:17.870865 containerd[1459]: time="2025-04-30T03:31:17.870530163Z" level=info msg="TearDown network for 
sandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" successfully" Apr 30 03:31:17.875261 containerd[1459]: time="2025-04-30T03:31:17.875196835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:31:17.875680 containerd[1459]: time="2025-04-30T03:31:17.875527274Z" level=info msg="RemovePodSandbox \"f8f62713acba01a4744f89af6dfe0771bac018fce721057d3d0bac2047e3772b\" returns successfully" Apr 30 03:31:17.877046 containerd[1459]: time="2025-04-30T03:31:17.876810761Z" level=info msg="StopPodSandbox for \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\"" Apr 30 03:31:17.877046 containerd[1459]: time="2025-04-30T03:31:17.876940513Z" level=info msg="TearDown network for sandbox \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\" successfully" Apr 30 03:31:17.877046 containerd[1459]: time="2025-04-30T03:31:17.876959056Z" level=info msg="StopPodSandbox for \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\" returns successfully" Apr 30 03:31:17.877837 containerd[1459]: time="2025-04-30T03:31:17.877583705Z" level=info msg="RemovePodSandbox for \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\"" Apr 30 03:31:17.877837 containerd[1459]: time="2025-04-30T03:31:17.877621125Z" level=info msg="Forcibly stopping sandbox \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\"" Apr 30 03:31:17.877837 containerd[1459]: time="2025-04-30T03:31:17.877700960Z" level=info msg="TearDown network for sandbox \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\" successfully" Apr 30 03:31:17.905582 containerd[1459]: time="2025-04-30T03:31:17.905055219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:31:17.905582 containerd[1459]: time="2025-04-30T03:31:17.905193525Z" level=info msg="RemovePodSandbox \"a7eadf7620f7f757e4f0980cbeef1a56e92da00f5cb023d3def1c3dd3fa811dd\" returns successfully" Apr 30 03:31:17.977244 kubelet[2565]: E0430 03:31:17.977021 2565 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 03:31:18.388434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4e74d2688b1cba44237e84e105480d270b1827d09f6ee6c9c6d01f3e9a45aa5-rootfs.mount: Deactivated successfully. Apr 30 03:31:18.463361 kubelet[2565]: E0430 03:31:18.461227 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:18.470200 containerd[1459]: time="2025-04-30T03:31:18.470127021Z" level=info msg="CreateContainer within sandbox \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 03:31:18.498665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586923589.mount: Deactivated successfully. 
Apr 30 03:31:18.501555 containerd[1459]: time="2025-04-30T03:31:18.500920984Z" level=info msg="CreateContainer within sandbox \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"06a1f0d3639e841fafc756947b64a7ef0a57f86abb1211d1c4b6ed330e73dfc5\"" Apr 30 03:31:18.504235 containerd[1459]: time="2025-04-30T03:31:18.503200959Z" level=info msg="StartContainer for \"06a1f0d3639e841fafc756947b64a7ef0a57f86abb1211d1c4b6ed330e73dfc5\"" Apr 30 03:31:18.565188 systemd[1]: Started cri-containerd-06a1f0d3639e841fafc756947b64a7ef0a57f86abb1211d1c4b6ed330e73dfc5.scope - libcontainer container 06a1f0d3639e841fafc756947b64a7ef0a57f86abb1211d1c4b6ed330e73dfc5. Apr 30 03:31:18.610841 systemd[1]: cri-containerd-06a1f0d3639e841fafc756947b64a7ef0a57f86abb1211d1c4b6ed330e73dfc5.scope: Deactivated successfully. Apr 30 03:31:18.618141 containerd[1459]: time="2025-04-30T03:31:18.616549534Z" level=info msg="StartContainer for \"06a1f0d3639e841fafc756947b64a7ef0a57f86abb1211d1c4b6ed330e73dfc5\" returns successfully" Apr 30 03:31:18.658513 containerd[1459]: time="2025-04-30T03:31:18.657885328Z" level=info msg="shim disconnected" id=06a1f0d3639e841fafc756947b64a7ef0a57f86abb1211d1c4b6ed330e73dfc5 namespace=k8s.io Apr 30 03:31:18.659413 containerd[1459]: time="2025-04-30T03:31:18.659017270Z" level=warning msg="cleaning up after shim disconnected" id=06a1f0d3639e841fafc756947b64a7ef0a57f86abb1211d1c4b6ed330e73dfc5 namespace=k8s.io Apr 30 03:31:18.659413 containerd[1459]: time="2025-04-30T03:31:18.659073948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:31:18.854768 kubelet[2565]: E0430 03:31:18.853911 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-99zjh" 
podUID="c0081508-c8e3-490a-b57b-b113cee0b31d" Apr 30 03:31:19.388686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06a1f0d3639e841fafc756947b64a7ef0a57f86abb1211d1c4b6ed330e73dfc5-rootfs.mount: Deactivated successfully. Apr 30 03:31:19.472579 kubelet[2565]: E0430 03:31:19.469806 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:19.478530 containerd[1459]: time="2025-04-30T03:31:19.478276634Z" level=info msg="CreateContainer within sandbox \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 03:31:19.504729 containerd[1459]: time="2025-04-30T03:31:19.504644351Z" level=info msg="CreateContainer within sandbox \"b2cd0299dcabf0c2f40365306ea5e9bdc6e96ce040c0508db80257e197ca06fe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f97f590fc7f352c9d5a38210620adcbb42d28d02dfadc40b2bc3a9c13d364b90\"" Apr 30 03:31:19.507476 containerd[1459]: time="2025-04-30T03:31:19.507188129Z" level=info msg="StartContainer for \"f97f590fc7f352c9d5a38210620adcbb42d28d02dfadc40b2bc3a9c13d364b90\"" Apr 30 03:31:19.565133 systemd[1]: Started cri-containerd-f97f590fc7f352c9d5a38210620adcbb42d28d02dfadc40b2bc3a9c13d364b90.scope - libcontainer container f97f590fc7f352c9d5a38210620adcbb42d28d02dfadc40b2bc3a9c13d364b90. 
Apr 30 03:31:19.614759 containerd[1459]: time="2025-04-30T03:31:19.614684202Z" level=info msg="StartContainer for \"f97f590fc7f352c9d5a38210620adcbb42d28d02dfadc40b2bc3a9c13d364b90\" returns successfully" Apr 30 03:31:20.258893 kubelet[2565]: I0430 03:31:20.255819 2565 setters.go:580] "Node became not ready" node="ci-4081.3.3-a-32b52f0300" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T03:31:20Z","lastTransitionTime":"2025-04-30T03:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 03:31:20.468628 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 30 03:31:20.481495 kubelet[2565]: E0430 03:31:20.479158 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:20.507094 kubelet[2565]: I0430 03:31:20.505604 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m6mmq" podStartSLOduration=5.505571527 podStartE2EDuration="5.505571527s" podCreationTimestamp="2025-04-30 03:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:20.50547025 +0000 UTC m=+122.832247478" watchObservedRunningTime="2025-04-30 03:31:20.505571527 +0000 UTC m=+122.832348748" Apr 30 03:31:20.854424 kubelet[2565]: E0430 03:31:20.854288 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-99zjh" podUID="c0081508-c8e3-490a-b57b-b113cee0b31d" Apr 30 03:31:21.533261 kubelet[2565]: E0430 
03:31:21.533195 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:21.972528 systemd[1]: run-containerd-runc-k8s.io-f97f590fc7f352c9d5a38210620adcbb42d28d02dfadc40b2bc3a9c13d364b90-runc.buc3uY.mount: Deactivated successfully. Apr 30 03:31:22.856726 kubelet[2565]: E0430 03:31:22.856168 2565 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-99zjh" podUID="c0081508-c8e3-490a-b57b-b113cee0b31d" Apr 30 03:31:24.079772 systemd-networkd[1368]: lxc_health: Link UP Apr 30 03:31:24.093920 systemd-networkd[1368]: lxc_health: Gained carrier Apr 30 03:31:24.251891 systemd[1]: run-containerd-runc-k8s.io-f97f590fc7f352c9d5a38210620adcbb42d28d02dfadc40b2bc3a9c13d364b90-runc.Z0Rk50.mount: Deactivated successfully. 
Apr 30 03:31:24.364170 kubelet[2565]: E0430 03:31:24.363837 2565 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38200->127.0.0.1:33201: write tcp 127.0.0.1:38200->127.0.0.1:33201: write: connection reset by peer Apr 30 03:31:24.855938 kubelet[2565]: E0430 03:31:24.854985 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:25.535797 kubelet[2565]: E0430 03:31:25.534154 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:25.780006 systemd-networkd[1368]: lxc_health: Gained IPv6LL Apr 30 03:31:26.495200 kubelet[2565]: E0430 03:31:26.495164 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:27.498457 kubelet[2565]: E0430 03:31:27.498320 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 03:31:31.323895 sshd[4388]: pam_unix(sshd:session): session closed for user core Apr 30 03:31:31.329164 systemd[1]: sshd@29-164.92.87.160:22-139.178.89.65:56796.service: Deactivated successfully. Apr 30 03:31:31.333228 systemd[1]: session-30.scope: Deactivated successfully. Apr 30 03:31:31.335950 systemd-logind[1444]: Session 30 logged out. Waiting for processes to exit. Apr 30 03:31:31.337694 systemd-logind[1444]: Removed session 30.