Apr 30 00:15:55.951029 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:31:30 -00 2025
Apr 30 00:15:55.951061 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:15:55.951075 kernel: BIOS-provided physical RAM map:
Apr 30 00:15:55.951082 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 00:15:55.951088 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 00:15:55.951094 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 00:15:55.951102 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Apr 30 00:15:55.951109 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Apr 30 00:15:55.951116 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 00:15:55.955178 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 00:15:55.955210 kernel: NX (Execute Disable) protection: active
Apr 30 00:15:55.955222 kernel: APIC: Static calls initialized
Apr 30 00:15:55.955241 kernel: SMBIOS 2.8 present.
Apr 30 00:15:55.955253 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Apr 30 00:15:55.955266 kernel: Hypervisor detected: KVM
Apr 30 00:15:55.955293 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 00:15:55.955309 kernel: kvm-clock: using sched offset of 3093689680 cycles
Apr 30 00:15:55.955322 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 00:15:55.955335 kernel: tsc: Detected 2494.136 MHz processor
Apr 30 00:15:55.955346 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 00:15:55.955366 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 00:15:55.955377 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Apr 30 00:15:55.955388 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 00:15:55.955400 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 00:15:55.955418 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:15:55.955428 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Apr 30 00:15:55.955440 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:15:55.955451 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:15:55.955463 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:15:55.955474 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 30 00:15:55.955486 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:15:55.955498 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:15:55.955511 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:15:55.955524 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:15:55.955533 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Apr 30 00:15:55.955545 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Apr 30 00:15:55.955558 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 30 00:15:55.955570 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Apr 30 00:15:55.955578 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Apr 30 00:15:55.955586 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Apr 30 00:15:55.955601 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Apr 30 00:15:55.955609 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 00:15:55.955617 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 00:15:55.955626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 30 00:15:55.955634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Apr 30 00:15:55.955647 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Apr 30 00:15:55.955656 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Apr 30 00:15:55.955667 kernel: Zone ranges:
Apr 30 00:15:55.955676 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 00:15:55.955684 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Apr 30 00:15:55.955692 kernel: Normal empty
Apr 30 00:15:55.955701 kernel: Movable zone start for each node
Apr 30 00:15:55.955710 kernel: Early memory node ranges
Apr 30 00:15:55.955725 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 00:15:55.955737 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Apr 30 00:15:55.955748 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Apr 30 00:15:55.955765 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 00:15:55.955777 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 00:15:55.955793 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Apr 30 00:15:55.955805 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 00:15:55.955817 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 00:15:55.955829 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 00:15:55.955842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 00:15:55.955854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 00:15:55.955868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 00:15:55.955881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 00:15:55.955894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 00:15:55.955906 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 00:15:55.955917 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 00:15:55.955925 kernel: TSC deadline timer available
Apr 30 00:15:55.955934 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 00:15:55.955942 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 00:15:55.955950 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Apr 30 00:15:55.955962 kernel: Booting paravirtualized kernel on KVM
Apr 30 00:15:55.955971 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 00:15:55.955983 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 00:15:55.955992 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 00:15:55.956000 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 00:15:55.956008 kernel: pcpu-alloc: [0] 0 1
Apr 30 00:15:55.956017 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 00:15:55.956026 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:15:55.956035 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:15:55.956043 kernel: random: crng init done
Apr 30 00:15:55.956055 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:15:55.956063 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 00:15:55.956072 kernel: Fallback order for Node 0: 0
Apr 30 00:15:55.956080 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Apr 30 00:15:55.956094 kernel: Policy zone: DMA32
Apr 30 00:15:55.956108 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:15:55.956120 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42992K init, 2200K bss, 125152K reserved, 0K cma-reserved)
Apr 30 00:15:55.956204 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:15:55.956218 kernel: Kernel/User page tables isolation: enabled
Apr 30 00:15:55.956226 kernel: ftrace: allocating 37946 entries in 149 pages
Apr 30 00:15:55.956235 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 00:15:55.956243 kernel: Dynamic Preempt: voluntary
Apr 30 00:15:55.956251 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:15:55.956267 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:15:55.956276 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:15:55.956284 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:15:55.956293 kernel: Rude variant of Tasks RCU enabled.
Apr 30 00:15:55.956301 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:15:55.956313 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:15:55.956321 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:15:55.956330 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 00:15:55.956338 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:15:55.956350 kernel: Console: colour VGA+ 80x25
Apr 30 00:15:55.956359 kernel: printk: console [tty0] enabled
Apr 30 00:15:55.956367 kernel: printk: console [ttyS0] enabled
Apr 30 00:15:55.956376 kernel: ACPI: Core revision 20230628
Apr 30 00:15:55.956385 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 00:15:55.956396 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 00:15:55.956404 kernel: x2apic enabled
Apr 30 00:15:55.956413 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 00:15:55.956421 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 00:15:55.956429 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Apr 30 00:15:55.956437 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136)
Apr 30 00:15:55.956446 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 30 00:15:55.956454 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 30 00:15:55.956474 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 00:15:55.956483 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 00:15:55.956492 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 00:15:55.956510 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 00:15:55.956523 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 30 00:15:55.956538 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 00:15:55.956551 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 00:15:55.956565 kernel: MDS: Mitigation: Clear CPU buffers
Apr 30 00:15:55.956580 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 00:15:55.956596 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 00:15:55.956605 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 00:15:55.956614 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 00:15:55.956623 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 00:15:55.956632 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 30 00:15:55.956641 kernel: Freeing SMP alternatives memory: 32K
Apr 30 00:15:55.956650 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:15:55.956659 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:15:55.956675 kernel: landlock: Up and running.
Apr 30 00:15:55.956689 kernel: SELinux: Initializing.
Apr 30 00:15:55.956702 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 00:15:55.956716 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 00:15:55.956731 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Apr 30 00:15:55.956743 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:15:55.956752 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:15:55.956761 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:15:55.956769 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Apr 30 00:15:55.956782 kernel: signal: max sigframe size: 1776
Apr 30 00:15:55.956791 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:15:55.956801 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:15:55.956810 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 00:15:55.956819 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:15:55.956827 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 00:15:55.956836 kernel: .... node #0, CPUs: #1
Apr 30 00:15:55.956845 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:15:55.956857 kernel: smpboot: Max logical packages: 1
Apr 30 00:15:55.956870 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS)
Apr 30 00:15:55.956878 kernel: devtmpfs: initialized
Apr 30 00:15:55.956887 kernel: x86/mm: Memory block size: 128MB
Apr 30 00:15:55.956896 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:15:55.956905 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:15:55.956915 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:15:55.956924 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:15:55.956933 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:15:55.956948 kernel: audit: type=2000 audit(1745972154.518:1): state=initialized audit_enabled=0 res=1
Apr 30 00:15:55.956965 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:15:55.956977 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 00:15:55.956990 kernel: cpuidle: using governor menu
Apr 30 00:15:55.957002 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:15:55.957015 kernel: dca service started, version 1.12.1
Apr 30 00:15:55.957028 kernel: PCI: Using configuration type 1 for base access
Apr 30 00:15:55.957037 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 00:15:55.957046 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:15:55.957055 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:15:55.957067 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:15:55.957076 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:15:55.957085 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:15:55.957094 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:15:55.957102 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:15:55.957111 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 00:15:55.957120 kernel: ACPI: Interpreter enabled
Apr 30 00:15:55.959201 kernel: ACPI: PM: (supports S0 S5)
Apr 30 00:15:55.959222 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 00:15:55.959241 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 00:15:55.959250 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 00:15:55.959259 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 00:15:55.959268 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:15:55.959547 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:15:55.959701 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 00:15:55.959851 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 00:15:55.959879 kernel: acpiphp: Slot [3] registered
Apr 30 00:15:55.959889 kernel: acpiphp: Slot [4] registered
Apr 30 00:15:55.959898 kernel: acpiphp: Slot [5] registered
Apr 30 00:15:55.959907 kernel: acpiphp: Slot [6] registered
Apr 30 00:15:55.959916 kernel: acpiphp: Slot [7] registered
Apr 30 00:15:55.959924 kernel: acpiphp: Slot [8] registered
Apr 30 00:15:55.959933 kernel: acpiphp: Slot [9] registered
Apr 30 00:15:55.959942 kernel: acpiphp: Slot [10] registered
Apr 30 00:15:55.959951 kernel: acpiphp: Slot [11] registered
Apr 30 00:15:55.959960 kernel: acpiphp: Slot [12] registered
Apr 30 00:15:55.959971 kernel: acpiphp: Slot [13] registered
Apr 30 00:15:55.959980 kernel: acpiphp: Slot [14] registered
Apr 30 00:15:55.959989 kernel: acpiphp: Slot [15] registered
Apr 30 00:15:55.959997 kernel: acpiphp: Slot [16] registered
Apr 30 00:15:55.960006 kernel: acpiphp: Slot [17] registered
Apr 30 00:15:55.960015 kernel: acpiphp: Slot [18] registered
Apr 30 00:15:55.960023 kernel: acpiphp: Slot [19] registered
Apr 30 00:15:55.960032 kernel: acpiphp: Slot [20] registered
Apr 30 00:15:55.960041 kernel: acpiphp: Slot [21] registered
Apr 30 00:15:55.960052 kernel: acpiphp: Slot [22] registered
Apr 30 00:15:55.960060 kernel: acpiphp: Slot [23] registered
Apr 30 00:15:55.960069 kernel: acpiphp: Slot [24] registered
Apr 30 00:15:55.960078 kernel: acpiphp: Slot [25] registered
Apr 30 00:15:55.960086 kernel: acpiphp: Slot [26] registered
Apr 30 00:15:55.960095 kernel: acpiphp: Slot [27] registered
Apr 30 00:15:55.960104 kernel: acpiphp: Slot [28] registered
Apr 30 00:15:55.960113 kernel: acpiphp: Slot [29] registered
Apr 30 00:15:55.960121 kernel: acpiphp: Slot [30] registered
Apr 30 00:15:55.960366 kernel: acpiphp: Slot [31] registered
Apr 30 00:15:55.960388 kernel: PCI host bridge to bus 0000:00
Apr 30 00:15:55.960584 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 00:15:55.960694 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 00:15:55.960801 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 00:15:55.960915 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 00:15:55.961025 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Apr 30 00:15:55.963938 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:15:55.964218 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 00:15:55.964382 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 00:15:55.964510 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Apr 30 00:15:55.964646 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Apr 30 00:15:55.964770 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Apr 30 00:15:55.964936 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Apr 30 00:15:55.965062 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Apr 30 00:15:55.966228 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Apr 30 00:15:55.966447 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Apr 30 00:15:55.966557 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Apr 30 00:15:55.966661 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 30 00:15:55.966771 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Apr 30 00:15:55.966980 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Apr 30 00:15:55.967122 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Apr 30 00:15:55.969398 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Apr 30 00:15:55.969500 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Apr 30 00:15:55.969596 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Apr 30 00:15:55.969690 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Apr 30 00:15:55.969809 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 00:15:55.969951 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 30 00:15:55.970046 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Apr 30 00:15:55.970163 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Apr 30 00:15:55.970256 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Apr 30 00:15:55.970356 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 00:15:55.970458 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Apr 30 00:15:55.970552 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Apr 30 00:15:55.970651 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Apr 30 00:15:55.970769 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Apr 30 00:15:55.970863 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Apr 30 00:15:55.970956 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Apr 30 00:15:55.971049 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Apr 30 00:15:55.973263 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Apr 30 00:15:55.973462 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Apr 30 00:15:55.973595 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Apr 30 00:15:55.973692 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Apr 30 00:15:55.973818 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Apr 30 00:15:55.973922 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Apr 30 00:15:55.974015 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Apr 30 00:15:55.974108 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Apr 30 00:15:55.975713 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Apr 30 00:15:55.975836 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Apr 30 00:15:55.975931 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Apr 30 00:15:55.975943 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 00:15:55.975952 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 00:15:55.975962 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 00:15:55.975970 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 00:15:55.975979 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 00:15:55.975991 kernel: iommu: Default domain type: Translated
Apr 30 00:15:55.976000 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 00:15:55.976009 kernel: PCI: Using ACPI for IRQ routing
Apr 30 00:15:55.976018 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 00:15:55.976027 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 00:15:55.976035 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Apr 30 00:15:55.976154 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Apr 30 00:15:55.976250 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Apr 30 00:15:55.976349 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 00:15:55.976361 kernel: vgaarb: loaded
Apr 30 00:15:55.976371 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 00:15:55.976379 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 00:15:55.976389 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 00:15:55.976397 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:15:55.976407 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:15:55.976416 kernel: pnp: PnP ACPI init
Apr 30 00:15:55.976424 kernel: pnp: PnP ACPI: found 4 devices
Apr 30 00:15:55.976436 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 00:15:55.976445 kernel: NET: Registered PF_INET protocol family
Apr 30 00:15:55.976455 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:15:55.976463 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 00:15:55.976472 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:15:55.976481 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 00:15:55.976490 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 00:15:55.976499 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 00:15:55.976516 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 00:15:55.976538 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 00:15:55.976554 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:15:55.976563 kernel: NET: Registered PF_XDP protocol family
Apr 30 00:15:55.976663 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 00:15:55.976750 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 00:15:55.976840 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 00:15:55.976929 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 00:15:55.977019 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Apr 30 00:15:55.979190 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Apr 30 00:15:55.979343 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 00:15:55.979358 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 30 00:15:55.979454 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 28762 usecs
Apr 30 00:15:55.979467 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:15:55.979476 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 00:15:55.979485 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Apr 30 00:15:55.979494 kernel: Initialise system trusted keyrings
Apr 30 00:15:55.979512 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 00:15:55.979521 kernel: Key type asymmetric registered
Apr 30 00:15:55.979530 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:15:55.979543 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 00:15:55.979555 kernel: io scheduler mq-deadline registered
Apr 30 00:15:55.979568 kernel: io scheduler kyber registered
Apr 30 00:15:55.979580 kernel: io scheduler bfq registered
Apr 30 00:15:55.979594 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 00:15:55.979606 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Apr 30 00:15:55.979619 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 30 00:15:55.979637 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 30 00:15:55.979647 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:15:55.979657 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 00:15:55.979666 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 00:15:55.979675 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 00:15:55.979683 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 00:15:55.979693 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 00:15:55.979822 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 30 00:15:55.979917 kernel: rtc_cmos 00:03: registered as rtc0
Apr 30 00:15:55.980002 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T00:15:55 UTC (1745972155)
Apr 30 00:15:55.980087 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Apr 30 00:15:55.980098 kernel: intel_pstate: CPU model not supported
Apr 30 00:15:55.980108 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:15:55.980117 kernel: Segment Routing with IPv6
Apr 30 00:15:55.980153 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:15:55.980162 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:15:55.980175 kernel: Key type dns_resolver registered
Apr 30 00:15:55.980184 kernel: IPI shorthand broadcast: enabled
Apr 30 00:15:55.980193 kernel: sched_clock: Marking stable (839004707, 91689504)->(1032351386, -101657175)
Apr 30 00:15:55.980202 kernel: registered taskstats version 1
Apr 30 00:15:55.980210 kernel: Loading compiled-in X.509 certificates
Apr 30 00:15:55.980219 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: eb8928891d93dabd1aa89590482110d196038597'
Apr 30 00:15:55.980228 kernel: Key type .fscrypt registered
Apr 30 00:15:55.980237 kernel: Key type fscrypt-provisioning registered
Apr 30 00:15:55.980246 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:15:55.980258 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:15:55.980266 kernel: ima: No architecture policies found
Apr 30 00:15:55.980275 kernel: clk: Disabling unused clocks
Apr 30 00:15:55.980284 kernel: Freeing unused kernel image (initmem) memory: 42992K
Apr 30 00:15:55.980294 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 00:15:55.980341 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Apr 30 00:15:55.980353 kernel: Run /init as init process
Apr 30 00:15:55.980363 kernel: with arguments:
Apr 30 00:15:55.980372 kernel: /init
Apr 30 00:15:55.980384 kernel: with environment:
Apr 30 00:15:55.980393 kernel: HOME=/
Apr 30 00:15:55.980402 kernel: TERM=linux
Apr 30 00:15:55.980411 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:15:55.980424 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:15:55.980439 systemd[1]: Detected virtualization kvm.
Apr 30 00:15:55.980448 systemd[1]: Detected architecture x86-64.
Apr 30 00:15:55.980458 systemd[1]: Running in initrd.
Apr 30 00:15:55.980470 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:15:55.980479 systemd[1]: Hostname set to .
Apr 30 00:15:55.980489 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:15:55.980499 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:15:55.980508 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:15:55.980518 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:15:55.980528 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:15:55.980538 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:15:55.980551 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:15:55.980561 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:15:55.980572 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:15:55.980582 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:15:55.980592 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:15:55.980602 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:15:55.980611 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:15:55.980624 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:15:55.980634 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:15:55.980646 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:15:55.980656 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:15:55.980665 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:15:55.980678 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:15:55.980687 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:15:55.980697 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:15:55.980707 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:15:55.980717 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:15:55.980726 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:15:55.980736 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:15:55.980745 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:15:55.980755 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:15:55.980768 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:15:55.980777 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:15:55.980787 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:15:55.980796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:15:55.980806 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:15:55.980816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:15:55.980855 systemd-journald[183]: Collecting audit messages is disabled.
Apr 30 00:15:55.980882 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:15:55.980892 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:15:55.980907 systemd-journald[183]: Journal started
Apr 30 00:15:55.980928 systemd-journald[183]: Runtime Journal (/run/log/journal/fd3e2d636ed141f986bc940211e95344) is 4.9M, max 39.3M, 34.4M free.
Apr 30 00:15:55.986201 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:15:55.949559 systemd-modules-load[184]: Inserted module 'overlay'
Apr 30 00:15:56.015253 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:15:56.015292 kernel: Bridge firewalling registered
Apr 30 00:15:55.992207 systemd-modules-load[184]: Inserted module 'br_netfilter'
Apr 30 00:15:56.012987 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:15:56.013570 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:15:56.015012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:15:56.020495 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:15:56.023538 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:15:56.028380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:15:56.031330 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:15:56.049400 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:15:56.062023 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:15:56.063612 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:15:56.065090 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:15:56.070431 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:15:56.077460 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:15:56.093980 dracut-cmdline[217]: dracut-dracut-053
Apr 30 00:15:56.101149 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:15:56.119984 systemd-resolved[218]: Positive Trust Anchors:
Apr 30 00:15:56.120001 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:15:56.120047 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:15:56.123691 systemd-resolved[218]: Defaulting to hostname 'linux'.
Apr 30 00:15:56.125009 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:15:56.125934 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:15:56.212184 kernel: SCSI subsystem initialized
Apr 30 00:15:56.222163 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:15:56.234165 kernel: iscsi: registered transport (tcp)
Apr 30 00:15:56.259189 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:15:56.259293 kernel: QLogic iSCSI HBA Driver
Apr 30 00:15:56.318197 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:15:56.324433 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:15:56.355490 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:15:56.355567 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:15:56.356756 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:15:56.406221 kernel: raid6: avx2x4 gen() 17009 MB/s
Apr 30 00:15:56.423292 kernel: raid6: avx2x2 gen() 17009 MB/s
Apr 30 00:15:56.440364 kernel: raid6: avx2x1 gen() 12830 MB/s
Apr 30 00:15:56.440445 kernel: raid6: using algorithm avx2x4 gen() 17009 MB/s
Apr 30 00:15:56.458397 kernel: raid6: .... xor() 6969 MB/s, rmw enabled
Apr 30 00:15:56.458487 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 00:15:56.480163 kernel: xor: automatically using best checksumming function avx
Apr 30 00:15:56.661188 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:15:56.677410 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:15:56.684434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:15:56.707411 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Apr 30 00:15:56.714947 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:15:56.725019 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:15:56.747585 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Apr 30 00:15:56.793603 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:15:56.800492 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:15:56.892754 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:15:56.901502 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:15:56.940110 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:15:56.944586 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:15:56.945158 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:15:56.948101 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:15:56.955032 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:15:56.993026 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:15:57.017161 kernel: scsi host0: Virtio SCSI HBA
Apr 30 00:15:57.024154 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Apr 30 00:15:57.077112 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Apr 30 00:15:57.077302 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 00:15:57.077317 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:15:57.077353 kernel: GPT:9289727 != 125829119
Apr 30 00:15:57.077365 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:15:57.077377 kernel: GPT:9289727 != 125829119
Apr 30 00:15:57.077388 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:15:57.077411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:15:57.077423 kernel: ACPI: bus type USB registered
Apr 30 00:15:57.077435 kernel: usbcore: registered new interface driver usbfs
Apr 30 00:15:57.077446 kernel: usbcore: registered new interface driver hub
Apr 30 00:15:57.077457 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Apr 30 00:15:57.086034 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Apr 30 00:15:57.086248 kernel: usbcore: registered new device driver usb
Apr 30 00:15:57.105319 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 00:15:57.105402 kernel: AES CTR mode by8 optimization enabled
Apr 30 00:15:57.108170 kernel: libata version 3.00 loaded.
Apr 30 00:15:57.125570 kernel: ata_piix 0000:00:01.1: version 2.13
Apr 30 00:15:57.192390 kernel: BTRFS: device fsid 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (448)
Apr 30 00:15:57.192429 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (458)
Apr 30 00:15:57.192453 kernel: scsi host1: ata_piix
Apr 30 00:15:57.192729 kernel: scsi host2: ata_piix
Apr 30 00:15:57.192905 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Apr 30 00:15:57.192931 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Apr 30 00:15:57.192953 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Apr 30 00:15:57.209706 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Apr 30 00:15:57.210106 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Apr 30 00:15:57.211516 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Apr 30 00:15:57.211796 kernel: hub 1-0:1.0: USB hub found
Apr 30 00:15:57.212041 kernel: hub 1-0:1.0: 2 ports detected
Apr 30 00:15:57.166050 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 30 00:15:57.167506 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:15:57.167663 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:15:57.168332 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:15:57.170485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:15:57.170687 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:15:57.171200 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:15:57.182535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:15:57.192376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 30 00:15:57.195361 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 30 00:15:57.213532 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:15:57.253538 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 30 00:15:57.258257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:15:57.267254 disk-uuid[526]: Primary Header is updated.
Apr 30 00:15:57.267254 disk-uuid[526]: Secondary Entries is updated.
Apr 30 00:15:57.267254 disk-uuid[526]: Secondary Header is updated.
Apr 30 00:15:57.269480 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:15:57.275558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:15:57.282170 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:15:57.321470 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:15:58.299256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:15:58.299699 disk-uuid[530]: The operation has completed successfully.
Apr 30 00:15:58.349199 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:15:58.349345 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:15:58.376491 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:15:58.385091 sh[560]: Success
Apr 30 00:15:58.404185 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 00:15:58.500945 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:15:58.502243 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:15:58.509468 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:15:58.548217 kernel: BTRFS info (device dm-0): first mount of filesystem 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f
Apr 30 00:15:58.548375 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:15:58.548398 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:15:58.548922 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:15:58.551166 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:15:58.563313 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:15:58.565023 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:15:58.571513 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:15:58.574317 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:15:58.596779 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:15:58.596867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:15:58.596890 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:15:58.604244 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:15:58.619256 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:15:58.620149 kernel: BTRFS info (device vda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:15:58.630164 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:15:58.637106 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:15:58.751338 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:15:58.761493 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:15:58.788587 ignition[660]: Ignition 2.20.0
Apr 30 00:15:58.788608 ignition[660]: Stage: fetch-offline
Apr 30 00:15:58.788689 ignition[660]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:15:58.790935 systemd-networkd[749]: lo: Link UP
Apr 30 00:15:58.788707 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:15:58.790942 systemd-networkd[749]: lo: Gained carrier
Apr 30 00:15:58.788872 ignition[660]: parsed url from cmdline: ""
Apr 30 00:15:58.791036 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:15:58.788879 ignition[660]: no config URL provided
Apr 30 00:15:58.794011 systemd-networkd[749]: Enumeration completed
Apr 30 00:15:58.788890 ignition[660]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:15:58.794699 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Apr 30 00:15:58.788905 ignition[660]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:15:58.794704 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Apr 30 00:15:58.788912 ignition[660]: failed to fetch config: resource requires networking
Apr 30 00:15:58.794956 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:15:58.789173 ignition[660]: Ignition finished successfully
Apr 30 00:15:58.796090 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:15:58.796097 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:15:58.796385 systemd[1]: Reached target network.target - Network.
Apr 30 00:15:58.797384 systemd-networkd[749]: eth0: Link UP
Apr 30 00:15:58.797389 systemd-networkd[749]: eth0: Gained carrier
Apr 30 00:15:58.797401 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Apr 30 00:15:58.802831 systemd-networkd[749]: eth1: Link UP
Apr 30 00:15:58.802835 systemd-networkd[749]: eth1: Gained carrier
Apr 30 00:15:58.802850 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:15:58.806717 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:15:58.817724 systemd-networkd[749]: eth0: DHCPv4 address 134.199.212.184/20, gateway 134.199.208.1 acquired from 169.254.169.253
Apr 30 00:15:58.821364 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253
Apr 30 00:15:58.835115 ignition[754]: Ignition 2.20.0
Apr 30 00:15:58.835984 ignition[754]: Stage: fetch
Apr 30 00:15:58.836322 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:15:58.836341 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:15:58.836503 ignition[754]: parsed url from cmdline: ""
Apr 30 00:15:58.836509 ignition[754]: no config URL provided
Apr 30 00:15:58.836516 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:15:58.836529 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:15:58.836573 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Apr 30 00:15:58.853060 ignition[754]: GET result: OK
Apr 30 00:15:58.853235 ignition[754]: parsing config with SHA512: 00da1f86b035c30e9e098a7c97d07b96709a7d921587c34c248886b0f8b147f65f3843f7c72e7b32cd573f541f7ab6fae19686b0a12326e46015277be3f90fa2
Apr 30 00:15:58.859094 unknown[754]: fetched base config from "system"
Apr 30 00:15:58.859112 unknown[754]: fetched base config from "system"
Apr 30 00:15:58.859626 ignition[754]: fetch: fetch complete
Apr 30 00:15:58.859122 unknown[754]: fetched user config from "digitalocean"
Apr 30 00:15:58.859632 ignition[754]: fetch: fetch passed
Apr 30 00:15:58.861374 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:15:58.859689 ignition[754]: Ignition finished successfully
Apr 30 00:15:58.867465 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:15:58.889977 ignition[762]: Ignition 2.20.0
Apr 30 00:15:58.889989 ignition[762]: Stage: kargs
Apr 30 00:15:58.890238 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:15:58.890249 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:15:58.892489 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:15:58.891235 ignition[762]: kargs: kargs passed
Apr 30 00:15:58.891293 ignition[762]: Ignition finished successfully
Apr 30 00:15:58.903495 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:15:58.923713 ignition[769]: Ignition 2.20.0
Apr 30 00:15:58.923725 ignition[769]: Stage: disks
Apr 30 00:15:58.923983 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:15:58.923998 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:15:58.925325 ignition[769]: disks: disks passed
Apr 30 00:15:58.925392 ignition[769]: Ignition finished successfully
Apr 30 00:15:58.926598 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:15:58.931004 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:15:58.931526 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:15:58.932454 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:15:58.933200 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:15:58.933929 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:15:58.940410 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:15:58.963076 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 00:15:58.966563 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:15:58.972291 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:15:59.090230 kernel: EXT4-fs (vda9): mounted filesystem 21480c83-ef05-4682-ad3b-f751980943a0 r/w with ordered data mode. Quota mode: none.
Apr 30 00:15:59.091080 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:15:59.092183 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:15:59.099327 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:15:59.103322 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:15:59.106395 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Apr 30 00:15:59.114205 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (786)
Apr 30 00:15:59.114293 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:15:59.114313 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:15:59.115187 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:15:59.121533 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 00:15:59.124963 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:15:59.123977 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:15:59.124025 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:15:59.127932 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:15:59.131884 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:15:59.143397 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:15:59.203313 coreos-metadata[788]: Apr 30 00:15:59.203 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 00:15:59.208902 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:15:59.214835 coreos-metadata[802]: Apr 30 00:15:59.214 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 00:15:59.216202 coreos-metadata[788]: Apr 30 00:15:59.216 INFO Fetch successful
Apr 30 00:15:59.216685 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:15:59.223679 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:15:59.225793 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Apr 30 00:15:59.226529 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Apr 30 00:15:59.228040 coreos-metadata[802]: Apr 30 00:15:59.227 INFO Fetch successful
Apr 30 00:15:59.233081 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:15:59.234331 coreos-metadata[802]: Apr 30 00:15:59.233 INFO wrote hostname ci-4152.2.3-4-a907cca219 to /sysroot/etc/hostname
Apr 30 00:15:59.235117 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:15:59.335238 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:15:59.339277 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:15:59.341321 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:15:59.353196 kernel: BTRFS info (device vda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:15:59.381537 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:15:59.387985 ignition[908]: INFO : Ignition 2.20.0
Apr 30 00:15:59.387985 ignition[908]: INFO : Stage: mount
Apr 30 00:15:59.389026 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:15:59.389026 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:15:59.390260 ignition[908]: INFO : mount: mount passed
Apr 30 00:15:59.390260 ignition[908]: INFO : Ignition finished successfully
Apr 30 00:15:59.391532 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:15:59.397309 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:15:59.541366 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:15:59.547449 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:15:59.559187 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (919)
Apr 30 00:15:59.561947 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:15:59.562015 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:15:59.562028 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:15:59.566152 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:15:59.568780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:15:59.596474 ignition[935]: INFO : Ignition 2.20.0
Apr 30 00:15:59.596474 ignition[935]: INFO : Stage: files
Apr 30 00:15:59.597915 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:15:59.597915 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:15:59.597915 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:15:59.599909 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:15:59.599909 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:15:59.602998 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:15:59.603576 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:15:59.603576 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:15:59.603501 unknown[935]: wrote ssh authorized keys file for user: core
Apr 30 00:15:59.605386 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 00:15:59.605386 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Apr 30 00:15:59.654893 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 00:15:59.951316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 00:15:59.951316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:15:59.952788 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 00:16:00.375388 systemd-networkd[749]: eth0: Gained IPv6LL
Apr 30 00:16:00.631686 systemd-networkd[749]: eth1: Gained IPv6LL
Apr 30 00:16:00.653334 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:16:00.776584 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:16:00.777395 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:16:00.777395 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:16:00.777395 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:16:00.777395 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:16:00.777395 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:16:00.777395 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:16:00.777395 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:16:00.777395 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:16:00.782405 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:16:00.782405 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:16:00.782405 ignition[935]: INFO : files: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 00:16:00.782405 ignition[935]: INFO : files: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 00:16:00.782405 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 00:16:00.782405 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Apr 30 00:16:01.353182 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 00:16:02.451158 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 00:16:02.451158 ignition[935]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 00:16:02.452969 ignition[935]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:16:02.452969 ignition[935]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:16:02.452969 ignition[935]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 00:16:02.452969 ignition[935]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:16:02.452969 ignition[935]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:16:02.452969 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:16:02.452969 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:16:02.452969 ignition[935]: INFO : files: files passed
Apr 30 00:16:02.459644 ignition[935]: INFO : Ignition finished successfully
Apr 30 00:16:02.454938 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:16:02.464446 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:16:02.467455 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:16:02.469746 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:16:02.469869 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:16:02.496970 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:16:02.496970 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:16:02.500576 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:16:02.503326 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:16:02.504324 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:16:02.509488 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:16:02.556742 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:16:02.556872 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:16:02.557872 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:16:02.558511 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:16:02.559267 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:16:02.564436 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:16:02.583093 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:16:02.588448 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:16:02.608221 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:16:02.609413 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:16:02.610484 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 00:16:02.610866 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:16:02.611014 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:16:02.612169 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:16:02.612655 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:16:02.613437 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 00:16:02.614258 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:16:02.614860 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:16:02.615619 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:16:02.616275 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:16:02.617099 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 00:16:02.617864 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:16:02.618698 systemd[1]: Stopped target swap.target - Swaps. 
Apr 30 00:16:02.619368 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:16:02.619499 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:16:02.620451 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:16:02.621262 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:16:02.621935 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 00:16:02.622113 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:16:02.622869 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:16:02.623054 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 00:16:02.624187 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:16:02.624356 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:16:02.625509 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:16:02.625689 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:16:02.626468 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 00:16:02.626598 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 00:16:02.633488 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 00:16:02.636406 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:16:02.636777 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:16:02.636935 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:16:02.637437 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:16:02.637563 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 30 00:16:02.643850 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:16:02.643965 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:16:02.661481 ignition[989]: INFO : Ignition 2.20.0 Apr 30 00:16:02.663326 ignition[989]: INFO : Stage: umount Apr 30 00:16:02.663326 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:16:02.663326 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Apr 30 00:16:02.665646 ignition[989]: INFO : umount: umount passed Apr 30 00:16:02.665646 ignition[989]: INFO : Ignition finished successfully Apr 30 00:16:02.671342 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:16:02.671896 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:16:02.671987 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:16:02.672708 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:16:02.672829 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:16:02.674981 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 00:16:02.675091 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:16:02.675967 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:16:02.676028 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:16:02.676710 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 00:16:02.676762 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 00:16:02.677367 systemd[1]: Stopped target network.target - Network. Apr 30 00:16:02.678176 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 00:16:02.678249 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:16:02.678937 systemd[1]: Stopped target paths.target - Path Units. 
Apr 30 00:16:02.679581 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:16:02.684297 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:16:02.685153 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:16:02.686221 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:16:02.686879 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:16:02.686932 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:16:02.687446 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:16:02.687480 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:16:02.688168 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:16:02.688227 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:16:02.688935 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:16:02.688987 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:16:02.689529 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:16:02.689578 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:16:02.690564 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:16:02.691279 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:16:02.694245 systemd-networkd[749]: eth1: DHCPv6 lease lost Apr 30 00:16:02.698164 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:16:02.698325 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:16:02.700969 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:16:02.701034 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 30 00:16:02.702354 systemd-networkd[749]: eth0: DHCPv6 lease lost Apr 30 00:16:02.704255 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:16:02.704441 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:16:02.705806 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:16:02.705952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:16:02.710328 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:16:02.710708 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:16:02.710778 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:16:02.711269 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:16:02.711317 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:16:02.712061 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:16:02.712120 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:16:02.712817 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:16:02.726633 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:16:02.726810 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:16:02.742260 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:16:02.742370 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:16:02.743156 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 00:16:02.743198 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:16:02.743926 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:16:02.743976 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Apr 30 00:16:02.745031 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:16:02.745095 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:16:02.745812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:16:02.745919 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:16:02.749257 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:16:02.750216 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:16:02.750886 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:16:02.751486 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 00:16:02.751535 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:16:02.751882 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:16:02.751918 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:16:02.752366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:16:02.752422 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:16:02.757521 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:16:02.757642 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:16:02.765163 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:16:02.765318 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:16:02.766640 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:16:02.770366 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:16:02.783736 systemd[1]: Switching root. 
Apr 30 00:16:02.823796 systemd-journald[183]: Journal stopped Apr 30 00:16:04.022730 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Apr 30 00:16:04.022822 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 00:16:04.022839 kernel: SELinux: policy capability open_perms=1 Apr 30 00:16:04.022851 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 00:16:04.022876 kernel: SELinux: policy capability always_check_network=0 Apr 30 00:16:04.022894 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 00:16:04.022911 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 00:16:04.022928 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 00:16:04.022954 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 00:16:04.022971 kernel: audit: type=1403 audit(1745972163.014:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 00:16:04.022996 systemd[1]: Successfully loaded SELinux policy in 38.447ms. Apr 30 00:16:04.023048 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.707ms. Apr 30 00:16:04.023070 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:16:04.023095 systemd[1]: Detected virtualization kvm. Apr 30 00:16:04.023113 systemd[1]: Detected architecture x86-64. Apr 30 00:16:04.029173 systemd[1]: Detected first boot. Apr 30 00:16:04.029226 systemd[1]: Hostname set to . Apr 30 00:16:04.029239 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:16:04.029252 zram_generator::config[1033]: No configuration found. Apr 30 00:16:04.029267 systemd[1]: Populated /etc with preset unit settings. 
Apr 30 00:16:04.029280 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 00:16:04.029298 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 00:16:04.029310 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 00:16:04.029325 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 00:16:04.029338 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 00:16:04.029351 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 00:16:04.029363 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 00:16:04.029375 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 00:16:04.029388 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 00:16:04.029403 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 00:16:04.029415 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 00:16:04.029428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:16:04.029441 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:16:04.029453 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 00:16:04.029465 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 00:16:04.029479 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 00:16:04.029492 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:16:04.029504 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Apr 30 00:16:04.029519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:16:04.029532 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 00:16:04.029544 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 00:16:04.029557 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 00:16:04.029569 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 00:16:04.029581 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:16:04.029596 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:16:04.029609 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:16:04.029621 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:16:04.029634 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 00:16:04.029646 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 00:16:04.029659 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:16:04.029671 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:16:04.029683 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:16:04.029696 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 00:16:04.029708 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 00:16:04.029723 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 00:16:04.029737 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 00:16:04.029749 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 00:16:04.029762 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Apr 30 00:16:04.029775 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 00:16:04.029788 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 00:16:04.029801 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 00:16:04.029813 systemd[1]: Reached target machines.target - Containers. Apr 30 00:16:04.029842 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 00:16:04.029861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:16:04.029881 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:16:04.029894 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 00:16:04.029907 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:16:04.029919 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:16:04.029932 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:16:04.029944 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 00:16:04.029960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:16:04.029974 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:16:04.029987 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 00:16:04.030000 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 00:16:04.030012 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 00:16:04.030025 systemd[1]: Stopped systemd-fsck-usr.service. 
Apr 30 00:16:04.030038 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:16:04.030050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:16:04.030063 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 00:16:04.030078 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 00:16:04.030091 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:16:04.030103 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 00:16:04.030115 systemd[1]: Stopped verity-setup.service. Apr 30 00:16:04.030137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 00:16:04.030150 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 00:16:04.030204 systemd-journald[1106]: Collecting audit messages is disabled. Apr 30 00:16:04.030229 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 00:16:04.030246 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 00:16:04.030259 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 00:16:04.030272 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 00:16:04.030285 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 00:16:04.030300 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:16:04.030316 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 00:16:04.030329 systemd-journald[1106]: Journal started Apr 30 00:16:04.030359 systemd-journald[1106]: Runtime Journal (/run/log/journal/fd3e2d636ed141f986bc940211e95344) is 4.9M, max 39.3M, 34.4M free. Apr 30 00:16:03.763201 systemd[1]: Queued start job for default target multi-user.target. 
Apr 30 00:16:03.781249 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 00:16:03.781688 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 00:16:04.034696 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 00:16:04.034748 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:16:04.033934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:16:04.034506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:16:04.035691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:16:04.035819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:16:04.038030 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 00:16:04.039608 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 00:16:04.049916 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:16:04.056287 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 00:16:04.059158 kernel: loop: module loaded Apr 30 00:16:04.068296 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 00:16:04.068777 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:16:04.068822 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:16:04.072357 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 00:16:04.075157 kernel: fuse: init (API version 7.39) Apr 30 00:16:04.081334 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 00:16:04.085363 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Apr 30 00:16:04.086018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:16:04.095411 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 00:16:04.097297 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:16:04.098385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:16:04.104430 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 00:16:04.110579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:16:04.114609 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 00:16:04.118348 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:16:04.121870 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 00:16:04.123220 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 00:16:04.124065 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:16:04.124267 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:16:04.144795 kernel: ACPI: bus type drm_connector registered Apr 30 00:16:04.126692 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 00:16:04.129578 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:16:04.140289 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:16:04.142258 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:16:04.142686 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Apr 30 00:16:04.144408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:16:04.154346 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:16:04.165516 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 00:16:04.198197 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:16:04.199896 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:16:04.213501 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:16:04.226433 systemd-journald[1106]: Time spent on flushing to /var/log/journal/fd3e2d636ed141f986bc940211e95344 is 99.821ms for 994 entries. Apr 30 00:16:04.226433 systemd-journald[1106]: System Journal (/var/log/journal/fd3e2d636ed141f986bc940211e95344) is 8.0M, max 195.6M, 187.6M free. Apr 30 00:16:04.339899 systemd-journald[1106]: Received client request to flush runtime journal. Apr 30 00:16:04.339954 kernel: loop0: detected capacity change from 0 to 140992 Apr 30 00:16:04.339972 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:16:04.339987 kernel: loop1: detected capacity change from 0 to 218376 Apr 30 00:16:04.294659 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Apr 30 00:16:04.294674 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Apr 30 00:16:04.319473 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:16:04.327263 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:16:04.334204 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 00:16:04.337423 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:16:04.339214 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 30 00:16:04.346244 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:16:04.356393 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 00:16:04.362353 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 00:16:04.387558 kernel: loop2: detected capacity change from 0 to 138184 Apr 30 00:16:04.413014 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:16:04.422578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:16:04.426247 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 00:16:04.463521 kernel: loop3: detected capacity change from 0 to 8 Apr 30 00:16:04.496850 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Apr 30 00:16:04.497236 kernel: loop4: detected capacity change from 0 to 140992 Apr 30 00:16:04.497319 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Apr 30 00:16:04.516974 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:16:04.525075 kernel: loop5: detected capacity change from 0 to 218376 Apr 30 00:16:04.541413 kernel: loop6: detected capacity change from 0 to 138184 Apr 30 00:16:04.564330 kernel: loop7: detected capacity change from 0 to 8 Apr 30 00:16:04.566648 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Apr 30 00:16:04.567270 (sd-merge)[1180]: Merged extensions into '/usr'. Apr 30 00:16:04.577332 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:16:04.577353 systemd[1]: Reloading... Apr 30 00:16:04.754192 zram_generator::config[1210]: No configuration found. 
Apr 30 00:16:04.867349 ldconfig[1140]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:16:04.912644 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:16:04.966283 systemd[1]: Reloading finished in 388 ms. Apr 30 00:16:04.988776 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 00:16:04.992527 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:16:05.001519 systemd[1]: Starting ensure-sysext.service... Apr 30 00:16:05.005388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:16:05.023729 systemd[1]: Reloading requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)... Apr 30 00:16:05.023756 systemd[1]: Reloading... Apr 30 00:16:05.041079 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 00:16:05.041667 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:16:05.042831 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 00:16:05.043167 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Apr 30 00:16:05.043243 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Apr 30 00:16:05.047894 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:16:05.047909 systemd-tmpfiles[1251]: Skipping /boot Apr 30 00:16:05.064812 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 30 00:16:05.064841 systemd-tmpfiles[1251]: Skipping /boot
Apr 30 00:16:05.163191 zram_generator::config[1281]: No configuration found.
Apr 30 00:16:05.330447 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:16:05.393485 systemd[1]: Reloading finished in 369 ms.
Apr 30 00:16:05.410283 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:16:05.415707 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:16:05.433430 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:16:05.438522 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:16:05.441816 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:16:05.451442 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:16:05.455408 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:16:05.460389 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:16:05.468965 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:16:05.469208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:16:05.476487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:16:05.480541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:16:05.484546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:16:05.486223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:16:05.486389 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:16:05.489963 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:16:05.490347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:16:05.490680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:16:05.499731 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:16:05.501296 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:16:05.506435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:16:05.506680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:16:05.514543 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:16:05.515303 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:16:05.515514 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:16:05.519594 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:16:05.531186 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 00:16:05.553512 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:16:05.554576 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:16:05.554758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:16:05.573444 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:16:05.584299 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:16:05.585157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:16:05.587015 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:16:05.587855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:16:05.588053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:16:05.588808 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:16:05.589021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:16:05.592480 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:16:05.592556 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:16:05.606654 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Apr 30 00:16:05.624223 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:16:05.626139 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:16:05.629938 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:16:05.652039 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:16:05.662804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:16:05.671404 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:16:05.682834 augenrules[1373]: No rules
Apr 30 00:16:05.684569 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:16:05.684927 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:16:05.798879 systemd-resolved[1326]: Positive Trust Anchors:
Apr 30 00:16:05.798915 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:16:05.798975 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:16:05.818117 systemd-resolved[1326]: Using system hostname 'ci-4152.2.3-4-a907cca219'.
Apr 30 00:16:05.820172 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 00:16:05.822265 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:16:05.831436 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:16:05.832538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:16:05.855420 systemd-networkd[1369]: lo: Link UP
Apr 30 00:16:05.855434 systemd-networkd[1369]: lo: Gained carrier
Apr 30 00:16:05.859069 systemd-networkd[1369]: Enumeration completed
Apr 30 00:16:05.859738 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:16:05.860547 systemd[1]: Reached target network.target - Network.
Apr 30 00:16:05.871422 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:16:05.888298 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Apr 30 00:16:05.888690 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:16:05.888848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:16:05.897749 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:16:05.902422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:16:05.908441 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:16:05.909301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:16:05.909342 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:16:05.909359 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:16:05.922296 kernel: ISO 9660 Extensions: RRIP_1991A
Apr 30 00:16:05.927338 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Apr 30 00:16:05.932435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:16:05.933220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:16:05.940693 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:16:05.950934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:16:05.956185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:16:05.964246 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1375)
Apr 30 00:16:05.965572 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:16:05.966235 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:16:05.968065 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:16:05.981766 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 00:16:06.021171 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 30 00:16:06.028159 kernel: ACPI: button: Power Button [PWRF]
Apr 30 00:16:06.057165 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Apr 30 00:16:06.070038 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 30 00:16:06.130612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:16:06.149080 systemd-networkd[1369]: eth0: Configuring with /run/systemd/network/10-4a:0f:c9:11:a5:87.network.
Apr 30 00:16:06.153265 systemd-networkd[1369]: eth0: Link UP
Apr 30 00:16:06.153382 systemd-networkd[1369]: eth0: Gained carrier
Apr 30 00:16:06.159105 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Apr 30 00:16:06.166456 systemd-networkd[1369]: eth1: Configuring with /run/systemd/network/10-42:7c:b4:d8:06:35.network.
Apr 30 00:16:06.169253 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Apr 30 00:16:06.171302 systemd-networkd[1369]: eth1: Link UP
Apr 30 00:16:06.171396 systemd-networkd[1369]: eth1: Gained carrier
Apr 30 00:16:06.178142 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Apr 30 00:16:06.220949 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Apr 30 00:16:06.252180 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 00:16:06.255715 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:16:06.261619 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:16:06.301903 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:16:06.308538 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Apr 30 00:16:06.309025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:16:06.312331 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Apr 30 00:16:06.313438 kernel: Console: switching to colour dummy device 80x25
Apr 30 00:16:06.314276 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 30 00:16:06.314320 kernel: [drm] features: -context_init
Apr 30 00:16:06.316336 kernel: [drm] number of scanouts: 1
Apr 30 00:16:06.316424 kernel: [drm] number of cap sets: 0
Apr 30 00:16:06.321153 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Apr 30 00:16:06.332800 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 30 00:16:06.332877 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 00:16:06.335033 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:16:06.335213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:16:06.341862 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 30 00:16:06.340436 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:16:06.345512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:16:06.364150 kernel: EDAC MC: Ver: 3.0.0
Apr 30 00:16:06.369581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:16:06.369810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:16:06.380554 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:16:06.392667 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:16:06.404555 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:16:06.418849 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:16:06.429986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:16:06.459158 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:16:06.461590 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:16:06.461794 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:16:06.462404 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:16:06.464437 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:16:06.466025 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:16:06.466263 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:16:06.466345 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:16:06.466405 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:16:06.466435 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:16:06.466504 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:16:06.468065 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:16:06.469894 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:16:06.477827 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:16:06.480584 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:16:06.485536 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:16:06.488394 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:16:06.488887 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:16:06.489356 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:16:06.489384 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:16:06.493309 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:16:06.498709 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:16:06.497957 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 00:16:06.510436 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:16:06.518320 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:16:06.521404 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:16:06.522886 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:16:06.530378 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:16:06.537805 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:16:06.547293 jq[1447]: false
Apr 30 00:16:06.549280 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:16:06.560459 dbus-daemon[1446]: [system] SELinux support is enabled
Apr 30 00:16:06.561340 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:16:06.576890 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:16:06.580304 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:16:06.581762 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:16:06.591119 coreos-metadata[1445]: Apr 30 00:16:06.589 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 00:16:06.588482 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:16:06.604308 coreos-metadata[1445]: Apr 30 00:16:06.603 INFO Fetch successful
Apr 30 00:16:06.605279 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:16:06.608749 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:16:06.617339 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:16:06.624721 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:16:06.624904 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:16:06.626407 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:16:06.626626 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found loop4
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found loop5
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found loop6
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found loop7
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found vda
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found vda1
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found vda2
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found vda3
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found usr
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found vda4
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found vda6
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found vda7
Apr 30 00:16:06.638555 extend-filesystems[1448]: Found vda9
Apr 30 00:16:06.638555 extend-filesystems[1448]: Checking size of /dev/vda9
Apr 30 00:16:06.735055 update_engine[1456]: I20250430 00:16:06.635185 1456 main.cc:92] Flatcar Update Engine starting
Apr 30 00:16:06.735055 update_engine[1456]: I20250430 00:16:06.637056 1456 update_check_scheduler.cc:74] Next update check in 6m32s
Apr 30 00:16:06.651533 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:16:06.737595 jq[1457]: true
Apr 30 00:16:06.651604 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:16:06.655857 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:16:06.655979 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Apr 30 00:16:06.656023 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:16:06.669072 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:16:06.681391 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:16:06.745924 tar[1470]: linux-amd64/LICENSE
Apr 30 00:16:06.746761 tar[1470]: linux-amd64/helm
Apr 30 00:16:06.749913 extend-filesystems[1448]: Resized partition /dev/vda9
Apr 30 00:16:06.760343 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:16:06.768296 jq[1480]: true
Apr 30 00:16:06.768509 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:16:06.760580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:16:06.773595 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:16:06.781558 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Apr 30 00:16:06.808401 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 00:16:06.813574 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:16:06.875563 systemd-logind[1455]: New seat seat0.
Apr 30 00:16:06.884866 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1375)
Apr 30 00:16:06.893087 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 30 00:16:06.893143 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 00:16:06.893442 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:16:06.923154 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Apr 30 00:16:06.969163 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 30 00:16:06.969163 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 8
Apr 30 00:16:06.969163 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Apr 30 00:16:06.975708 extend-filesystems[1448]: Resized filesystem in /dev/vda9
Apr 30 00:16:06.975708 extend-filesystems[1448]: Found vdb
Apr 30 00:16:06.977506 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:16:06.978105 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:16:06.990285 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 00:16:06.995576 bash[1508]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:16:06.996353 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:16:07.009632 systemd[1]: Starting sshkeys.service...
Apr 30 00:16:07.028356 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:16:07.041043 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 00:16:07.052221 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 00:16:07.094586 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 00:16:07.109577 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 00:16:07.113269 coreos-metadata[1523]: Apr 30 00:16:07.113 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 00:16:07.120382 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 00:16:07.121720 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 00:16:07.130955 coreos-metadata[1523]: Apr 30 00:16:07.129 INFO Fetch successful
Apr 30 00:16:07.135976 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 00:16:07.151264 unknown[1523]: wrote ssh authorized keys file for user: core
Apr 30 00:16:07.189002 update-ssh-keys[1541]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:16:07.190402 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 00:16:07.195607 systemd[1]: Finished sshkeys.service.
Apr 30 00:16:07.199875 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 00:16:07.213279 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 00:16:07.220533 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 30 00:16:07.221210 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 00:16:07.288225 containerd[1481]: time="2025-04-30T00:16:07.288010750Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 00:16:07.320439 containerd[1481]: time="2025-04-30T00:16:07.320088167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:16:07.323392 containerd[1481]: time="2025-04-30T00:16:07.323325104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:16:07.324162 containerd[1481]: time="2025-04-30T00:16:07.323640056Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:16:07.324162 containerd[1481]: time="2025-04-30T00:16:07.323691540Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:16:07.324162 containerd[1481]: time="2025-04-30T00:16:07.323918315Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:16:07.324162 containerd[1481]: time="2025-04-30T00:16:07.323939017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:16:07.324162 containerd[1481]: time="2025-04-30T00:16:07.324020795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:16:07.324162 containerd[1481]: time="2025-04-30T00:16:07.324042459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:16:07.324666 containerd[1481]: time="2025-04-30T00:16:07.324631060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:16:07.325627 containerd[1481]: time="2025-04-30T00:16:07.324740390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:16:07.325627 containerd[1481]: time="2025-04-30T00:16:07.324759684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:16:07.325627 containerd[1481]: time="2025-04-30T00:16:07.324769245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:16:07.325627 containerd[1481]: time="2025-04-30T00:16:07.324871363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:16:07.325627 containerd[1481]: time="2025-04-30T00:16:07.325119454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:16:07.325627 containerd[1481]: time="2025-04-30T00:16:07.325351184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:16:07.325627 containerd[1481]: time="2025-04-30T00:16:07.325373991Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:16:07.325627 containerd[1481]: time="2025-04-30T00:16:07.325502927Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:16:07.325627 containerd[1481]: time="2025-04-30T00:16:07.325575099Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:16:07.329585 containerd[1481]: time="2025-04-30T00:16:07.329519337Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:16:07.329823 containerd[1481]: time="2025-04-30T00:16:07.329805784Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:16:07.330018 containerd[1481]: time="2025-04-30T00:16:07.330002756Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:16:07.330101 containerd[1481]: time="2025-04-30T00:16:07.330090308Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:16:07.330175 containerd[1481]: time="2025-04-30T00:16:07.330165188Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:16:07.330626 containerd[1481]: time="2025-04-30T00:16:07.330491330Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:16:07.331156 containerd[1481]: time="2025-04-30T00:16:07.330988557Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:16:07.331337 containerd[1481]: time="2025-04-30T00:16:07.331313164Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331406141Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331459971Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331486378Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331506887Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331525699Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331548216Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331576400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331599543Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331621118Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331640748Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331667243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331684807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331698947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.331962 containerd[1481]: time="2025-04-30T00:16:07.331713582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331726861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331740991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331754096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331767454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331780547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331795458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331809275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331822452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331834330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331849397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331877013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331891897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:16:07.332454 containerd[1481]: time="2025-04-30T00:16:07.331904119Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:16:07.334695 containerd[1481]: time="2025-04-30T00:16:07.332750671Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:16:07.334695 containerd[1481]: time="2025-04-30T00:16:07.332856584Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:16:07.334695 containerd[1481]: time="2025-04-30T00:16:07.332871617Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:16:07.334695 containerd[1481]: time="2025-04-30T00:16:07.332884306Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:16:07.334695 containerd[1481]: time="2025-04-30T00:16:07.332893554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:16:07.334695 containerd[1481]: time="2025-04-30T00:16:07.332907915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 00:16:07.334695 containerd[1481]: time="2025-04-30T00:16:07.332920406Z" level=info msg="NRI interface is disabled by configuration." Apr 30 00:16:07.334695 containerd[1481]: time="2025-04-30T00:16:07.332931070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 00:16:07.334920 containerd[1481]: time="2025-04-30T00:16:07.333285370Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:16:07.334920 containerd[1481]: time="2025-04-30T00:16:07.333349628Z" level=info msg="Connect containerd service" Apr 30 00:16:07.334920 containerd[1481]: time="2025-04-30T00:16:07.333390087Z" level=info msg="using legacy CRI server" Apr 30 00:16:07.334920 containerd[1481]: time="2025-04-30T00:16:07.333398233Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:16:07.334920 containerd[1481]: time="2025-04-30T00:16:07.333533023Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:16:07.334920 containerd[1481]: time="2025-04-30T00:16:07.334422201Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:16:07.335353 containerd[1481]: time="2025-04-30T00:16:07.335318788Z" level=info msg="Start subscribing containerd event" Apr 30 00:16:07.335431 containerd[1481]: time="2025-04-30T00:16:07.335420058Z" level=info msg="Start recovering state" Apr 30 00:16:07.335534 containerd[1481]: time="2025-04-30T00:16:07.335523375Z" level=info msg="Start event monitor" Apr 30 00:16:07.335586 containerd[1481]: time="2025-04-30T00:16:07.335577648Z" level=info msg="Start 
snapshots syncer" Apr 30 00:16:07.335628 containerd[1481]: time="2025-04-30T00:16:07.335619888Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:16:07.335666 containerd[1481]: time="2025-04-30T00:16:07.335658901Z" level=info msg="Start streaming server" Apr 30 00:16:07.336124 containerd[1481]: time="2025-04-30T00:16:07.336097072Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:16:07.336354 containerd[1481]: time="2025-04-30T00:16:07.336338957Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:16:07.336600 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:16:07.338232 containerd[1481]: time="2025-04-30T00:16:07.338205203Z" level=info msg="containerd successfully booted in 0.051794s" Apr 30 00:16:07.632172 tar[1470]: linux-amd64/README.md Apr 30 00:16:07.651969 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:16:08.055404 systemd-networkd[1369]: eth1: Gained IPv6LL Apr 30 00:16:08.056216 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Apr 30 00:16:08.058383 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:16:08.061040 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:16:08.076713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:16:08.081081 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:16:08.116644 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 00:16:08.183449 systemd-networkd[1369]: eth0: Gained IPv6LL Apr 30 00:16:08.184277 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Apr 30 00:16:09.028992 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Apr 30 00:16:09.038932 systemd[1]: Started sshd@0-134.199.212.184:22-147.75.109.163:49498.service - OpenSSH per-connection server daemon (147.75.109.163:49498). Apr 30 00:16:09.130394 sshd[1565]: Accepted publickey for core from 147.75.109.163 port 49498 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:16:09.132940 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:09.144093 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:16:09.153496 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:16:09.168179 systemd-logind[1455]: New session 1 of user core. Apr 30 00:16:09.187353 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:16:09.195497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:16:09.198116 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:16:09.202921 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:16:09.214657 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:16:09.221602 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:16:09.362208 systemd[1575]: Queued start job for default target default.target. Apr 30 00:16:09.370756 systemd[1575]: Created slice app.slice - User Application Slice. Apr 30 00:16:09.370806 systemd[1575]: Reached target paths.target - Paths. Apr 30 00:16:09.370828 systemd[1575]: Reached target timers.target - Timers. Apr 30 00:16:09.374895 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:16:09.396567 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:16:09.396748 systemd[1575]: Reached target sockets.target - Sockets. 
Apr 30 00:16:09.396766 systemd[1575]: Reached target basic.target - Basic System. Apr 30 00:16:09.396839 systemd[1575]: Reached target default.target - Main User Target. Apr 30 00:16:09.396883 systemd[1575]: Startup finished in 166ms. Apr 30 00:16:09.397020 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:16:09.404463 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:16:09.406174 systemd[1]: Startup finished in 984ms (kernel) + 7.307s (initrd) + 6.430s (userspace) = 14.722s. Apr 30 00:16:09.484891 systemd[1]: Started sshd@1-134.199.212.184:22-147.75.109.163:49500.service - OpenSSH per-connection server daemon (147.75.109.163:49500). Apr 30 00:16:09.540723 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 49500 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:16:09.543019 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:09.550639 systemd-logind[1455]: New session 2 of user core. Apr 30 00:16:09.556455 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 00:16:09.627224 sshd[1596]: Connection closed by 147.75.109.163 port 49500 Apr 30 00:16:09.627819 sshd-session[1594]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:09.638086 systemd[1]: sshd@1-134.199.212.184:22-147.75.109.163:49500.service: Deactivated successfully. Apr 30 00:16:09.641980 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:16:09.644074 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:16:09.651527 systemd[1]: Started sshd@2-134.199.212.184:22-147.75.109.163:49512.service - OpenSSH per-connection server daemon (147.75.109.163:49512). Apr 30 00:16:09.653514 systemd-logind[1455]: Removed session 2. 
Apr 30 00:16:09.700682 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 49512 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:16:09.702390 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:09.709841 systemd-logind[1455]: New session 3 of user core. Apr 30 00:16:09.715380 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:16:09.779645 sshd[1603]: Connection closed by 147.75.109.163 port 49512 Apr 30 00:16:09.779437 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:09.797535 systemd[1]: sshd@2-134.199.212.184:22-147.75.109.163:49512.service: Deactivated successfully. Apr 30 00:16:09.799671 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:16:09.801305 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:16:09.808955 systemd[1]: Started sshd@3-134.199.212.184:22-147.75.109.163:49518.service - OpenSSH per-connection server daemon (147.75.109.163:49518). Apr 30 00:16:09.812175 systemd-logind[1455]: Removed session 3. Apr 30 00:16:09.871272 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 49518 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:16:09.872818 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:09.882175 systemd-logind[1455]: New session 4 of user core. Apr 30 00:16:09.889496 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:16:09.969800 sshd[1611]: Connection closed by 147.75.109.163 port 49518 Apr 30 00:16:09.970411 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:09.986492 systemd[1]: sshd@3-134.199.212.184:22-147.75.109.163:49518.service: Deactivated successfully. Apr 30 00:16:09.990704 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 30 00:16:09.994817 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:16:09.996302 kubelet[1571]: E0430 00:16:09.996255 1571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:16:10.003687 systemd[1]: Started sshd@4-134.199.212.184:22-147.75.109.163:49534.service - OpenSSH per-connection server daemon (147.75.109.163:49534). Apr 30 00:16:10.004207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:16:10.004392 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:16:10.004712 systemd[1]: kubelet.service: Consumed 1.303s CPU time. Apr 30 00:16:10.008279 systemd-logind[1455]: Removed session 4. Apr 30 00:16:10.062144 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 49534 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:16:10.064623 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:10.072312 systemd-logind[1455]: New session 5 of user core. Apr 30 00:16:10.079446 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 30 00:16:10.153699 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:16:10.154727 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:16:10.169303 sudo[1621]: pam_unix(sudo:session): session closed for user root Apr 30 00:16:10.173851 sshd[1620]: Connection closed by 147.75.109.163 port 49534 Apr 30 00:16:10.173012 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:10.187274 systemd[1]: sshd@4-134.199.212.184:22-147.75.109.163:49534.service: Deactivated successfully. Apr 30 00:16:10.189598 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:16:10.191958 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:16:10.198561 systemd[1]: Started sshd@5-134.199.212.184:22-147.75.109.163:49538.service - OpenSSH per-connection server daemon (147.75.109.163:49538). Apr 30 00:16:10.200338 systemd-logind[1455]: Removed session 5. Apr 30 00:16:10.254162 sshd[1626]: Accepted publickey for core from 147.75.109.163 port 49538 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:16:10.255831 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:10.262491 systemd-logind[1455]: New session 6 of user core. Apr 30 00:16:10.268427 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 30 00:16:10.331170 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:16:10.331504 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:16:10.336714 sudo[1630]: pam_unix(sudo:session): session closed for user root Apr 30 00:16:10.345089 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 00:16:10.345534 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:16:10.360528 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 00:16:10.413534 augenrules[1652]: No rules Apr 30 00:16:10.415392 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:16:10.415804 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 00:16:10.417019 sudo[1629]: pam_unix(sudo:session): session closed for user root Apr 30 00:16:10.421224 sshd[1628]: Connection closed by 147.75.109.163 port 49538 Apr 30 00:16:10.422457 sshd-session[1626]: pam_unix(sshd:session): session closed for user core Apr 30 00:16:10.433464 systemd[1]: sshd@5-134.199.212.184:22-147.75.109.163:49538.service: Deactivated successfully. Apr 30 00:16:10.435860 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:16:10.438528 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Apr 30 00:16:10.443630 systemd[1]: Started sshd@6-134.199.212.184:22-147.75.109.163:49550.service - OpenSSH per-connection server daemon (147.75.109.163:49550). Apr 30 00:16:10.449781 systemd-logind[1455]: Removed session 6. 
Apr 30 00:16:10.515552 sshd[1660]: Accepted publickey for core from 147.75.109.163 port 49550 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:16:10.517227 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:16:10.522881 systemd-logind[1455]: New session 7 of user core. Apr 30 00:16:10.530445 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:16:10.590494 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:16:10.590788 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:16:11.040632 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:16:11.041171 (dockerd)[1682]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:16:11.449732 dockerd[1682]: time="2025-04-30T00:16:11.449318862Z" level=info msg="Starting up" Apr 30 00:16:11.646425 dockerd[1682]: time="2025-04-30T00:16:11.646345718Z" level=info msg="Loading containers: start." Apr 30 00:16:11.845178 kernel: Initializing XFRM netlink socket Apr 30 00:16:11.872553 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Apr 30 00:16:11.877096 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Apr 30 00:16:11.887077 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Apr 30 00:16:11.941326 systemd-networkd[1369]: docker0: Link UP Apr 30 00:16:11.941748 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Apr 30 00:16:11.977227 dockerd[1682]: time="2025-04-30T00:16:11.977167372Z" level=info msg="Loading containers: done." 
Apr 30 00:16:12.000252 dockerd[1682]: time="2025-04-30T00:16:11.999750482Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:16:12.000252 dockerd[1682]: time="2025-04-30T00:16:11.999907483Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Apr 30 00:16:12.000252 dockerd[1682]: time="2025-04-30T00:16:12.000041044Z" level=info msg="Daemon has completed initialization" Apr 30 00:16:12.034977 dockerd[1682]: time="2025-04-30T00:16:12.034536039Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:16:12.035273 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 00:16:12.927901 containerd[1481]: time="2025-04-30T00:16:12.927843914Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 00:16:13.530241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3800417094.mount: Deactivated successfully. 
Apr 30 00:16:15.339304 containerd[1481]: time="2025-04-30T00:16:15.339234908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:15.340497 containerd[1481]: time="2025-04-30T00:16:15.340405240Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" Apr 30 00:16:15.341265 containerd[1481]: time="2025-04-30T00:16:15.341205175Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:15.344637 containerd[1481]: time="2025-04-30T00:16:15.344293250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:15.346410 containerd[1481]: time="2025-04-30T00:16:15.346369897Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.418473688s" Apr 30 00:16:15.346410 containerd[1481]: time="2025-04-30T00:16:15.346408602Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" Apr 30 00:16:15.347197 containerd[1481]: time="2025-04-30T00:16:15.346965928Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 00:16:17.350864 containerd[1481]: time="2025-04-30T00:16:17.350785104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:17.352343 containerd[1481]: time="2025-04-30T00:16:17.352285266Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" Apr 30 00:16:17.353062 containerd[1481]: time="2025-04-30T00:16:17.352736715Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:17.356168 containerd[1481]: time="2025-04-30T00:16:17.355795135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:17.357041 containerd[1481]: time="2025-04-30T00:16:17.356913921Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.009913548s" Apr 30 00:16:17.357041 containerd[1481]: time="2025-04-30T00:16:17.356949550Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" Apr 30 00:16:17.358814 containerd[1481]: time="2025-04-30T00:16:17.358767331Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 00:16:19.063307 containerd[1481]: time="2025-04-30T00:16:19.061575881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:19.063307 containerd[1481]: time="2025-04-30T00:16:19.062931294Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" Apr 30 00:16:19.064256 containerd[1481]: time="2025-04-30T00:16:19.064192550Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:19.068424 containerd[1481]: time="2025-04-30T00:16:19.068372608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:19.070541 containerd[1481]: time="2025-04-30T00:16:19.070482993Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.711664164s" Apr 30 00:16:19.070757 containerd[1481]: time="2025-04-30T00:16:19.070731504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" Apr 30 00:16:19.071433 containerd[1481]: time="2025-04-30T00:16:19.071401718Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 00:16:19.073273 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Apr 30 00:16:20.254767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:16:20.263497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:16:20.429572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:16:20.432913 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:16:20.438578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068127565.mount: Deactivated successfully.
Apr 30 00:16:20.537419 kubelet[1948]: E0430 00:16:20.536997 1948 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:16:20.541798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:16:20.542080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:16:21.133284 containerd[1481]: time="2025-04-30T00:16:21.133213284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:21.134817 containerd[1481]: time="2025-04-30T00:16:21.134621022Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856"
Apr 30 00:16:21.135541 containerd[1481]: time="2025-04-30T00:16:21.135389854Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:21.138357 containerd[1481]: time="2025-04-30T00:16:21.138275621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:21.139693 containerd[1481]: time="2025-04-30T00:16:21.139512717Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.068072997s"
Apr 30 00:16:21.139693 containerd[1481]: time="2025-04-30T00:16:21.139564343Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
Apr 30 00:16:21.140493 containerd[1481]: time="2025-04-30T00:16:21.140465180Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Apr 30 00:16:21.645406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774443664.mount: Deactivated successfully.
Apr 30 00:16:22.135376 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Apr 30 00:16:22.545144 containerd[1481]: time="2025-04-30T00:16:22.543921782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:22.545802 containerd[1481]: time="2025-04-30T00:16:22.545751837Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Apr 30 00:16:22.546756 containerd[1481]: time="2025-04-30T00:16:22.546686180Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:22.550950 containerd[1481]: time="2025-04-30T00:16:22.550848254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:22.553082 containerd[1481]: time="2025-04-30T00:16:22.552853188Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.412347429s"
Apr 30 00:16:22.553082 containerd[1481]: time="2025-04-30T00:16:22.552908609Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Apr 30 00:16:22.553824 containerd[1481]: time="2025-04-30T00:16:22.553788066Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 30 00:16:22.985621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983247449.mount: Deactivated successfully.
Apr 30 00:16:22.989425 containerd[1481]: time="2025-04-30T00:16:22.989368360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:22.991088 containerd[1481]: time="2025-04-30T00:16:22.991014880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 30 00:16:22.991530 containerd[1481]: time="2025-04-30T00:16:22.991472491Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:22.994635 containerd[1481]: time="2025-04-30T00:16:22.994544921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:22.995862 containerd[1481]: time="2025-04-30T00:16:22.995655629Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 441.726214ms"
Apr 30 00:16:22.995862 containerd[1481]: time="2025-04-30T00:16:22.995690275Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 30 00:16:22.996777 containerd[1481]: time="2025-04-30T00:16:22.996721421Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Apr 30 00:16:23.582766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273965856.mount: Deactivated successfully.
Apr 30 00:16:26.067187 containerd[1481]: time="2025-04-30T00:16:26.066859956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:26.067187 containerd[1481]: time="2025-04-30T00:16:26.067108578Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Apr 30 00:16:26.068543 containerd[1481]: time="2025-04-30T00:16:26.068477283Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:26.073277 containerd[1481]: time="2025-04-30T00:16:26.073225498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:16:26.075262 containerd[1481]: time="2025-04-30T00:16:26.075003686Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.078239615s"
Apr 30 00:16:26.075262 containerd[1481]: time="2025-04-30T00:16:26.075060688Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Apr 30 00:16:28.757295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:16:28.766502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:16:28.810217 systemd[1]: Reloading requested from client PID 2099 ('systemctl') (unit session-7.scope)...
Apr 30 00:16:28.810234 systemd[1]: Reloading...
Apr 30 00:16:28.951174 zram_generator::config[2135]: No configuration found.
Apr 30 00:16:29.109214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:16:29.199703 systemd[1]: Reloading finished in 389 ms.
Apr 30 00:16:29.257355 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 00:16:29.257470 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 00:16:29.257821 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:16:29.263638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:16:29.388479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:16:29.400736 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:16:29.463524 kubelet[2192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:16:29.463524 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:16:29.463524 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:16:29.463524 kubelet[2192]: I0430 00:16:29.463276 2192 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:16:29.839361 kubelet[2192]: I0430 00:16:29.837772 2192 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Apr 30 00:16:29.839361 kubelet[2192]: I0430 00:16:29.838490 2192 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:16:29.839361 kubelet[2192]: I0430 00:16:29.839028 2192 server.go:954] "Client rotation is on, will bootstrap in background"
Apr 30 00:16:29.870986 kubelet[2192]: E0430 00:16:29.870920 2192 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://134.199.212.184:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 134.199.212.184:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:16:29.871219 kubelet[2192]: I0430 00:16:29.871016 2192 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:16:29.888794 kubelet[2192]: E0430 00:16:29.887289 2192 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 00:16:29.888794 kubelet[2192]: I0430 00:16:29.887372 2192 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 00:16:29.892868 kubelet[2192]: I0430 00:16:29.892827 2192 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:16:29.894409 kubelet[2192]: I0430 00:16:29.894309 2192 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:16:29.894690 kubelet[2192]: I0430 00:16:29.894400 2192 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.3-4-a907cca219","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 00:16:29.894690 kubelet[2192]: I0430 00:16:29.894666 2192 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:16:29.894690 kubelet[2192]: I0430 00:16:29.894678 2192 container_manager_linux.go:304] "Creating device plugin manager"
Apr 30 00:16:29.895065 kubelet[2192]: I0430 00:16:29.894837 2192 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:16:29.898457 kubelet[2192]: I0430 00:16:29.898391 2192 kubelet.go:446] "Attempting to sync node with API server"
Apr 30 00:16:29.898457 kubelet[2192]: I0430 00:16:29.898445 2192 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:16:29.899779 kubelet[2192]: I0430 00:16:29.898483 2192 kubelet.go:352] "Adding apiserver pod source"
Apr 30 00:16:29.899779 kubelet[2192]: I0430 00:16:29.898503 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:16:29.909304 kubelet[2192]: W0430 00:16:29.908825 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.212.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.212.184:6443: connect: connection refused
Apr 30 00:16:29.909304 kubelet[2192]: E0430 00:16:29.908901 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.212.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.212.184:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:16:29.909304 kubelet[2192]: W0430 00:16:29.909180 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.212.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-4-a907cca219&limit=500&resourceVersion=0": dial tcp 134.199.212.184:6443: connect: connection refused
Apr 30 00:16:29.909304 kubelet[2192]: E0430 00:16:29.909281 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.212.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-4-a907cca219&limit=500&resourceVersion=0\": dial tcp 134.199.212.184:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:16:29.909537 kubelet[2192]: I0430 00:16:29.909471 2192 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Apr 30 00:16:29.913909 kubelet[2192]: I0430 00:16:29.913857 2192 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:16:29.914179 kubelet[2192]: W0430 00:16:29.913976 2192 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:16:29.915703 kubelet[2192]: I0430 00:16:29.915106 2192 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 30 00:16:29.915703 kubelet[2192]: I0430 00:16:29.915182 2192 server.go:1287] "Started kubelet"
Apr 30 00:16:29.917875 kubelet[2192]: I0430 00:16:29.917736 2192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:16:29.918921 kubelet[2192]: I0430 00:16:29.918878 2192 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:16:29.922276 kubelet[2192]: I0430 00:16:29.921893 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:16:29.928496 kubelet[2192]: E0430 00:16:29.926803 2192 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.212.184:6443/api/v1/namespaces/default/events\": dial tcp 134.199.212.184:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.3-4-a907cca219.183af074c79a6a79 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.3-4-a907cca219,UID:ci-4152.2.3-4-a907cca219,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.3-4-a907cca219,},FirstTimestamp:2025-04-30 00:16:29.915146873 +0000 UTC m=+0.507916055,LastTimestamp:2025-04-30 00:16:29.915146873 +0000 UTC m=+0.507916055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.3-4-a907cca219,}"
Apr 30 00:16:29.932242 kubelet[2192]: I0430 00:16:29.932186 2192 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:16:29.935168 kubelet[2192]: I0430 00:16:29.933457 2192 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 30 00:16:29.935168 kubelet[2192]: E0430 00:16:29.933867 2192 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.3-4-a907cca219\" not found"
Apr 30 00:16:29.935168 kubelet[2192]: I0430 00:16:29.934709 2192 server.go:490] "Adding debug handlers to kubelet server"
Apr 30 00:16:29.936580 kubelet[2192]: I0430 00:16:29.936550 2192 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 00:16:29.941323 kubelet[2192]: I0430 00:16:29.939787 2192 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:16:29.941323 kubelet[2192]: I0430 00:16:29.939925 2192 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:16:29.941673 kubelet[2192]: E0430 00:16:29.941369 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.212.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-4-a907cca219?timeout=10s\": dial tcp 134.199.212.184:6443: connect: connection refused" interval="200ms"
Apr 30 00:16:29.942837 kubelet[2192]: I0430 00:16:29.942799 2192 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:16:29.945763 kubelet[2192]: I0430 00:16:29.945727 2192 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:16:29.946050 kubelet[2192]: I0430 00:16:29.946033 2192 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:16:29.967238 kubelet[2192]: I0430 00:16:29.967059 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:16:29.972788 kubelet[2192]: I0430 00:16:29.972739 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:16:29.973050 kubelet[2192]: I0430 00:16:29.973026 2192 status_manager.go:227] "Starting to sync pod status with apiserver"
Apr 30 00:16:29.973154 kubelet[2192]: I0430 00:16:29.973083 2192 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 00:16:29.973154 kubelet[2192]: I0430 00:16:29.973096 2192 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 00:16:29.973256 kubelet[2192]: E0430 00:16:29.973204 2192 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:16:29.973354 kubelet[2192]: W0430 00:16:29.967741 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.212.184:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.212.184:6443: connect: connection refused
Apr 30 00:16:29.973502 kubelet[2192]: E0430 00:16:29.973474 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://134.199.212.184:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.212.184:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:16:29.980459 kubelet[2192]: W0430 00:16:29.980370 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.212.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.212.184:6443: connect: connection refused
Apr 30 00:16:29.980611 kubelet[2192]: E0430 00:16:29.980470 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.212.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.212.184:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:16:29.981600 kubelet[2192]: I0430 00:16:29.981562 2192 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 30 00:16:29.981600 kubelet[2192]: I0430 00:16:29.981587 2192 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 30 00:16:29.981771 kubelet[2192]: I0430 00:16:29.981626 2192 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:16:29.984246 kubelet[2192]: I0430 00:16:29.984203 2192 policy_none.go:49] "None policy: Start"
Apr 30 00:16:29.984246 kubelet[2192]: I0430 00:16:29.984249 2192 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 00:16:29.984482 kubelet[2192]: I0430 00:16:29.984264 2192 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:16:29.993381 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 00:16:30.004936 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 30 00:16:30.008282 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 30 00:16:30.017668 kubelet[2192]: I0430 00:16:30.017430 2192 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:16:30.017813 kubelet[2192]: I0430 00:16:30.017686 2192 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 00:16:30.017813 kubelet[2192]: I0430 00:16:30.017700 2192 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:16:30.018236 kubelet[2192]: I0430 00:16:30.018210 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:16:30.020876 kubelet[2192]: E0430 00:16:30.020654 2192 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 30 00:16:30.021541 kubelet[2192]: E0430 00:16:30.020851 2192 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.3-4-a907cca219\" not found"
Apr 30 00:16:30.087666 systemd[1]: Created slice kubepods-burstable-pod7e65067a7411e04a0a009d4dae5df8bc.slice - libcontainer container kubepods-burstable-pod7e65067a7411e04a0a009d4dae5df8bc.slice.
Apr 30 00:16:30.102901 kubelet[2192]: E0430 00:16:30.102469 2192 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-4-a907cca219\" not found" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.106040 systemd[1]: Created slice kubepods-burstable-podf3f45c6527e0185e3d10593cfa18bc96.slice - libcontainer container kubepods-burstable-podf3f45c6527e0185e3d10593cfa18bc96.slice.
Apr 30 00:16:30.120021 kubelet[2192]: I0430 00:16:30.119498 2192 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.120021 kubelet[2192]: E0430 00:16:30.119567 2192 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-4-a907cca219\" not found" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.120428 kubelet[2192]: E0430 00:16:30.120398 2192 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://134.199.212.184:6443/api/v1/nodes\": dial tcp 134.199.212.184:6443: connect: connection refused" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.124737 systemd[1]: Created slice kubepods-burstable-pod24e4411030894bdc6b5db32fce3b5e77.slice - libcontainer container kubepods-burstable-pod24e4411030894bdc6b5db32fce3b5e77.slice.
Apr 30 00:16:30.127748 kubelet[2192]: E0430 00:16:30.127717 2192 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-4-a907cca219\" not found" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.142843 kubelet[2192]: E0430 00:16:30.142787 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.212.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-4-a907cca219?timeout=10s\": dial tcp 134.199.212.184:6443: connect: connection refused" interval="400ms"
Apr 30 00:16:30.247898 kubelet[2192]: I0430 00:16:30.247843 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f3f45c6527e0185e3d10593cfa18bc96-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" (UID: \"f3f45c6527e0185e3d10593cfa18bc96\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.248272 kubelet[2192]: I0430 00:16:30.248243 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24e4411030894bdc6b5db32fce3b5e77-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.3-4-a907cca219\" (UID: \"24e4411030894bdc6b5db32fce3b5e77\") " pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.248421 kubelet[2192]: I0430 00:16:30.248399 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3f45c6527e0185e3d10593cfa18bc96-ca-certs\") pod \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" (UID: \"f3f45c6527e0185e3d10593cfa18bc96\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.248757 kubelet[2192]: I0430 00:16:30.248536 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24e4411030894bdc6b5db32fce3b5e77-k8s-certs\") pod \"kube-apiserver-ci-4152.2.3-4-a907cca219\" (UID: \"24e4411030894bdc6b5db32fce3b5e77\") " pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.248757 kubelet[2192]: I0430 00:16:30.248571 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3f45c6527e0185e3d10593cfa18bc96-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" (UID: \"f3f45c6527e0185e3d10593cfa18bc96\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.248757 kubelet[2192]: I0430 00:16:30.248605 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3f45c6527e0185e3d10593cfa18bc96-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" (UID: \"f3f45c6527e0185e3d10593cfa18bc96\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.248757 kubelet[2192]: I0430 00:16:30.248639 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3f45c6527e0185e3d10593cfa18bc96-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" (UID: \"f3f45c6527e0185e3d10593cfa18bc96\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.248757 kubelet[2192]: I0430 00:16:30.248661 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e65067a7411e04a0a009d4dae5df8bc-kubeconfig\") pod \"kube-scheduler-ci-4152.2.3-4-a907cca219\" (UID: \"7e65067a7411e04a0a009d4dae5df8bc\") " pod="kube-system/kube-scheduler-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.248942 kubelet[2192]: I0430 00:16:30.248683 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24e4411030894bdc6b5db32fce3b5e77-ca-certs\") pod \"kube-apiserver-ci-4152.2.3-4-a907cca219\" (UID: \"24e4411030894bdc6b5db32fce3b5e77\") " pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.344545 kubelet[2192]: I0430 00:16:30.344456 2192 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.344947 kubelet[2192]: E0430 00:16:30.344900 2192 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://134.199.212.184:6443/api/v1/nodes\": dial tcp 134.199.212.184:6443: connect: connection refused" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.404253 kubelet[2192]: E0430 00:16:30.404058 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:30.404957 containerd[1481]: time="2025-04-30T00:16:30.404900840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.3-4-a907cca219,Uid:7e65067a7411e04a0a009d4dae5df8bc,Namespace:kube-system,Attempt:0,}"
Apr 30 00:16:30.406901 systemd-resolved[1326]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Apr 30 00:16:30.420373 kubelet[2192]: E0430 00:16:30.420331 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:30.422480 containerd[1481]: time="2025-04-30T00:16:30.422418785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.3-4-a907cca219,Uid:f3f45c6527e0185e3d10593cfa18bc96,Namespace:kube-system,Attempt:0,}"
Apr 30 00:16:30.429771 kubelet[2192]: E0430 00:16:30.429706 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:30.434113 containerd[1481]: time="2025-04-30T00:16:30.434045326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.3-4-a907cca219,Uid:24e4411030894bdc6b5db32fce3b5e77,Namespace:kube-system,Attempt:0,}"
Apr 30 00:16:30.544161 kubelet[2192]: E0430 00:16:30.544045 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.212.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-4-a907cca219?timeout=10s\": dial tcp 134.199.212.184:6443: connect: connection refused" interval="800ms"
Apr 30 00:16:30.746917 kubelet[2192]: I0430 00:16:30.746759 2192 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.747904 kubelet[2192]: E0430 00:16:30.747868 2192 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://134.199.212.184:6443/api/v1/nodes\": dial tcp 134.199.212.184:6443: connect: connection refused" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:30.832110 kubelet[2192]: W0430 00:16:30.832022 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.212.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.212.184:6443: connect: connection refused
Apr 30 00:16:30.832332 kubelet[2192]: E0430 00:16:30.832149 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.212.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.212.184:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:16:30.885676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount199154549.mount: Deactivated successfully.
Apr 30 00:16:30.892175 containerd[1481]: time="2025-04-30T00:16:30.890645690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:16:30.892175 containerd[1481]: time="2025-04-30T00:16:30.891591816Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:16:30.893048 containerd[1481]: time="2025-04-30T00:16:30.893009875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:16:30.893201 containerd[1481]: time="2025-04-30T00:16:30.893185010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 00:16:30.893619 containerd[1481]: time="2025-04-30T00:16:30.893589369Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:16:30.894871 containerd[1481]: time="2025-04-30T00:16:30.894828403Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:16:30.895505 containerd[1481]: time="2025-04-30T00:16:30.895428881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:16:30.898178 containerd[1481]: time="2025-04-30T00:16:30.897261385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:16:30.900365 
containerd[1481]: time="2025-04-30T00:16:30.900173105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.648811ms" Apr 30 00:16:30.904793 containerd[1481]: time="2025-04-30T00:16:30.904493105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 470.302326ms" Apr 30 00:16:30.904793 containerd[1481]: time="2025-04-30T00:16:30.904727799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.697805ms" Apr 30 00:16:31.015465 kubelet[2192]: W0430 00:16:31.014702 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.212.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-4-a907cca219&limit=500&resourceVersion=0": dial tcp 134.199.212.184:6443: connect: connection refused Apr 30 00:16:31.015465 kubelet[2192]: E0430 00:16:31.014805 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.212.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-4-a907cca219&limit=500&resourceVersion=0\": dial tcp 134.199.212.184:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:16:31.094218 
containerd[1481]: time="2025-04-30T00:16:31.093860949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:16:31.094218 containerd[1481]: time="2025-04-30T00:16:31.093931634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:16:31.094218 containerd[1481]: time="2025-04-30T00:16:31.093943791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:31.094218 containerd[1481]: time="2025-04-30T00:16:31.094094140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:31.095407 containerd[1481]: time="2025-04-30T00:16:31.090651165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:16:31.095407 containerd[1481]: time="2025-04-30T00:16:31.095198686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:16:31.095407 containerd[1481]: time="2025-04-30T00:16:31.095220208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:31.095407 containerd[1481]: time="2025-04-30T00:16:31.095333988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:31.102742 containerd[1481]: time="2025-04-30T00:16:31.102641177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:16:31.104402 containerd[1481]: time="2025-04-30T00:16:31.104037792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:16:31.104402 containerd[1481]: time="2025-04-30T00:16:31.104076897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:31.104402 containerd[1481]: time="2025-04-30T00:16:31.104224178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:31.127062 systemd[1]: Started cri-containerd-5673c308cc6d467dea20ce4790ce6a524d3ffbe30ac4c9559a61cbdd21562da9.scope - libcontainer container 5673c308cc6d467dea20ce4790ce6a524d3ffbe30ac4c9559a61cbdd21562da9. Apr 30 00:16:31.142405 systemd[1]: Started cri-containerd-b01a92a9bd22de577d43499afe1c5acf0da408e10c1482bce37652df1fdd679b.scope - libcontainer container b01a92a9bd22de577d43499afe1c5acf0da408e10c1482bce37652df1fdd679b. Apr 30 00:16:31.146518 systemd[1]: Started cri-containerd-9b1cec74eade94de1a9bfb31afa2b2301a44d82434090dee4b170ce3c8ea013a.scope - libcontainer container 9b1cec74eade94de1a9bfb31afa2b2301a44d82434090dee4b170ce3c8ea013a. 
Apr 30 00:16:31.152632 kubelet[2192]: W0430 00:16:31.152522 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.212.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.212.184:6443: connect: connection refused Apr 30 00:16:31.152632 kubelet[2192]: E0430 00:16:31.152594 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.212.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.212.184:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:16:31.214035 containerd[1481]: time="2025-04-30T00:16:31.213932807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.3-4-a907cca219,Uid:24e4411030894bdc6b5db32fce3b5e77,Namespace:kube-system,Attempt:0,} returns sandbox id \"b01a92a9bd22de577d43499afe1c5acf0da408e10c1482bce37652df1fdd679b\"" Apr 30 00:16:31.216309 kubelet[2192]: E0430 00:16:31.216253 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:31.223471 containerd[1481]: time="2025-04-30T00:16:31.223267785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.3-4-a907cca219,Uid:f3f45c6527e0185e3d10593cfa18bc96,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b1cec74eade94de1a9bfb31afa2b2301a44d82434090dee4b170ce3c8ea013a\"" Apr 30 00:16:31.224259 containerd[1481]: time="2025-04-30T00:16:31.224122618Z" level=info msg="CreateContainer within sandbox \"b01a92a9bd22de577d43499afe1c5acf0da408e10c1482bce37652df1fdd679b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:16:31.225269 kubelet[2192]: E0430 00:16:31.225231 
2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:31.230202 containerd[1481]: time="2025-04-30T00:16:31.228407566Z" level=info msg="CreateContainer within sandbox \"9b1cec74eade94de1a9bfb31afa2b2301a44d82434090dee4b170ce3c8ea013a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:16:31.237764 containerd[1481]: time="2025-04-30T00:16:31.237723984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.3-4-a907cca219,Uid:7e65067a7411e04a0a009d4dae5df8bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5673c308cc6d467dea20ce4790ce6a524d3ffbe30ac4c9559a61cbdd21562da9\"" Apr 30 00:16:31.238950 kubelet[2192]: E0430 00:16:31.238926 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:31.241997 containerd[1481]: time="2025-04-30T00:16:31.241962165Z" level=info msg="CreateContainer within sandbox \"5673c308cc6d467dea20ce4790ce6a524d3ffbe30ac4c9559a61cbdd21562da9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:16:31.249784 containerd[1481]: time="2025-04-30T00:16:31.249735099Z" level=info msg="CreateContainer within sandbox \"9b1cec74eade94de1a9bfb31afa2b2301a44d82434090dee4b170ce3c8ea013a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"385aa5163242d1925019b400fdccf722aa4cdd37bd82dfdd059d74bbcbdd468b\"" Apr 30 00:16:31.250670 containerd[1481]: time="2025-04-30T00:16:31.250284203Z" level=info msg="CreateContainer within sandbox \"b01a92a9bd22de577d43499afe1c5acf0da408e10c1482bce37652df1fdd679b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa40e202eeeee66c141aa4853c76da471168b0810383c4aea8b75170e2b5cce2\"" 
Apr 30 00:16:31.252370 containerd[1481]: time="2025-04-30T00:16:31.250999190Z" level=info msg="StartContainer for \"aa40e202eeeee66c141aa4853c76da471168b0810383c4aea8b75170e2b5cce2\"" Apr 30 00:16:31.252598 containerd[1481]: time="2025-04-30T00:16:31.252572849Z" level=info msg="StartContainer for \"385aa5163242d1925019b400fdccf722aa4cdd37bd82dfdd059d74bbcbdd468b\"" Apr 30 00:16:31.263560 containerd[1481]: time="2025-04-30T00:16:31.263512691Z" level=info msg="CreateContainer within sandbox \"5673c308cc6d467dea20ce4790ce6a524d3ffbe30ac4c9559a61cbdd21562da9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0995c2f6579f24955d374bcbc51e24835cc2e5a4df8d7989a2da74d27441e6f\"" Apr 30 00:16:31.264503 containerd[1481]: time="2025-04-30T00:16:31.264475202Z" level=info msg="StartContainer for \"f0995c2f6579f24955d374bcbc51e24835cc2e5a4df8d7989a2da74d27441e6f\"" Apr 30 00:16:31.294775 systemd[1]: Started cri-containerd-385aa5163242d1925019b400fdccf722aa4cdd37bd82dfdd059d74bbcbdd468b.scope - libcontainer container 385aa5163242d1925019b400fdccf722aa4cdd37bd82dfdd059d74bbcbdd468b. Apr 30 00:16:31.307843 systemd[1]: Started cri-containerd-aa40e202eeeee66c141aa4853c76da471168b0810383c4aea8b75170e2b5cce2.scope - libcontainer container aa40e202eeeee66c141aa4853c76da471168b0810383c4aea8b75170e2b5cce2. Apr 30 00:16:31.337625 systemd[1]: Started cri-containerd-f0995c2f6579f24955d374bcbc51e24835cc2e5a4df8d7989a2da74d27441e6f.scope - libcontainer container f0995c2f6579f24955d374bcbc51e24835cc2e5a4df8d7989a2da74d27441e6f. 
Apr 30 00:16:31.345295 kubelet[2192]: E0430 00:16:31.345213 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.212.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-4-a907cca219?timeout=10s\": dial tcp 134.199.212.184:6443: connect: connection refused" interval="1.6s" Apr 30 00:16:31.356293 kubelet[2192]: W0430 00:16:31.356191 2192 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.212.184:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.212.184:6443: connect: connection refused Apr 30 00:16:31.356293 kubelet[2192]: E0430 00:16:31.356258 2192 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://134.199.212.184:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.212.184:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:16:31.390412 containerd[1481]: time="2025-04-30T00:16:31.389464014Z" level=info msg="StartContainer for \"aa40e202eeeee66c141aa4853c76da471168b0810383c4aea8b75170e2b5cce2\" returns successfully" Apr 30 00:16:31.396220 containerd[1481]: time="2025-04-30T00:16:31.396152231Z" level=info msg="StartContainer for \"385aa5163242d1925019b400fdccf722aa4cdd37bd82dfdd059d74bbcbdd468b\" returns successfully" Apr 30 00:16:31.445102 containerd[1481]: time="2025-04-30T00:16:31.444354140Z" level=info msg="StartContainer for \"f0995c2f6579f24955d374bcbc51e24835cc2e5a4df8d7989a2da74d27441e6f\" returns successfully" Apr 30 00:16:31.551635 kubelet[2192]: I0430 00:16:31.550662 2192 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.3-4-a907cca219" Apr 30 00:16:31.553189 kubelet[2192]: E0430 00:16:31.551116 2192 kubelet_node_status.go:108] "Unable to register node with API server" err="Post 
\"https://134.199.212.184:6443/api/v1/nodes\": dial tcp 134.199.212.184:6443: connect: connection refused" node="ci-4152.2.3-4-a907cca219" Apr 30 00:16:31.990379 kubelet[2192]: E0430 00:16:31.990236 2192 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-4-a907cca219\" not found" node="ci-4152.2.3-4-a907cca219" Apr 30 00:16:31.990617 kubelet[2192]: E0430 00:16:31.990435 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:31.994563 kubelet[2192]: E0430 00:16:31.994532 2192 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-4-a907cca219\" not found" node="ci-4152.2.3-4-a907cca219" Apr 30 00:16:31.994704 kubelet[2192]: E0430 00:16:31.994672 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:31.999699 kubelet[2192]: E0430 00:16:31.999669 2192 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-4-a907cca219\" not found" node="ci-4152.2.3-4-a907cca219" Apr 30 00:16:31.999848 kubelet[2192]: E0430 00:16:31.999835 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:33.003513 kubelet[2192]: E0430 00:16:33.003476 2192 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-4-a907cca219\" not found" node="ci-4152.2.3-4-a907cca219" Apr 30 00:16:33.004162 kubelet[2192]: E0430 00:16:33.003617 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:33.004162 kubelet[2192]: E0430 00:16:33.003929 2192 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.3-4-a907cca219\" not found" node="ci-4152.2.3-4-a907cca219" Apr 30 00:16:33.004162 kubelet[2192]: E0430 00:16:33.004022 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:33.156243 kubelet[2192]: I0430 00:16:33.154518 2192 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.3-4-a907cca219" Apr 30 00:16:33.343917 kubelet[2192]: E0430 00:16:33.343766 2192 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.3-4-a907cca219\" not found" node="ci-4152.2.3-4-a907cca219" Apr 30 00:16:33.443624 kubelet[2192]: I0430 00:16:33.443374 2192 kubelet_node_status.go:79] "Successfully registered node" node="ci-4152.2.3-4-a907cca219" Apr 30 00:16:33.443624 kubelet[2192]: E0430 00:16:33.443427 2192 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4152.2.3-4-a907cca219\": node \"ci-4152.2.3-4-a907cca219\" not found" Apr 30 00:16:33.448876 kubelet[2192]: E0430 00:16:33.448828 2192 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.3-4-a907cca219\" not found" Apr 30 00:16:33.549635 kubelet[2192]: E0430 00:16:33.549582 2192 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.3-4-a907cca219\" not found" Apr 30 00:16:33.650688 kubelet[2192]: E0430 00:16:33.650508 2192 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.3-4-a907cca219\" not found" Apr 30 00:16:33.750948 kubelet[2192]: E0430 
00:16:33.750890 2192 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.3-4-a907cca219\" not found" Apr 30 00:16:33.904011 kubelet[2192]: I0430 00:16:33.903584 2192 apiserver.go:52] "Watching apiserver" Apr 30 00:16:33.935572 kubelet[2192]: I0430 00:16:33.935479 2192 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219" Apr 30 00:16:33.943609 kubelet[2192]: E0430 00:16:33.943102 2192 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4152.2.3-4-a907cca219\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219" Apr 30 00:16:33.943609 kubelet[2192]: I0430 00:16:33.943169 2192 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219" Apr 30 00:16:33.945967 kubelet[2192]: E0430 00:16:33.945644 2192 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219" Apr 30 00:16:33.945967 kubelet[2192]: I0430 00:16:33.945686 2192 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152.2.3-4-a907cca219" Apr 30 00:16:33.946776 kubelet[2192]: I0430 00:16:33.946627 2192 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:16:33.948882 kubelet[2192]: E0430 00:16:33.948853 2192 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4152.2.3-4-a907cca219\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4152.2.3-4-a907cca219" Apr 30 00:16:34.579808 kubelet[2192]: I0430 00:16:34.579533 2192 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219" Apr 30 00:16:34.587609 kubelet[2192]: W0430 00:16:34.587562 2192 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 00:16:34.587939 kubelet[2192]: E0430 00:16:34.587891 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:35.007443 kubelet[2192]: E0430 00:16:35.007268 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:35.317401 kubelet[2192]: I0430 00:16:35.317206 2192 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152.2.3-4-a907cca219" Apr 30 00:16:35.329004 kubelet[2192]: W0430 00:16:35.328168 2192 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 00:16:35.329004 kubelet[2192]: E0430 00:16:35.328508 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:35.874002 systemd[1]: Reloading requested from client PID 2469 ('systemctl') (unit session-7.scope)... Apr 30 00:16:35.874535 systemd[1]: Reloading... 
Apr 30 00:16:35.940835 kubelet[2192]: I0430 00:16:35.940789 2192 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219" Apr 30 00:16:35.950192 kubelet[2192]: W0430 00:16:35.949655 2192 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 00:16:35.950631 kubelet[2192]: E0430 00:16:35.950585 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:36.012159 kubelet[2192]: E0430 00:16:36.010013 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:36.012159 kubelet[2192]: E0430 00:16:36.010487 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:36.033213 zram_generator::config[2514]: No configuration found. Apr 30 00:16:36.181018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:16:36.295104 systemd[1]: Reloading finished in 419 ms. Apr 30 00:16:36.342798 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:16:36.355880 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:16:36.356116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:16:36.356196 systemd[1]: kubelet.service: Consumed 1.013s CPU time, 119.4M memory peak, 0B memory swap peak. 
Apr 30 00:16:36.361647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:16:36.503207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:16:36.518510 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:16:36.595647 kubelet[2559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:16:36.596409 kubelet[2559]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 00:16:36.596409 kubelet[2559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:16:36.596409 kubelet[2559]: I0430 00:16:36.596148 2559 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:16:36.604528 kubelet[2559]: I0430 00:16:36.604479 2559 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 00:16:36.606009 kubelet[2559]: I0430 00:16:36.604713 2559 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:16:36.606009 kubelet[2559]: I0430 00:16:36.605086 2559 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 00:16:36.608179 kubelet[2559]: I0430 00:16:36.608149 2559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 00:16:36.619818 kubelet[2559]: I0430 00:16:36.619576 2559 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:16:36.627285 kubelet[2559]: E0430 00:16:36.627227 2559 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:16:36.627501 kubelet[2559]: I0430 00:16:36.627487 2559 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:16:36.632039 kubelet[2559]: I0430 00:16:36.632000 2559 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:16:36.632831 kubelet[2559]: I0430 00:16:36.632495 2559 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:16:36.632831 kubelet[2559]: I0430 00:16:36.632536 2559 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4152.2.3-4-a907cca219","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:16:36.632831 kubelet[2559]: I0430 00:16:36.632743 2559 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:16:36.632831 kubelet[2559]: I0430 00:16:36.632752 2559 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 00:16:36.633072 kubelet[2559]: I0430 00:16:36.632795 2559 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:16:36.633366 kubelet[2559]: I0430 00:16:36.633344 2559 
kubelet.go:446] "Attempting to sync node with API server" Apr 30 00:16:36.633617 kubelet[2559]: I0430 00:16:36.633477 2559 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:16:36.633617 kubelet[2559]: I0430 00:16:36.633516 2559 kubelet.go:352] "Adding apiserver pod source" Apr 30 00:16:36.633617 kubelet[2559]: I0430 00:16:36.633534 2559 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:16:36.639824 kubelet[2559]: I0430 00:16:36.638967 2559 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:16:36.641914 kubelet[2559]: I0430 00:16:36.640492 2559 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:16:36.641914 kubelet[2559]: I0430 00:16:36.641173 2559 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 00:16:36.641914 kubelet[2559]: I0430 00:16:36.641221 2559 server.go:1287] "Started kubelet" Apr 30 00:16:36.650104 kubelet[2559]: I0430 00:16:36.650025 2559 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:16:36.653258 kubelet[2559]: I0430 00:16:36.653229 2559 server.go:490] "Adding debug handlers to kubelet server" Apr 30 00:16:36.665479 kubelet[2559]: I0430 00:16:36.665449 2559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:16:36.669612 kubelet[2559]: I0430 00:16:36.665920 2559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:16:36.669769 kubelet[2559]: I0430 00:16:36.669252 2559 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:16:36.670298 kubelet[2559]: I0430 00:16:36.670275 2559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:16:36.673183 kubelet[2559]: 
I0430 00:16:36.672696 2559 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 00:16:36.673183 kubelet[2559]: E0430 00:16:36.672993 2559 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.3-4-a907cca219\" not found" Apr 30 00:16:36.680091 kubelet[2559]: I0430 00:16:36.680053 2559 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:16:36.680554 kubelet[2559]: I0430 00:16:36.680412 2559 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:16:36.685093 kubelet[2559]: I0430 00:16:36.684955 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:16:36.686345 kubelet[2559]: I0430 00:16:36.686319 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:16:36.686493 kubelet[2559]: I0430 00:16:36.686483 2559 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 00:16:36.686554 kubelet[2559]: I0430 00:16:36.686545 2559 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 00:16:36.686609 kubelet[2559]: I0430 00:16:36.686602 2559 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 00:16:36.686706 kubelet[2559]: E0430 00:16:36.686691 2559 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:16:36.691288 kubelet[2559]: I0430 00:16:36.690880 2559 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:16:36.691288 kubelet[2559]: I0430 00:16:36.691017 2559 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:16:36.700092 kubelet[2559]: I0430 00:16:36.698196 2559 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:16:36.701546 kubelet[2559]: E0430 00:16:36.701521 2559 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 00:16:36.763042 kubelet[2559]: I0430 00:16:36.761383 2559 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 30 00:16:36.763042 kubelet[2559]: I0430 00:16:36.761409 2559 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 30 00:16:36.763042 kubelet[2559]: I0430 00:16:36.761437 2559 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:16:36.763042 kubelet[2559]: I0430 00:16:36.761669 2559 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 00:16:36.763042 kubelet[2559]: I0430 00:16:36.761680 2559 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 00:16:36.763042 kubelet[2559]: I0430 00:16:36.761701 2559 policy_none.go:49] "None policy: Start"
Apr 30 00:16:36.763042 kubelet[2559]: I0430 00:16:36.761717 2559 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 00:16:36.763042 kubelet[2559]: I0430 00:16:36.761728 2559 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:16:36.763042 kubelet[2559]: I0430 00:16:36.761879 2559 state_mem.go:75] "Updated machine memory state"
Apr 30 00:16:36.772964 kubelet[2559]: I0430 00:16:36.772930 2559 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:16:36.773445 kubelet[2559]: I0430 00:16:36.773420 2559 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 00:16:36.773729 kubelet[2559]: I0430 00:16:36.773678 2559 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:16:36.776256 kubelet[2559]: I0430 00:16:36.776229 2559 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:16:36.786068 kubelet[2559]: E0430 00:16:36.785931 2559 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 30 00:16:36.797520 kubelet[2559]: I0430 00:16:36.797476 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.802167 kubelet[2559]: I0430 00:16:36.800597 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.807162 kubelet[2559]: I0430 00:16:36.804542 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.816999 kubelet[2559]: W0430 00:16:36.816960 2559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:16:36.817203 kubelet[2559]: E0430 00:16:36.817028 2559 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" already exists" pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.817465 kubelet[2559]: W0430 00:16:36.817449 2559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:16:36.817546 kubelet[2559]: E0430 00:16:36.817525 2559 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4152.2.3-4-a907cca219\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.817612 kubelet[2559]: W0430 00:16:36.817598 2559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:16:36.817708 kubelet[2559]: E0430 00:16:36.817640 2559 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4152.2.3-4-a907cca219\" already exists" pod="kube-system/kube-scheduler-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.877689 kubelet[2559]: I0430 00:16:36.877647 2559 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.880006 sudo[2592]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 30 00:16:36.881217 sudo[2592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 30 00:16:36.882319 kubelet[2559]: I0430 00:16:36.882234 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24e4411030894bdc6b5db32fce3b5e77-k8s-certs\") pod \"kube-apiserver-ci-4152.2.3-4-a907cca219\" (UID: \"24e4411030894bdc6b5db32fce3b5e77\") " pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.882621 kubelet[2559]: I0430 00:16:36.882498 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24e4411030894bdc6b5db32fce3b5e77-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.3-4-a907cca219\" (UID: \"24e4411030894bdc6b5db32fce3b5e77\") " pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.882621 kubelet[2559]: I0430 00:16:36.882533 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f3f45c6527e0185e3d10593cfa18bc96-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" (UID: \"f3f45c6527e0185e3d10593cfa18bc96\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.882621 kubelet[2559]: I0430 00:16:36.882569 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3f45c6527e0185e3d10593cfa18bc96-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" (UID: \"f3f45c6527e0185e3d10593cfa18bc96\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.882621 kubelet[2559]: I0430 00:16:36.882602 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e65067a7411e04a0a009d4dae5df8bc-kubeconfig\") pod \"kube-scheduler-ci-4152.2.3-4-a907cca219\" (UID: \"7e65067a7411e04a0a009d4dae5df8bc\") " pod="kube-system/kube-scheduler-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.883822 kubelet[2559]: I0430 00:16:36.883344 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24e4411030894bdc6b5db32fce3b5e77-ca-certs\") pod \"kube-apiserver-ci-4152.2.3-4-a907cca219\" (UID: \"24e4411030894bdc6b5db32fce3b5e77\") " pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.883822 kubelet[2559]: I0430 00:16:36.883414 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3f45c6527e0185e3d10593cfa18bc96-ca-certs\") pod \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" (UID: \"f3f45c6527e0185e3d10593cfa18bc96\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.883822 kubelet[2559]: I0430 00:16:36.883434 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3f45c6527e0185e3d10593cfa18bc96-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" (UID: \"f3f45c6527e0185e3d10593cfa18bc96\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.883822 kubelet[2559]: I0430 00:16:36.883471 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3f45c6527e0185e3d10593cfa18bc96-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" (UID: \"f3f45c6527e0185e3d10593cfa18bc96\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.888197 kubelet[2559]: I0430 00:16:36.887622 2559 kubelet_node_status.go:125] "Node was previously registered" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:36.888197 kubelet[2559]: I0430 00:16:36.887703 2559 kubelet_node_status.go:79] "Successfully registered node" node="ci-4152.2.3-4-a907cca219"
Apr 30 00:16:37.118119 kubelet[2559]: E0430 00:16:37.117932 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:37.118893 kubelet[2559]: E0430 00:16:37.118846 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:37.120633 kubelet[2559]: E0430 00:16:37.119258 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:37.492684 sudo[2592]: pam_unix(sudo:session): session closed for user root
Apr 30 00:16:37.637067 kubelet[2559]: I0430 00:16:37.636837 2559 apiserver.go:52] "Watching apiserver"
Apr 30 00:16:37.681291 kubelet[2559]: I0430 00:16:37.681216 2559 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:16:37.711925 kubelet[2559]: I0430 00:16:37.711759 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.3-4-a907cca219" podStartSLOduration=2.7117373909999998 podStartE2EDuration="2.711737391s" podCreationTimestamp="2025-04-30 00:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:16:37.701597905 +0000 UTC m=+1.177788660" watchObservedRunningTime="2025-04-30 00:16:37.711737391 +0000 UTC m=+1.187928145"
Apr 30 00:16:37.724930 kubelet[2559]: I0430 00:16:37.724315 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.3-4-a907cca219" podStartSLOduration=3.724287044 podStartE2EDuration="3.724287044s" podCreationTimestamp="2025-04-30 00:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:16:37.713064074 +0000 UTC m=+1.189254829" watchObservedRunningTime="2025-04-30 00:16:37.724287044 +0000 UTC m=+1.200477800"
Apr 30 00:16:37.734863 kubelet[2559]: E0430 00:16:37.734827 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:37.735992 kubelet[2559]: E0430 00:16:37.735856 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:37.736253 kubelet[2559]: I0430 00:16:37.736059 2559 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:37.748158 kubelet[2559]: I0430 00:16:37.747476 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219" podStartSLOduration=2.747447932 podStartE2EDuration="2.747447932s" podCreationTimestamp="2025-04-30 00:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:16:37.72645316 +0000 UTC m=+1.202643913" watchObservedRunningTime="2025-04-30 00:16:37.747447932 +0000 UTC m=+1.223638688"
Apr 30 00:16:37.752883 kubelet[2559]: W0430 00:16:37.751994 2559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Apr 30 00:16:37.752883 kubelet[2559]: E0430 00:16:37.752074 2559 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4152.2.3-4-a907cca219\" already exists" pod="kube-system/kube-controller-manager-ci-4152.2.3-4-a907cca219"
Apr 30 00:16:37.752883 kubelet[2559]: E0430 00:16:37.752315 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:38.739148 kubelet[2559]: E0430 00:16:38.737873 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:38.740437 kubelet[2559]: E0430 00:16:38.740370 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:38.740960 kubelet[2559]: E0430 00:16:38.740939 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:39.114918 sudo[1663]: pam_unix(sudo:session): session closed for user root
Apr 30 00:16:39.118889 sshd[1662]: Connection closed by 147.75.109.163 port 49550
Apr 30 00:16:39.121053 sshd-session[1660]: pam_unix(sshd:session): session closed for user core
Apr 30 00:16:39.124272 systemd[1]: sshd@6-134.199.212.184:22-147.75.109.163:49550.service: Deactivated successfully.
Apr 30 00:16:39.126814 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 00:16:39.127040 systemd[1]: session-7.scope: Consumed 4.989s CPU time, 143.8M memory peak, 0B memory swap peak.
Apr 30 00:16:39.128945 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit.
Apr 30 00:16:39.130763 systemd-logind[1455]: Removed session 7.
Apr 30 00:16:39.740243 kubelet[2559]: E0430 00:16:39.740073 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:40.009057 kubelet[2559]: I0430 00:16:40.008903 2559 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 00:16:40.009839 containerd[1481]: time="2025-04-30T00:16:40.009758321Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 00:16:40.010980 kubelet[2559]: I0430 00:16:40.010210 2559 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 00:16:40.669036 systemd[1]: Created slice kubepods-besteffort-pod544e4d98_bdb9_47a1_9c35_ba58ff06fb25.slice - libcontainer container kubepods-besteffort-pod544e4d98_bdb9_47a1_9c35_ba58ff06fb25.slice.
Apr 30 00:16:40.683519 kubelet[2559]: W0430 00:16:40.683323 2559 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152.2.3-4-a907cca219" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.3-4-a907cca219' and this object
Apr 30 00:16:40.683519 kubelet[2559]: E0430 00:16:40.683371 2559 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4152.2.3-4-a907cca219\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.2.3-4-a907cca219' and this object" logger="UnhandledError"
Apr 30 00:16:40.683519 kubelet[2559]: W0430 00:16:40.683324 2559 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152.2.3-4-a907cca219" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.3-4-a907cca219' and this object
Apr 30 00:16:40.683519 kubelet[2559]: E0430 00:16:40.683399 2559 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4152.2.3-4-a907cca219\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.2.3-4-a907cca219' and this object" logger="UnhandledError"
Apr 30 00:16:40.683519 kubelet[2559]: W0430 00:16:40.683426 2559 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152.2.3-4-a907cca219" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.3-4-a907cca219' and this object
Apr 30 00:16:40.683846 kubelet[2559]: E0430 00:16:40.683462 2559 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4152.2.3-4-a907cca219\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.2.3-4-a907cca219' and this object" logger="UnhandledError"
Apr 30 00:16:40.690866 systemd[1]: Created slice kubepods-burstable-pod9abf5968_316e_4629_8d2d_db7b79bc7cb5.slice - libcontainer container kubepods-burstable-pod9abf5968_316e_4629_8d2d_db7b79bc7cb5.slice.
Apr 30 00:16:40.700162 kubelet[2559]: E0430 00:16:40.698607 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:40.714922 kubelet[2559]: I0430 00:16:40.713222 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-cgroup\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.714922 kubelet[2559]: I0430 00:16:40.713263 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-lib-modules\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.714922 kubelet[2559]: I0430 00:16:40.713297 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-config-path\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.714922 kubelet[2559]: I0430 00:16:40.713317 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-hostproc\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.714922 kubelet[2559]: I0430 00:16:40.713334 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-etc-cni-netd\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.714922 kubelet[2559]: I0430 00:16:40.713350 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-host-proc-sys-net\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.715271 kubelet[2559]: I0430 00:16:40.713367 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9abf5968-316e-4629-8d2d-db7b79bc7cb5-hubble-tls\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.715271 kubelet[2559]: I0430 00:16:40.713389 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/544e4d98-bdb9-47a1-9c35-ba58ff06fb25-lib-modules\") pod \"kube-proxy-2c9rh\" (UID: \"544e4d98-bdb9-47a1-9c35-ba58ff06fb25\") " pod="kube-system/kube-proxy-2c9rh"
Apr 30 00:16:40.715271 kubelet[2559]: I0430 00:16:40.713414 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l98s7\" (UniqueName: \"kubernetes.io/projected/9abf5968-316e-4629-8d2d-db7b79bc7cb5-kube-api-access-l98s7\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.715271 kubelet[2559]: I0430 00:16:40.713454 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-run\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.715271 kubelet[2559]: I0430 00:16:40.713483 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-host-proc-sys-kernel\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.715402 kubelet[2559]: I0430 00:16:40.713508 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/544e4d98-bdb9-47a1-9c35-ba58ff06fb25-xtables-lock\") pod \"kube-proxy-2c9rh\" (UID: \"544e4d98-bdb9-47a1-9c35-ba58ff06fb25\") " pod="kube-system/kube-proxy-2c9rh"
Apr 30 00:16:40.715402 kubelet[2559]: I0430 00:16:40.713531 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-xtables-lock\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.715402 kubelet[2559]: I0430 00:16:40.713552 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9abf5968-316e-4629-8d2d-db7b79bc7cb5-clustermesh-secrets\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.715402 kubelet[2559]: I0430 00:16:40.713570 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/544e4d98-bdb9-47a1-9c35-ba58ff06fb25-kube-proxy\") pod \"kube-proxy-2c9rh\" (UID: \"544e4d98-bdb9-47a1-9c35-ba58ff06fb25\") " pod="kube-system/kube-proxy-2c9rh"
Apr 30 00:16:40.716545 kubelet[2559]: I0430 00:16:40.713587 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbzbl\" (UniqueName: \"kubernetes.io/projected/544e4d98-bdb9-47a1-9c35-ba58ff06fb25-kube-api-access-bbzbl\") pod \"kube-proxy-2c9rh\" (UID: \"544e4d98-bdb9-47a1-9c35-ba58ff06fb25\") " pod="kube-system/kube-proxy-2c9rh"
Apr 30 00:16:40.716545 kubelet[2559]: I0430 00:16:40.716333 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-bpf-maps\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.716545 kubelet[2559]: I0430 00:16:40.716469 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cni-path\") pod \"cilium-wj2lt\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " pod="kube-system/cilium-wj2lt"
Apr 30 00:16:40.983360 kubelet[2559]: E0430 00:16:40.981427 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:40.984476 containerd[1481]: time="2025-04-30T00:16:40.984200718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2c9rh,Uid:544e4d98-bdb9-47a1-9c35-ba58ff06fb25,Namespace:kube-system,Attempt:0,}"
Apr 30 00:16:41.018475 containerd[1481]: time="2025-04-30T00:16:41.018052856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:16:41.018475 containerd[1481]: time="2025-04-30T00:16:41.018151152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:16:41.018475 containerd[1481]: time="2025-04-30T00:16:41.018168236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:16:41.018475 containerd[1481]: time="2025-04-30T00:16:41.018268634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:16:41.080449 systemd[1]: Started cri-containerd-a63778018079d39d85e64fc1270b366040da983961c00515b1f943db8c0d1e0d.scope - libcontainer container a63778018079d39d85e64fc1270b366040da983961c00515b1f943db8c0d1e0d.
Apr 30 00:16:41.157059 systemd[1]: Created slice kubepods-besteffort-pod823adc51_02c6_4efc_89cd_d3f19977b86c.slice - libcontainer container kubepods-besteffort-pod823adc51_02c6_4efc_89cd_d3f19977b86c.slice.
Apr 30 00:16:41.179969 containerd[1481]: time="2025-04-30T00:16:41.179929014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2c9rh,Uid:544e4d98-bdb9-47a1-9c35-ba58ff06fb25,Namespace:kube-system,Attempt:0,} returns sandbox id \"a63778018079d39d85e64fc1270b366040da983961c00515b1f943db8c0d1e0d\""
Apr 30 00:16:41.182371 kubelet[2559]: E0430 00:16:41.181348 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:41.187469 containerd[1481]: time="2025-04-30T00:16:41.187416193Z" level=info msg="CreateContainer within sandbox \"a63778018079d39d85e64fc1270b366040da983961c00515b1f943db8c0d1e0d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 00:16:41.201678 containerd[1481]: time="2025-04-30T00:16:41.201626617Z" level=info msg="CreateContainer within sandbox \"a63778018079d39d85e64fc1270b366040da983961c00515b1f943db8c0d1e0d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63cc5f2bde56eb893f606e8d9763a7bb1049552cbfe6bd78469988e56f3dd3b3\""
Apr 30 00:16:41.203426 containerd[1481]: time="2025-04-30T00:16:41.203367775Z" level=info msg="StartContainer for \"63cc5f2bde56eb893f606e8d9763a7bb1049552cbfe6bd78469988e56f3dd3b3\""
Apr 30 00:16:41.221899 kubelet[2559]: I0430 00:16:41.221836 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/823adc51-02c6-4efc-89cd-d3f19977b86c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-b6xt6\" (UID: \"823adc51-02c6-4efc-89cd-d3f19977b86c\") " pod="kube-system/cilium-operator-6c4d7847fc-b6xt6"
Apr 30 00:16:41.221899 kubelet[2559]: I0430 00:16:41.221902 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6khj6\" (UniqueName: \"kubernetes.io/projected/823adc51-02c6-4efc-89cd-d3f19977b86c-kube-api-access-6khj6\") pod \"cilium-operator-6c4d7847fc-b6xt6\" (UID: \"823adc51-02c6-4efc-89cd-d3f19977b86c\") " pod="kube-system/cilium-operator-6c4d7847fc-b6xt6"
Apr 30 00:16:41.246410 systemd[1]: Started cri-containerd-63cc5f2bde56eb893f606e8d9763a7bb1049552cbfe6bd78469988e56f3dd3b3.scope - libcontainer container 63cc5f2bde56eb893f606e8d9763a7bb1049552cbfe6bd78469988e56f3dd3b3.
Apr 30 00:16:41.291541 containerd[1481]: time="2025-04-30T00:16:41.291489462Z" level=info msg="StartContainer for \"63cc5f2bde56eb893f606e8d9763a7bb1049552cbfe6bd78469988e56f3dd3b3\" returns successfully"
Apr 30 00:16:41.746599 kubelet[2559]: E0430 00:16:41.745770 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:41.762382 kubelet[2559]: I0430 00:16:41.761512 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2c9rh" podStartSLOduration=1.7614705370000001 podStartE2EDuration="1.761470537s" podCreationTimestamp="2025-04-30 00:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:16:41.761403332 +0000 UTC m=+5.237594088" watchObservedRunningTime="2025-04-30 00:16:41.761470537 +0000 UTC m=+5.237661294"
Apr 30 00:16:41.819589 kubelet[2559]: E0430 00:16:41.819516 2559 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Apr 30 00:16:41.819774 kubelet[2559]: E0430 00:16:41.819684 2559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9abf5968-316e-4629-8d2d-db7b79bc7cb5-clustermesh-secrets podName:9abf5968-316e-4629-8d2d-db7b79bc7cb5 nodeName:}" failed. No retries permitted until 2025-04-30 00:16:42.319646304 +0000 UTC m=+5.795837048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9abf5968-316e-4629-8d2d-db7b79bc7cb5-clustermesh-secrets") pod "cilium-wj2lt" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5") : failed to sync secret cache: timed out waiting for the condition
Apr 30 00:16:41.819774 kubelet[2559]: E0430 00:16:41.819733 2559 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:16:41.819774 kubelet[2559]: E0430 00:16:41.819777 2559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-config-path podName:9abf5968-316e-4629-8d2d-db7b79bc7cb5 nodeName:}" failed. No retries permitted until 2025-04-30 00:16:42.31976675 +0000 UTC m=+5.795957485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-config-path") pod "cilium-wj2lt" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5") : failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:16:42.064642 kubelet[2559]: E0430 00:16:42.064434 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:42.066775 containerd[1481]: time="2025-04-30T00:16:42.066250798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-b6xt6,Uid:823adc51-02c6-4efc-89cd-d3f19977b86c,Namespace:kube-system,Attempt:0,}"
Apr 30 00:16:42.100823 containerd[1481]: time="2025-04-30T00:16:42.100503135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:16:42.100823 containerd[1481]: time="2025-04-30T00:16:42.100612784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:16:42.100823 containerd[1481]: time="2025-04-30T00:16:42.100677129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:16:42.101437 containerd[1481]: time="2025-04-30T00:16:42.101016847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:16:42.141485 systemd[1]: Started cri-containerd-d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e.scope - libcontainer container d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e.
Apr 30 00:16:42.206485 containerd[1481]: time="2025-04-30T00:16:42.206436825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-b6xt6,Uid:823adc51-02c6-4efc-89cd-d3f19977b86c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\""
Apr 30 00:16:42.208924 kubelet[2559]: E0430 00:16:42.208333 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:16:42.211608 containerd[1481]: time="2025-04-30T00:16:42.211554859Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 30 00:16:42.248085 systemd-timesyncd[1341]: Contacted time server 23.150.41.122:123 (2.flatcar.pool.ntp.org).
Apr 30 00:16:42.248231 systemd-timesyncd[1341]: Initial clock synchronization to Wed 2025-04-30 00:16:42.189403 UTC.
Apr 30 00:16:42.497408 kubelet[2559]: E0430 00:16:42.497209 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:42.499232 containerd[1481]: time="2025-04-30T00:16:42.498421780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wj2lt,Uid:9abf5968-316e-4629-8d2d-db7b79bc7cb5,Namespace:kube-system,Attempt:0,}" Apr 30 00:16:42.535324 containerd[1481]: time="2025-04-30T00:16:42.535092088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:16:42.535324 containerd[1481]: time="2025-04-30T00:16:42.535232985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:16:42.535324 containerd[1481]: time="2025-04-30T00:16:42.535257392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:42.536216 containerd[1481]: time="2025-04-30T00:16:42.535715245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:16:42.562530 systemd[1]: Started cri-containerd-965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563.scope - libcontainer container 965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563. 
Apr 30 00:16:42.612518 containerd[1481]: time="2025-04-30T00:16:42.612459512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wj2lt,Uid:9abf5968-316e-4629-8d2d-db7b79bc7cb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\"" Apr 30 00:16:42.614494 kubelet[2559]: E0430 00:16:42.614446 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:43.917866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325381095.mount: Deactivated successfully. Apr 30 00:16:44.449377 containerd[1481]: time="2025-04-30T00:16:44.449324064Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:44.450872 containerd[1481]: time="2025-04-30T00:16:44.450818773Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 00:16:44.451819 containerd[1481]: time="2025-04-30T00:16:44.451787553Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:44.454096 containerd[1481]: time="2025-04-30T00:16:44.454024855Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.242418676s" Apr 30 00:16:44.454096 
containerd[1481]: time="2025-04-30T00:16:44.454100554Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 00:16:44.457767 containerd[1481]: time="2025-04-30T00:16:44.457720136Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 00:16:44.459673 containerd[1481]: time="2025-04-30T00:16:44.459630796Z" level=info msg="CreateContainer within sandbox \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:16:44.474982 containerd[1481]: time="2025-04-30T00:16:44.474828091Z" level=info msg="CreateContainer within sandbox \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\"" Apr 30 00:16:44.476182 containerd[1481]: time="2025-04-30T00:16:44.475772793Z" level=info msg="StartContainer for \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\"" Apr 30 00:16:44.521526 systemd[1]: Started cri-containerd-a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb.scope - libcontainer container a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb. 
Apr 30 00:16:44.555404 containerd[1481]: time="2025-04-30T00:16:44.555348612Z" level=info msg="StartContainer for \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\" returns successfully" Apr 30 00:16:44.759831 kubelet[2559]: E0430 00:16:44.759239 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:45.763614 kubelet[2559]: E0430 00:16:45.763513 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:47.390161 kubelet[2559]: E0430 00:16:47.389738 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:47.411487 kubelet[2559]: I0430 00:16:47.411273 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-b6xt6" podStartSLOduration=4.164908835 podStartE2EDuration="6.409947507s" podCreationTimestamp="2025-04-30 00:16:41 +0000 UTC" firstStartedPulling="2025-04-30 00:16:42.210255895 +0000 UTC m=+5.686446639" lastFinishedPulling="2025-04-30 00:16:44.455294564 +0000 UTC m=+7.931485311" observedRunningTime="2025-04-30 00:16:44.810018406 +0000 UTC m=+8.286209172" watchObservedRunningTime="2025-04-30 00:16:47.409947507 +0000 UTC m=+10.886138262" Apr 30 00:16:48.029052 kubelet[2559]: E0430 00:16:48.028858 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:49.521084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873343635.mount: Deactivated successfully. 
Apr 30 00:16:50.709759 kubelet[2559]: E0430 00:16:50.708536 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:50.802266 kubelet[2559]: E0430 00:16:50.802214 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:51.686212 update_engine[1456]: I20250430 00:16:51.686045 1456 update_attempter.cc:509] Updating boot flags... Apr 30 00:16:51.727375 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3001) Apr 30 00:16:51.807588 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3003) Apr 30 00:16:53.964884 containerd[1481]: time="2025-04-30T00:16:53.964711706Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:53.967168 containerd[1481]: time="2025-04-30T00:16:53.966862790Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 00:16:53.968166 containerd[1481]: time="2025-04-30T00:16:53.967358001Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:16:53.970153 containerd[1481]: time="2025-04-30T00:16:53.969836211Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.512062898s" Apr 30 00:16:53.970153 containerd[1481]: time="2025-04-30T00:16:53.969884034Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 00:16:53.974864 containerd[1481]: time="2025-04-30T00:16:53.974780251Z" level=info msg="CreateContainer within sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:16:54.051533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3517106590.mount: Deactivated successfully. Apr 30 00:16:54.055237 containerd[1481]: time="2025-04-30T00:16:54.054924547Z" level=info msg="CreateContainer within sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\"" Apr 30 00:16:54.058178 containerd[1481]: time="2025-04-30T00:16:54.057912493Z" level=info msg="StartContainer for \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\"" Apr 30 00:16:54.163418 systemd[1]: Started cri-containerd-ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd.scope - libcontainer container ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd. Apr 30 00:16:54.201359 containerd[1481]: time="2025-04-30T00:16:54.201309750Z" level=info msg="StartContainer for \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\" returns successfully" Apr 30 00:16:54.217485 systemd[1]: cri-containerd-ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd.scope: Deactivated successfully. 
Apr 30 00:16:54.283675 containerd[1481]: time="2025-04-30T00:16:54.269111558Z" level=info msg="shim disconnected" id=ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd namespace=k8s.io Apr 30 00:16:54.284261 containerd[1481]: time="2025-04-30T00:16:54.283976317Z" level=warning msg="cleaning up after shim disconnected" id=ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd namespace=k8s.io Apr 30 00:16:54.284261 containerd[1481]: time="2025-04-30T00:16:54.284010612Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:16:54.819832 kubelet[2559]: E0430 00:16:54.818077 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:54.822531 containerd[1481]: time="2025-04-30T00:16:54.822379547Z" level=info msg="CreateContainer within sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:16:54.852658 containerd[1481]: time="2025-04-30T00:16:54.852614364Z" level=info msg="CreateContainer within sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\"" Apr 30 00:16:54.853488 containerd[1481]: time="2025-04-30T00:16:54.853457207Z" level=info msg="StartContainer for \"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\"" Apr 30 00:16:54.887449 systemd[1]: Started cri-containerd-5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef.scope - libcontainer container 5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef. 
Apr 30 00:16:54.916386 containerd[1481]: time="2025-04-30T00:16:54.916335907Z" level=info msg="StartContainer for \"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\" returns successfully" Apr 30 00:16:54.933633 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:16:54.934262 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:16:54.934901 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:16:54.943734 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:16:54.944175 systemd[1]: cri-containerd-5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef.scope: Deactivated successfully. Apr 30 00:16:54.981581 containerd[1481]: time="2025-04-30T00:16:54.981500773Z" level=info msg="shim disconnected" id=5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef namespace=k8s.io Apr 30 00:16:54.981581 containerd[1481]: time="2025-04-30T00:16:54.981556886Z" level=warning msg="cleaning up after shim disconnected" id=5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef namespace=k8s.io Apr 30 00:16:54.981581 containerd[1481]: time="2025-04-30T00:16:54.981566287Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:16:54.994839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:16:55.048231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd-rootfs.mount: Deactivated successfully. 
Apr 30 00:16:55.846860 kubelet[2559]: E0430 00:16:55.845857 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:55.852522 containerd[1481]: time="2025-04-30T00:16:55.852323815Z" level=info msg="CreateContainer within sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:16:55.905364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1235521095.mount: Deactivated successfully. Apr 30 00:16:55.911930 containerd[1481]: time="2025-04-30T00:16:55.911873547Z" level=info msg="CreateContainer within sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\"" Apr 30 00:16:55.913759 containerd[1481]: time="2025-04-30T00:16:55.913313815Z" level=info msg="StartContainer for \"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\"" Apr 30 00:16:55.968510 systemd[1]: Started cri-containerd-542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225.scope - libcontainer container 542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225. Apr 30 00:16:56.017353 containerd[1481]: time="2025-04-30T00:16:56.017301858Z" level=info msg="StartContainer for \"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\" returns successfully" Apr 30 00:16:56.032328 systemd[1]: cri-containerd-542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225.scope: Deactivated successfully. Apr 30 00:16:56.060728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225-rootfs.mount: Deactivated successfully. 
Apr 30 00:16:56.063699 containerd[1481]: time="2025-04-30T00:16:56.063614598Z" level=info msg="shim disconnected" id=542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225 namespace=k8s.io Apr 30 00:16:56.063699 containerd[1481]: time="2025-04-30T00:16:56.063691265Z" level=warning msg="cleaning up after shim disconnected" id=542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225 namespace=k8s.io Apr 30 00:16:56.063699 containerd[1481]: time="2025-04-30T00:16:56.063704380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:16:56.850891 kubelet[2559]: E0430 00:16:56.850837 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:56.858962 containerd[1481]: time="2025-04-30T00:16:56.858480608Z" level=info msg="CreateContainer within sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:16:56.885754 containerd[1481]: time="2025-04-30T00:16:56.885495152Z" level=info msg="CreateContainer within sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\"" Apr 30 00:16:56.892201 containerd[1481]: time="2025-04-30T00:16:56.887880461Z" level=info msg="StartContainer for \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\"" Apr 30 00:16:56.929591 systemd[1]: Started cri-containerd-acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361.scope - libcontainer container acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361. Apr 30 00:16:56.976656 systemd[1]: cri-containerd-acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361.scope: Deactivated successfully. 
Apr 30 00:16:56.979644 containerd[1481]: time="2025-04-30T00:16:56.979509316Z" level=info msg="StartContainer for \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\" returns successfully" Apr 30 00:16:57.011596 containerd[1481]: time="2025-04-30T00:16:57.011528327Z" level=info msg="shim disconnected" id=acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361 namespace=k8s.io Apr 30 00:16:57.011968 containerd[1481]: time="2025-04-30T00:16:57.011946739Z" level=warning msg="cleaning up after shim disconnected" id=acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361 namespace=k8s.io Apr 30 00:16:57.012140 containerd[1481]: time="2025-04-30T00:16:57.012089312Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:16:57.059713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361-rootfs.mount: Deactivated successfully. Apr 30 00:16:57.856053 kubelet[2559]: E0430 00:16:57.855318 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:57.866764 containerd[1481]: time="2025-04-30T00:16:57.866673084Z" level=info msg="CreateContainer within sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:16:57.887807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461270282.mount: Deactivated successfully. 
Apr 30 00:16:57.889294 containerd[1481]: time="2025-04-30T00:16:57.888768552Z" level=info msg="CreateContainer within sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\"" Apr 30 00:16:57.890597 containerd[1481]: time="2025-04-30T00:16:57.890407174Z" level=info msg="StartContainer for \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\"" Apr 30 00:16:57.932483 systemd[1]: Started cri-containerd-a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b.scope - libcontainer container a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b. Apr 30 00:16:57.974115 containerd[1481]: time="2025-04-30T00:16:57.974048915Z" level=info msg="StartContainer for \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\" returns successfully" Apr 30 00:16:58.153280 kubelet[2559]: I0430 00:16:58.152952 2559 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 00:16:58.212087 systemd[1]: Created slice kubepods-burstable-pod7a5b6473_11d3_4e34_afc2_16750a19d5eb.slice - libcontainer container kubepods-burstable-pod7a5b6473_11d3_4e34_afc2_16750a19d5eb.slice. Apr 30 00:16:58.225012 systemd[1]: Created slice kubepods-burstable-pod5e533b5e_28c4_4be8_8898_bb4e1944dc97.slice - libcontainer container kubepods-burstable-pod5e533b5e_28c4_4be8_8898_bb4e1944dc97.slice. 
Apr 30 00:16:58.259361 kubelet[2559]: I0430 00:16:58.258371 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e533b5e-28c4-4be8-8898-bb4e1944dc97-config-volume\") pod \"coredns-668d6bf9bc-j6r4c\" (UID: \"5e533b5e-28c4-4be8-8898-bb4e1944dc97\") " pod="kube-system/coredns-668d6bf9bc-j6r4c" Apr 30 00:16:58.259361 kubelet[2559]: I0430 00:16:58.258451 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a5b6473-11d3-4e34-afc2-16750a19d5eb-config-volume\") pod \"coredns-668d6bf9bc-6656w\" (UID: \"7a5b6473-11d3-4e34-afc2-16750a19d5eb\") " pod="kube-system/coredns-668d6bf9bc-6656w" Apr 30 00:16:58.259361 kubelet[2559]: I0430 00:16:58.258524 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrrx\" (UniqueName: \"kubernetes.io/projected/7a5b6473-11d3-4e34-afc2-16750a19d5eb-kube-api-access-mwrrx\") pod \"coredns-668d6bf9bc-6656w\" (UID: \"7a5b6473-11d3-4e34-afc2-16750a19d5eb\") " pod="kube-system/coredns-668d6bf9bc-6656w" Apr 30 00:16:58.259361 kubelet[2559]: I0430 00:16:58.258583 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62rlm\" (UniqueName: \"kubernetes.io/projected/5e533b5e-28c4-4be8-8898-bb4e1944dc97-kube-api-access-62rlm\") pod \"coredns-668d6bf9bc-j6r4c\" (UID: \"5e533b5e-28c4-4be8-8898-bb4e1944dc97\") " pod="kube-system/coredns-668d6bf9bc-j6r4c" Apr 30 00:16:58.522295 kubelet[2559]: E0430 00:16:58.522163 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:58.524109 containerd[1481]: time="2025-04-30T00:16:58.523708345Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-6656w,Uid:7a5b6473-11d3-4e34-afc2-16750a19d5eb,Namespace:kube-system,Attempt:0,}" Apr 30 00:16:58.533360 kubelet[2559]: E0430 00:16:58.533291 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:58.535336 containerd[1481]: time="2025-04-30T00:16:58.535296840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j6r4c,Uid:5e533b5e-28c4-4be8-8898-bb4e1944dc97,Namespace:kube-system,Attempt:0,}" Apr 30 00:16:58.863446 kubelet[2559]: E0430 00:16:58.861419 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:16:58.882071 kubelet[2559]: I0430 00:16:58.881190 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wj2lt" podStartSLOduration=7.525678038 podStartE2EDuration="18.881124315s" podCreationTimestamp="2025-04-30 00:16:40 +0000 UTC" firstStartedPulling="2025-04-30 00:16:42.615730464 +0000 UTC m=+6.091921213" lastFinishedPulling="2025-04-30 00:16:53.971176743 +0000 UTC m=+17.447367490" observedRunningTime="2025-04-30 00:16:58.880657816 +0000 UTC m=+22.356848571" watchObservedRunningTime="2025-04-30 00:16:58.881124315 +0000 UTC m=+22.357315087" Apr 30 00:16:59.863372 kubelet[2559]: E0430 00:16:59.863251 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:00.331719 systemd-networkd[1369]: cilium_host: Link UP Apr 30 00:17:00.332977 systemd-networkd[1369]: cilium_net: Link UP Apr 30 00:17:00.334352 systemd-networkd[1369]: cilium_net: Gained carrier Apr 30 00:17:00.334562 systemd-networkd[1369]: cilium_host: Gained carrier 
Apr 30 00:17:00.371280 systemd-networkd[1369]: cilium_host: Gained IPv6LL Apr 30 00:17:00.487293 systemd-networkd[1369]: cilium_vxlan: Link UP Apr 30 00:17:00.487302 systemd-networkd[1369]: cilium_vxlan: Gained carrier Apr 30 00:17:00.631390 systemd-networkd[1369]: cilium_net: Gained IPv6LL Apr 30 00:17:00.866447 kubelet[2559]: E0430 00:17:00.866416 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:00.959167 kernel: NET: Registered PF_ALG protocol family Apr 30 00:17:01.882245 systemd-networkd[1369]: cilium_vxlan: Gained IPv6LL Apr 30 00:17:02.144835 systemd-networkd[1369]: lxc_health: Link UP Apr 30 00:17:02.155792 systemd-networkd[1369]: lxc_health: Gained carrier Apr 30 00:17:02.500879 kubelet[2559]: E0430 00:17:02.500744 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:02.639400 systemd-networkd[1369]: lxc5adc275bc00a: Link UP Apr 30 00:17:02.648253 kernel: eth0: renamed from tmpf59c1 Apr 30 00:17:02.656011 systemd-networkd[1369]: lxc5adc275bc00a: Gained carrier Apr 30 00:17:02.669012 systemd-networkd[1369]: lxcd989454479ab: Link UP Apr 30 00:17:02.677425 kernel: eth0: renamed from tmp4e754 Apr 30 00:17:02.686693 systemd-networkd[1369]: lxcd989454479ab: Gained carrier Apr 30 00:17:03.479466 systemd-networkd[1369]: lxc_health: Gained IPv6LL Apr 30 00:17:03.671765 systemd-networkd[1369]: lxc5adc275bc00a: Gained IPv6LL Apr 30 00:17:04.696098 systemd-networkd[1369]: lxcd989454479ab: Gained IPv6LL Apr 30 00:17:08.219273 containerd[1481]: time="2025-04-30T00:17:08.216415310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:17:08.220156 containerd[1481]: time="2025-04-30T00:17:08.219689286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:17:08.221390 containerd[1481]: time="2025-04-30T00:17:08.220346464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:17:08.221390 containerd[1481]: time="2025-04-30T00:17:08.220492988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:17:08.263664 containerd[1481]: time="2025-04-30T00:17:08.262025810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:17:08.263664 containerd[1481]: time="2025-04-30T00:17:08.262094712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:17:08.263664 containerd[1481]: time="2025-04-30T00:17:08.262109772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:17:08.263664 containerd[1481]: time="2025-04-30T00:17:08.263384447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:17:08.276416 systemd[1]: Started cri-containerd-f59c17cb7bcbd16bb07630e5eb4db69b955c804cab332b9f731c0e5c953acb41.scope - libcontainer container f59c17cb7bcbd16bb07630e5eb4db69b955c804cab332b9f731c0e5c953acb41. Apr 30 00:17:08.323436 systemd[1]: Started cri-containerd-4e754144a1ef174ee556881586f37f3ad9ab66efa18f06e75f504f1cf840c911.scope - libcontainer container 4e754144a1ef174ee556881586f37f3ad9ab66efa18f06e75f504f1cf840c911. 
Apr 30 00:17:08.400198 containerd[1481]: time="2025-04-30T00:17:08.399811809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j6r4c,Uid:5e533b5e-28c4-4be8-8898-bb4e1944dc97,Namespace:kube-system,Attempt:0,} returns sandbox id \"f59c17cb7bcbd16bb07630e5eb4db69b955c804cab332b9f731c0e5c953acb41\"" Apr 30 00:17:08.414158 kubelet[2559]: E0430 00:17:08.412341 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:08.421394 containerd[1481]: time="2025-04-30T00:17:08.421349741Z" level=info msg="CreateContainer within sandbox \"f59c17cb7bcbd16bb07630e5eb4db69b955c804cab332b9f731c0e5c953acb41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:17:08.423670 containerd[1481]: time="2025-04-30T00:17:08.423597500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6656w,Uid:7a5b6473-11d3-4e34-afc2-16750a19d5eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e754144a1ef174ee556881586f37f3ad9ab66efa18f06e75f504f1cf840c911\"" Apr 30 00:17:08.428369 kubelet[2559]: E0430 00:17:08.427710 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:08.453289 containerd[1481]: time="2025-04-30T00:17:08.450442467Z" level=info msg="CreateContainer within sandbox \"4e754144a1ef174ee556881586f37f3ad9ab66efa18f06e75f504f1cf840c911\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:17:08.473557 containerd[1481]: time="2025-04-30T00:17:08.472843419Z" level=info msg="CreateContainer within sandbox \"f59c17cb7bcbd16bb07630e5eb4db69b955c804cab332b9f731c0e5c953acb41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc69e4f368b61cbab99e3856e5eb5dc556b03c1477dc038be53affaa51c3c62d\"" Apr 30 00:17:08.484381 containerd[1481]: time="2025-04-30T00:17:08.484337542Z" level=info msg="StartContainer for \"fc69e4f368b61cbab99e3856e5eb5dc556b03c1477dc038be53affaa51c3c62d\"" Apr 30 00:17:08.492368 containerd[1481]: time="2025-04-30T00:17:08.492307774Z" level=info msg="CreateContainer within sandbox \"4e754144a1ef174ee556881586f37f3ad9ab66efa18f06e75f504f1cf840c911\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"81d1419bcc70c0202d8683db598b7fb8271d20a9064944b41bcc84fc98fda101\"" Apr 30 00:17:08.497183 containerd[1481]: time="2025-04-30T00:17:08.493299087Z" level=info msg="StartContainer for \"81d1419bcc70c0202d8683db598b7fb8271d20a9064944b41bcc84fc98fda101\"" Apr 30 00:17:08.538427 systemd[1]: Started cri-containerd-fc69e4f368b61cbab99e3856e5eb5dc556b03c1477dc038be53affaa51c3c62d.scope - libcontainer container fc69e4f368b61cbab99e3856e5eb5dc556b03c1477dc038be53affaa51c3c62d. Apr 30 00:17:08.546877 systemd[1]: Started cri-containerd-81d1419bcc70c0202d8683db598b7fb8271d20a9064944b41bcc84fc98fda101.scope - libcontainer container 81d1419bcc70c0202d8683db598b7fb8271d20a9064944b41bcc84fc98fda101.
Apr 30 00:17:08.596661 containerd[1481]: time="2025-04-30T00:17:08.595886345Z" level=info msg="StartContainer for \"81d1419bcc70c0202d8683db598b7fb8271d20a9064944b41bcc84fc98fda101\" returns successfully" Apr 30 00:17:08.601409 containerd[1481]: time="2025-04-30T00:17:08.601292287Z" level=info msg="StartContainer for \"fc69e4f368b61cbab99e3856e5eb5dc556b03c1477dc038be53affaa51c3c62d\" returns successfully" Apr 30 00:17:08.903773 kubelet[2559]: E0430 00:17:08.903301 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:08.909015 kubelet[2559]: E0430 00:17:08.908314 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:08.923226 kubelet[2559]: I0430 00:17:08.922866 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6656w" podStartSLOduration=27.922837697 podStartE2EDuration="27.922837697s" podCreationTimestamp="2025-04-30 00:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:17:08.921222737 +0000 UTC m=+32.397413493" watchObservedRunningTime="2025-04-30 00:17:08.922837697 +0000 UTC m=+32.399028659" Apr 30 00:17:09.228885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077757905.mount: Deactivated successfully. 
Apr 30 00:17:09.910243 kubelet[2559]: E0430 00:17:09.910193 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:09.910963 kubelet[2559]: E0430 00:17:09.910951 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:09.962362 kubelet[2559]: I0430 00:17:09.962308 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:17:09.964826 kubelet[2559]: E0430 00:17:09.964415 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:09.991446 kubelet[2559]: I0430 00:17:09.991007 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j6r4c" podStartSLOduration=28.990984891 podStartE2EDuration="28.990984891s" podCreationTimestamp="2025-04-30 00:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:17:08.965398969 +0000 UTC m=+32.441589727" watchObservedRunningTime="2025-04-30 00:17:09.990984891 +0000 UTC m=+33.467175698" Apr 30 00:17:10.912150 kubelet[2559]: E0430 00:17:10.912040 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:10.912150 kubelet[2559]: E0430 00:17:10.912069 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:10.912641 kubelet[2559]: E0430 00:17:10.912332 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:20.153557 systemd[1]: Started sshd@7-134.199.212.184:22-147.75.109.163:35706.service - OpenSSH per-connection server daemon (147.75.109.163:35706). Apr 30 00:17:20.257378 sshd[3946]: Accepted publickey for core from 147.75.109.163 port 35706 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:20.260158 sshd-session[3946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:20.266023 systemd-logind[1455]: New session 8 of user core. Apr 30 00:17:20.281434 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:17:20.894988 sshd[3948]: Connection closed by 147.75.109.163 port 35706 Apr 30 00:17:20.897396 sshd-session[3946]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:20.902794 systemd[1]: sshd@7-134.199.212.184:22-147.75.109.163:35706.service: Deactivated successfully. Apr 30 00:17:20.905509 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:17:20.906915 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:17:20.908501 systemd-logind[1455]: Removed session 8. Apr 30 00:17:25.919612 systemd[1]: Started sshd@8-134.199.212.184:22-147.75.109.163:35722.service - OpenSSH per-connection server daemon (147.75.109.163:35722). Apr 30 00:17:25.979685 sshd[3959]: Accepted publickey for core from 147.75.109.163 port 35722 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:25.980576 sshd-session[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:25.987244 systemd-logind[1455]: New session 9 of user core. Apr 30 00:17:25.994462 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 00:17:26.172620 sshd[3961]: Connection closed by 147.75.109.163 port 35722 Apr 30 00:17:26.173433 sshd-session[3959]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:26.178978 systemd[1]: sshd@8-134.199.212.184:22-147.75.109.163:35722.service: Deactivated successfully. Apr 30 00:17:26.181768 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:17:26.184080 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:17:26.185852 systemd-logind[1455]: Removed session 9. Apr 30 00:17:31.195579 systemd[1]: Started sshd@9-134.199.212.184:22-147.75.109.163:52808.service - OpenSSH per-connection server daemon (147.75.109.163:52808). Apr 30 00:17:31.245134 sshd[3973]: Accepted publickey for core from 147.75.109.163 port 52808 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:31.246308 sshd-session[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:31.253250 systemd-logind[1455]: New session 10 of user core. Apr 30 00:17:31.260505 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:17:31.420622 sshd[3975]: Connection closed by 147.75.109.163 port 52808 Apr 30 00:17:31.422436 sshd-session[3973]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:31.427279 systemd[1]: sshd@9-134.199.212.184:22-147.75.109.163:52808.service: Deactivated successfully. Apr 30 00:17:31.430648 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:17:31.431938 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:17:31.433573 systemd-logind[1455]: Removed session 10. Apr 30 00:17:36.444673 systemd[1]: Started sshd@10-134.199.212.184:22-147.75.109.163:52812.service - OpenSSH per-connection server daemon (147.75.109.163:52812). 
Apr 30 00:17:36.495661 sshd[3987]: Accepted publickey for core from 147.75.109.163 port 52812 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:36.497496 sshd-session[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:36.503226 systemd-logind[1455]: New session 11 of user core. Apr 30 00:17:36.507408 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:17:36.654316 sshd[3989]: Connection closed by 147.75.109.163 port 52812 Apr 30 00:17:36.654805 sshd-session[3987]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:36.668588 systemd[1]: sshd@10-134.199.212.184:22-147.75.109.163:52812.service: Deactivated successfully. Apr 30 00:17:36.672640 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:17:36.679058 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:17:36.685607 systemd[1]: Started sshd@11-134.199.212.184:22-147.75.109.163:52824.service - OpenSSH per-connection server daemon (147.75.109.163:52824). Apr 30 00:17:36.689432 systemd-logind[1455]: Removed session 11. Apr 30 00:17:36.765006 sshd[4001]: Accepted publickey for core from 147.75.109.163 port 52824 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:36.767396 sshd-session[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:36.773572 systemd-logind[1455]: New session 12 of user core. Apr 30 00:17:36.779418 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:17:36.988617 sshd[4006]: Connection closed by 147.75.109.163 port 52824 Apr 30 00:17:36.990429 sshd-session[4001]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:37.003227 systemd[1]: sshd@11-134.199.212.184:22-147.75.109.163:52824.service: Deactivated successfully. Apr 30 00:17:37.008307 systemd[1]: session-12.scope: Deactivated successfully. 
Apr 30 00:17:37.014704 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:17:37.025108 systemd[1]: Started sshd@12-134.199.212.184:22-147.75.109.163:60226.service - OpenSSH per-connection server daemon (147.75.109.163:60226). Apr 30 00:17:37.028467 systemd-logind[1455]: Removed session 12. Apr 30 00:17:37.104288 sshd[4015]: Accepted publickey for core from 147.75.109.163 port 60226 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:37.106275 sshd-session[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:37.112081 systemd-logind[1455]: New session 13 of user core. Apr 30 00:17:37.124419 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:17:37.287513 sshd[4017]: Connection closed by 147.75.109.163 port 60226 Apr 30 00:17:37.288249 sshd-session[4015]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:37.294014 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. Apr 30 00:17:37.294342 systemd[1]: sshd@12-134.199.212.184:22-147.75.109.163:60226.service: Deactivated successfully. Apr 30 00:17:37.297256 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:17:37.298318 systemd-logind[1455]: Removed session 13. Apr 30 00:17:42.308516 systemd[1]: Started sshd@13-134.199.212.184:22-147.75.109.163:60232.service - OpenSSH per-connection server daemon (147.75.109.163:60232). Apr 30 00:17:42.359223 sshd[4034]: Accepted publickey for core from 147.75.109.163 port 60232 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:42.360006 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:42.367519 systemd-logind[1455]: New session 14 of user core. Apr 30 00:17:42.380791 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 30 00:17:42.531290 sshd[4036]: Connection closed by 147.75.109.163 port 60232 Apr 30 00:17:42.532380 sshd-session[4034]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:42.540059 systemd[1]: sshd@13-134.199.212.184:22-147.75.109.163:60232.service: Deactivated successfully. Apr 30 00:17:42.543483 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:17:42.545220 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:17:42.546769 systemd-logind[1455]: Removed session 14. Apr 30 00:17:47.563062 systemd[1]: Started sshd@14-134.199.212.184:22-147.75.109.163:43324.service - OpenSSH per-connection server daemon (147.75.109.163:43324). Apr 30 00:17:47.612180 sshd[4047]: Accepted publickey for core from 147.75.109.163 port 43324 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:47.613299 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:47.619947 systemd-logind[1455]: New session 15 of user core. Apr 30 00:17:47.631444 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:17:47.785310 sshd[4049]: Connection closed by 147.75.109.163 port 43324 Apr 30 00:17:47.785879 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:47.792269 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:17:47.792555 systemd[1]: sshd@14-134.199.212.184:22-147.75.109.163:43324.service: Deactivated successfully. Apr 30 00:17:47.795488 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:17:47.799095 systemd-logind[1455]: Removed session 15. 
Apr 30 00:17:50.688024 kubelet[2559]: E0430 00:17:50.687485 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:52.688547 kubelet[2559]: E0430 00:17:52.687993 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:52.817833 systemd[1]: Started sshd@15-134.199.212.184:22-147.75.109.163:43328.service - OpenSSH per-connection server daemon (147.75.109.163:43328). Apr 30 00:17:52.871632 sshd[4060]: Accepted publickey for core from 147.75.109.163 port 43328 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:52.872392 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:52.878046 systemd-logind[1455]: New session 16 of user core. Apr 30 00:17:52.889489 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:17:53.031429 sshd[4062]: Connection closed by 147.75.109.163 port 43328 Apr 30 00:17:53.032159 sshd-session[4060]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:53.045266 systemd[1]: sshd@15-134.199.212.184:22-147.75.109.163:43328.service: Deactivated successfully. Apr 30 00:17:53.048989 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:17:53.053393 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:17:53.059627 systemd[1]: Started sshd@16-134.199.212.184:22-147.75.109.163:43338.service - OpenSSH per-connection server daemon (147.75.109.163:43338). Apr 30 00:17:53.062119 systemd-logind[1455]: Removed session 16. 
Apr 30 00:17:53.126195 sshd[4073]: Accepted publickey for core from 147.75.109.163 port 43338 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:53.128245 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:53.135015 systemd-logind[1455]: New session 17 of user core. Apr 30 00:17:53.145546 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:17:53.478356 sshd[4075]: Connection closed by 147.75.109.163 port 43338 Apr 30 00:17:53.478209 sshd-session[4073]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:53.488332 systemd[1]: sshd@16-134.199.212.184:22-147.75.109.163:43338.service: Deactivated successfully. Apr 30 00:17:53.491556 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:17:53.495485 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:17:53.500630 systemd[1]: Started sshd@17-134.199.212.184:22-147.75.109.163:43350.service - OpenSSH per-connection server daemon (147.75.109.163:43350). Apr 30 00:17:53.503251 systemd-logind[1455]: Removed session 17. Apr 30 00:17:53.586374 sshd[4084]: Accepted publickey for core from 147.75.109.163 port 43350 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:53.588586 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:53.594748 systemd-logind[1455]: New session 18 of user core. Apr 30 00:17:53.609620 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 30 00:17:54.688764 kubelet[2559]: E0430 00:17:54.688723 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:17:54.924248 sshd[4086]: Connection closed by 147.75.109.163 port 43350 Apr 30 00:17:54.925247 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:54.941533 systemd[1]: Started sshd@18-134.199.212.184:22-147.75.109.163:43356.service - OpenSSH per-connection server daemon (147.75.109.163:43356). Apr 30 00:17:54.944224 systemd[1]: sshd@17-134.199.212.184:22-147.75.109.163:43350.service: Deactivated successfully. Apr 30 00:17:54.948989 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:17:54.956793 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:17:54.962450 systemd-logind[1455]: Removed session 18. Apr 30 00:17:55.012831 sshd[4099]: Accepted publickey for core from 147.75.109.163 port 43356 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:55.015428 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:55.023592 systemd-logind[1455]: New session 19 of user core. Apr 30 00:17:55.031471 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:17:55.324196 sshd[4104]: Connection closed by 147.75.109.163 port 43356 Apr 30 00:17:55.325465 sshd-session[4099]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:55.339295 systemd[1]: sshd@18-134.199.212.184:22-147.75.109.163:43356.service: Deactivated successfully. Apr 30 00:17:55.345472 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:17:55.352153 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. 
Apr 30 00:17:55.359121 systemd[1]: Started sshd@19-134.199.212.184:22-147.75.109.163:43370.service - OpenSSH per-connection server daemon (147.75.109.163:43370). Apr 30 00:17:55.361047 systemd-logind[1455]: Removed session 19. Apr 30 00:17:55.429017 sshd[4112]: Accepted publickey for core from 147.75.109.163 port 43370 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:17:55.431296 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:17:55.441568 systemd-logind[1455]: New session 20 of user core. Apr 30 00:17:55.446380 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 00:17:55.605373 sshd[4114]: Connection closed by 147.75.109.163 port 43370 Apr 30 00:17:55.606202 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Apr 30 00:17:55.612037 systemd[1]: sshd@19-134.199.212.184:22-147.75.109.163:43370.service: Deactivated successfully. Apr 30 00:17:55.614752 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:17:55.615932 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:17:55.617398 systemd-logind[1455]: Removed session 20. Apr 30 00:18:00.619967 systemd[1]: Started sshd@20-134.199.212.184:22-147.75.109.163:43352.service - OpenSSH per-connection server daemon (147.75.109.163:43352). Apr 30 00:18:00.688072 sshd[4124]: Accepted publickey for core from 147.75.109.163 port 43352 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:18:00.691045 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:18:00.697238 systemd-logind[1455]: New session 21 of user core. Apr 30 00:18:00.702427 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 30 00:18:00.861322 sshd[4126]: Connection closed by 147.75.109.163 port 43352 Apr 30 00:18:00.862298 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Apr 30 00:18:00.866984 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit. Apr 30 00:18:00.868383 systemd[1]: sshd@20-134.199.212.184:22-147.75.109.163:43352.service: Deactivated successfully. Apr 30 00:18:00.872403 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 00:18:00.876766 systemd-logind[1455]: Removed session 21. Apr 30 00:18:05.886750 systemd[1]: Started sshd@21-134.199.212.184:22-147.75.109.163:43358.service - OpenSSH per-connection server daemon (147.75.109.163:43358). Apr 30 00:18:05.960199 sshd[4139]: Accepted publickey for core from 147.75.109.163 port 43358 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:18:05.962956 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:18:05.970681 systemd-logind[1455]: New session 22 of user core. Apr 30 00:18:05.976493 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 00:18:06.126395 sshd[4141]: Connection closed by 147.75.109.163 port 43358 Apr 30 00:18:06.127246 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Apr 30 00:18:06.131629 systemd[1]: sshd@21-134.199.212.184:22-147.75.109.163:43358.service: Deactivated successfully. Apr 30 00:18:06.133677 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 00:18:06.134823 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit. Apr 30 00:18:06.136079 systemd-logind[1455]: Removed session 22. Apr 30 00:18:11.154554 systemd[1]: Started sshd@22-134.199.212.184:22-147.75.109.163:42186.service - OpenSSH per-connection server daemon (147.75.109.163:42186). 
Apr 30 00:18:11.205631 sshd[4154]: Accepted publickey for core from 147.75.109.163 port 42186 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:18:11.206343 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:18:11.213265 systemd-logind[1455]: New session 23 of user core. Apr 30 00:18:11.217770 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 00:18:11.358222 sshd[4156]: Connection closed by 147.75.109.163 port 42186 Apr 30 00:18:11.359062 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Apr 30 00:18:11.363724 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit. Apr 30 00:18:11.364643 systemd[1]: sshd@22-134.199.212.184:22-147.75.109.163:42186.service: Deactivated successfully. Apr 30 00:18:11.367484 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 00:18:11.369257 systemd-logind[1455]: Removed session 23. Apr 30 00:18:11.689797 kubelet[2559]: E0430 00:18:11.688219 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:18:12.689163 kubelet[2559]: E0430 00:18:12.688641 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:18:16.377659 systemd[1]: Started sshd@23-134.199.212.184:22-147.75.109.163:42200.service - OpenSSH per-connection server daemon (147.75.109.163:42200). Apr 30 00:18:16.448282 sshd[4169]: Accepted publickey for core from 147.75.109.163 port 42200 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:18:16.449941 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:18:16.456228 systemd-logind[1455]: New session 24 of user core. 
Apr 30 00:18:16.463507 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 00:18:16.601999 sshd[4171]: Connection closed by 147.75.109.163 port 42200 Apr 30 00:18:16.602806 sshd-session[4169]: pam_unix(sshd:session): session closed for user core Apr 30 00:18:16.612902 systemd[1]: sshd@23-134.199.212.184:22-147.75.109.163:42200.service: Deactivated successfully. Apr 30 00:18:16.616453 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 00:18:16.620538 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit. Apr 30 00:18:16.626923 systemd[1]: Started sshd@24-134.199.212.184:22-147.75.109.163:42206.service - OpenSSH per-connection server daemon (147.75.109.163:42206). Apr 30 00:18:16.629260 systemd-logind[1455]: Removed session 24. Apr 30 00:18:16.675817 sshd[4182]: Accepted publickey for core from 147.75.109.163 port 42206 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:18:16.678023 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:18:16.684470 systemd-logind[1455]: New session 25 of user core. Apr 30 00:18:16.689314 kubelet[2559]: E0430 00:18:16.689270 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Apr 30 00:18:16.692579 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 30 00:18:18.234425 containerd[1481]: time="2025-04-30T00:18:18.234373292Z" level=info msg="StopContainer for \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\" with timeout 30 (s)" Apr 30 00:18:18.240392 containerd[1481]: time="2025-04-30T00:18:18.237151032Z" level=info msg="Stop container \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\" with signal terminated" Apr 30 00:18:18.253920 containerd[1481]: time="2025-04-30T00:18:18.253794093Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:18:18.263353 containerd[1481]: time="2025-04-30T00:18:18.263111386Z" level=info msg="StopContainer for \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\" with timeout 2 (s)" Apr 30 00:18:18.263613 systemd[1]: cri-containerd-a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb.scope: Deactivated successfully. Apr 30 00:18:18.265942 containerd[1481]: time="2025-04-30T00:18:18.265479423Z" level=info msg="Stop container \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\" with signal terminated" Apr 30 00:18:18.277440 systemd-networkd[1369]: lxc_health: Link DOWN Apr 30 00:18:18.277452 systemd-networkd[1369]: lxc_health: Lost carrier Apr 30 00:18:18.306017 systemd[1]: cri-containerd-a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b.scope: Deactivated successfully. Apr 30 00:18:18.306452 systemd[1]: cri-containerd-a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b.scope: Consumed 9.485s CPU time. Apr 30 00:18:18.325694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb-rootfs.mount: Deactivated successfully. 
Apr 30 00:18:18.332585 containerd[1481]: time="2025-04-30T00:18:18.332485714Z" level=info msg="shim disconnected" id=a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb namespace=k8s.io Apr 30 00:18:18.333362 containerd[1481]: time="2025-04-30T00:18:18.333279943Z" level=warning msg="cleaning up after shim disconnected" id=a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb namespace=k8s.io Apr 30 00:18:18.333362 containerd[1481]: time="2025-04-30T00:18:18.333304744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:18:18.359443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b-rootfs.mount: Deactivated successfully. Apr 30 00:18:18.362496 containerd[1481]: time="2025-04-30T00:18:18.362391293Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:18:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 00:18:18.365120 containerd[1481]: time="2025-04-30T00:18:18.365025349Z" level=info msg="shim disconnected" id=a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b namespace=k8s.io Apr 30 00:18:18.365120 containerd[1481]: time="2025-04-30T00:18:18.365088895Z" level=warning msg="cleaning up after shim disconnected" id=a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b namespace=k8s.io Apr 30 00:18:18.365120 containerd[1481]: time="2025-04-30T00:18:18.365097850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:18:18.366106 containerd[1481]: time="2025-04-30T00:18:18.365945125Z" level=info msg="StopContainer for \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\" returns successfully" Apr 30 00:18:18.366879 containerd[1481]: time="2025-04-30T00:18:18.366728435Z" level=info msg="StopPodSandbox for \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\"" Apr 30 00:18:18.368571 containerd[1481]: time="2025-04-30T00:18:18.368500423Z" level=info msg="Container to stop \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:18:18.375377 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e-shm.mount: Deactivated successfully. Apr 30 00:18:18.393622 systemd[1]: cri-containerd-d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e.scope: Deactivated successfully. Apr 30 00:18:18.395796 containerd[1481]: time="2025-04-30T00:18:18.395727886Z" level=info msg="StopContainer for \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\" returns successfully" Apr 30 00:18:18.396432 containerd[1481]: time="2025-04-30T00:18:18.396404683Z" level=info msg="StopPodSandbox for \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\"" Apr 30 00:18:18.396869 containerd[1481]: time="2025-04-30T00:18:18.396781931Z" level=info msg="Container to stop \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:18:18.397000 containerd[1481]: time="2025-04-30T00:18:18.396970202Z" level=info msg="Container to stop \"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:18:18.397000 containerd[1481]: time="2025-04-30T00:18:18.396986617Z" level=info msg="Container to stop \"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:18:18.397632 containerd[1481]: time="2025-04-30T00:18:18.397607368Z" level=info msg="Container to stop \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:18:18.397823 containerd[1481]: time="2025-04-30T00:18:18.397695762Z" level=info msg="Container to stop \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:18:18.400892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563-shm.mount: Deactivated successfully. Apr 30 00:18:18.412772 systemd[1]: cri-containerd-965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563.scope: Deactivated successfully. Apr 30 00:18:18.443204 containerd[1481]: time="2025-04-30T00:18:18.442939419Z" level=info msg="shim disconnected" id=d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e namespace=k8s.io Apr 30 00:18:18.443204 containerd[1481]: time="2025-04-30T00:18:18.442996795Z" level=warning msg="cleaning up after shim disconnected" id=d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e namespace=k8s.io Apr 30 00:18:18.443204 containerd[1481]: time="2025-04-30T00:18:18.443025521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:18:18.464789 containerd[1481]: time="2025-04-30T00:18:18.464580029Z" level=info msg="TearDown network for sandbox \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\" successfully" Apr 30 00:18:18.464789 containerd[1481]: time="2025-04-30T00:18:18.464622916Z" level=info msg="StopPodSandbox for \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\" returns successfully" Apr 30 00:18:18.466011 containerd[1481]: time="2025-04-30T00:18:18.465808067Z" level=info msg="shim disconnected" id=965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563 namespace=k8s.io Apr 30 00:18:18.466011 containerd[1481]: time="2025-04-30T00:18:18.465858504Z" level=warning msg="cleaning up after shim disconnected" id=965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563 namespace=k8s.io Apr 30 00:18:18.466011 containerd[1481]: time="2025-04-30T00:18:18.465865904Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:18:18.488780 containerd[1481]: time="2025-04-30T00:18:18.488619490Z" level=info msg="TearDown network for sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" successfully" Apr 30 00:18:18.488780 containerd[1481]: time="2025-04-30T00:18:18.488651558Z" level=info msg="StopPodSandbox for \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" returns successfully" Apr 30 00:18:18.527921 kubelet[2559]: I0430 00:18:18.525734 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6khj6\" (UniqueName: \"kubernetes.io/projected/823adc51-02c6-4efc-89cd-d3f19977b86c-kube-api-access-6khj6\") pod \"823adc51-02c6-4efc-89cd-d3f19977b86c\" (UID: \"823adc51-02c6-4efc-89cd-d3f19977b86c\") " Apr 30 00:18:18.527921 kubelet[2559]: I0430 00:18:18.525798 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/823adc51-02c6-4efc-89cd-d3f19977b86c-cilium-config-path\") pod \"823adc51-02c6-4efc-89cd-d3f19977b86c\" (UID: \"823adc51-02c6-4efc-89cd-d3f19977b86c\") " Apr 30 00:18:18.535652 kubelet[2559]: I0430 00:18:18.535589 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/823adc51-02c6-4efc-89cd-d3f19977b86c-kube-api-access-6khj6" (OuterVolumeSpecName: "kube-api-access-6khj6") pod "823adc51-02c6-4efc-89cd-d3f19977b86c" (UID: "823adc51-02c6-4efc-89cd-d3f19977b86c"). InnerVolumeSpecName "kube-api-access-6khj6".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 00:18:18.540325 kubelet[2559]: I0430 00:18:18.540042 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/823adc51-02c6-4efc-89cd-d3f19977b86c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "823adc51-02c6-4efc-89cd-d3f19977b86c" (UID: "823adc51-02c6-4efc-89cd-d3f19977b86c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 00:18:18.627157 kubelet[2559]: I0430 00:18:18.627060 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-cgroup\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627157 kubelet[2559]: I0430 00:18:18.627148 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l98s7\" (UniqueName: \"kubernetes.io/projected/9abf5968-316e-4629-8d2d-db7b79bc7cb5-kube-api-access-l98s7\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627408 kubelet[2559]: I0430 00:18:18.627172 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9abf5968-316e-4629-8d2d-db7b79bc7cb5-clustermesh-secrets\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627408 kubelet[2559]: I0430 00:18:18.627195 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-lib-modules\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627408 kubelet[2559]: I0430 00:18:18.627212 2559 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-host-proc-sys-net\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627408 kubelet[2559]: I0430 00:18:18.627229 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cni-path\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627408 kubelet[2559]: I0430 00:18:18.627247 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-host-proc-sys-kernel\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627408 kubelet[2559]: I0430 00:18:18.627263 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-bpf-maps\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627683 kubelet[2559]: I0430 00:18:18.627282 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-hostproc\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627683 kubelet[2559]: I0430 00:18:18.627299 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9abf5968-316e-4629-8d2d-db7b79bc7cb5-hubble-tls\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: 
\"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627683 kubelet[2559]: I0430 00:18:18.627315 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-xtables-lock\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627683 kubelet[2559]: I0430 00:18:18.627335 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-config-path\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627683 kubelet[2559]: I0430 00:18:18.627352 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-etc-cni-netd\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627683 kubelet[2559]: I0430 00:18:18.627378 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-run\") pod \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\" (UID: \"9abf5968-316e-4629-8d2d-db7b79bc7cb5\") " Apr 30 00:18:18.627970 kubelet[2559]: I0430 00:18:18.627450 2559 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6khj6\" (UniqueName: \"kubernetes.io/projected/823adc51-02c6-4efc-89cd-d3f19977b86c-kube-api-access-6khj6\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.627970 kubelet[2559]: I0430 00:18:18.627464 2559 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/823adc51-02c6-4efc-89cd-d3f19977b86c-cilium-config-path\") on node 
\"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.627970 kubelet[2559]: I0430 00:18:18.627546 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:18:18.627970 kubelet[2559]: I0430 00:18:18.627599 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:18:18.629746 kubelet[2559]: I0430 00:18:18.629258 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:18:18.632419 kubelet[2559]: I0430 00:18:18.632363 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:18:18.632643 kubelet[2559]: I0430 00:18:18.632625 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:18:18.632744 kubelet[2559]: I0430 00:18:18.632730 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cni-path" (OuterVolumeSpecName: "cni-path") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:18:18.632841 kubelet[2559]: I0430 00:18:18.632827 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:18:18.633144 kubelet[2559]: I0430 00:18:18.632939 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:18:18.633144 kubelet[2559]: I0430 00:18:18.632975 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-hostproc" (OuterVolumeSpecName: "hostproc") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:18:18.637692 kubelet[2559]: I0430 00:18:18.636925 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:18:18.641216 kubelet[2559]: I0430 00:18:18.640597 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abf5968-316e-4629-8d2d-db7b79bc7cb5-kube-api-access-l98s7" (OuterVolumeSpecName: "kube-api-access-l98s7") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "kube-api-access-l98s7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 00:18:18.641452 kubelet[2559]: I0430 00:18:18.641422 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9abf5968-316e-4629-8d2d-db7b79bc7cb5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 30 00:18:18.642732 kubelet[2559]: I0430 00:18:18.642687 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 00:18:18.643081 kubelet[2559]: I0430 00:18:18.643006 2559 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abf5968-316e-4629-8d2d-db7b79bc7cb5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9abf5968-316e-4629-8d2d-db7b79bc7cb5" (UID: "9abf5968-316e-4629-8d2d-db7b79bc7cb5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 00:18:18.700889 systemd[1]: Removed slice kubepods-burstable-pod9abf5968_316e_4629_8d2d_db7b79bc7cb5.slice - libcontainer container kubepods-burstable-pod9abf5968_316e_4629_8d2d_db7b79bc7cb5.slice. Apr 30 00:18:18.701609 systemd[1]: kubepods-burstable-pod9abf5968_316e_4629_8d2d_db7b79bc7cb5.slice: Consumed 9.591s CPU time. Apr 30 00:18:18.704028 systemd[1]: Removed slice kubepods-besteffort-pod823adc51_02c6_4efc_89cd_d3f19977b86c.slice - libcontainer container kubepods-besteffort-pod823adc51_02c6_4efc_89cd_d3f19977b86c.slice. 
Apr 30 00:18:18.728034 kubelet[2559]: I0430 00:18:18.727956 2559 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-cgroup\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728034 kubelet[2559]: I0430 00:18:18.728005 2559 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l98s7\" (UniqueName: \"kubernetes.io/projected/9abf5968-316e-4629-8d2d-db7b79bc7cb5-kube-api-access-l98s7\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728034 kubelet[2559]: I0430 00:18:18.728021 2559 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9abf5968-316e-4629-8d2d-db7b79bc7cb5-clustermesh-secrets\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728034 kubelet[2559]: I0430 00:18:18.728053 2559 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-lib-modules\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728405 kubelet[2559]: I0430 00:18:18.728068 2559 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-host-proc-sys-net\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728405 kubelet[2559]: I0430 00:18:18.728083 2559 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-bpf-maps\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728405 kubelet[2559]: I0430 00:18:18.728100 2559 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cni-path\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" 
Apr 30 00:18:18.728405 kubelet[2559]: I0430 00:18:18.728113 2559 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-host-proc-sys-kernel\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728405 kubelet[2559]: I0430 00:18:18.728137 2559 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-xtables-lock\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728405 kubelet[2559]: I0430 00:18:18.728150 2559 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-hostproc\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728405 kubelet[2559]: I0430 00:18:18.728163 2559 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9abf5968-316e-4629-8d2d-db7b79bc7cb5-hubble-tls\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728405 kubelet[2559]: I0430 00:18:18.728179 2559 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-config-path\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728721 kubelet[2559]: I0430 00:18:18.728192 2559 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-etc-cni-netd\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 00:18:18.728721 kubelet[2559]: I0430 00:18:18.728206 2559 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9abf5968-316e-4629-8d2d-db7b79bc7cb5-cilium-run\") on node \"ci-4152.2.3-4-a907cca219\" DevicePath \"\"" Apr 30 
00:18:19.078707 kubelet[2559]: I0430 00:18:19.078593 2559 scope.go:117] "RemoveContainer" containerID="a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b" Apr 30 00:18:19.094392 containerd[1481]: time="2025-04-30T00:18:19.093856141Z" level=info msg="RemoveContainer for \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\"" Apr 30 00:18:19.102189 containerd[1481]: time="2025-04-30T00:18:19.101998956Z" level=info msg="RemoveContainer for \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\" returns successfully" Apr 30 00:18:19.103221 kubelet[2559]: I0430 00:18:19.103174 2559 scope.go:117] "RemoveContainer" containerID="acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361" Apr 30 00:18:19.105470 containerd[1481]: time="2025-04-30T00:18:19.105438021Z" level=info msg="RemoveContainer for \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\"" Apr 30 00:18:19.109949 containerd[1481]: time="2025-04-30T00:18:19.109263832Z" level=info msg="RemoveContainer for \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\" returns successfully" Apr 30 00:18:19.110424 kubelet[2559]: I0430 00:18:19.110402 2559 scope.go:117] "RemoveContainer" containerID="542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225" Apr 30 00:18:19.114423 containerd[1481]: time="2025-04-30T00:18:19.113512592Z" level=info msg="RemoveContainer for \"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\"" Apr 30 00:18:19.125200 containerd[1481]: time="2025-04-30T00:18:19.124193904Z" level=info msg="RemoveContainer for \"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\" returns successfully" Apr 30 00:18:19.127158 kubelet[2559]: I0430 00:18:19.126977 2559 scope.go:117] "RemoveContainer" containerID="5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef" Apr 30 00:18:19.130665 containerd[1481]: time="2025-04-30T00:18:19.129960285Z" level=info msg="RemoveContainer for 
\"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\"" Apr 30 00:18:19.136734 containerd[1481]: time="2025-04-30T00:18:19.136665949Z" level=info msg="RemoveContainer for \"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\" returns successfully" Apr 30 00:18:19.136997 kubelet[2559]: I0430 00:18:19.136972 2559 scope.go:117] "RemoveContainer" containerID="ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd" Apr 30 00:18:19.138551 containerd[1481]: time="2025-04-30T00:18:19.138476805Z" level=info msg="RemoveContainer for \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\"" Apr 30 00:18:19.142489 containerd[1481]: time="2025-04-30T00:18:19.142333489Z" level=info msg="RemoveContainer for \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\" returns successfully" Apr 30 00:18:19.142738 kubelet[2559]: I0430 00:18:19.142619 2559 scope.go:117] "RemoveContainer" containerID="a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b" Apr 30 00:18:19.142990 containerd[1481]: time="2025-04-30T00:18:19.142941252Z" level=error msg="ContainerStatus for \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\": not found" Apr 30 00:18:19.143476 kubelet[2559]: E0430 00:18:19.143328 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\": not found" containerID="a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b" Apr 30 00:18:19.155192 kubelet[2559]: I0430 00:18:19.143367 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b"} err="failed to get 
container status \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7d9642cb9db149afb8b1e945cb99d4b744d74ef09ef65de468d0a222ce8bd4b\": not found" Apr 30 00:18:19.155192 kubelet[2559]: I0430 00:18:19.154646 2559 scope.go:117] "RemoveContainer" containerID="acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361" Apr 30 00:18:19.155406 containerd[1481]: time="2025-04-30T00:18:19.155001040Z" level=error msg="ContainerStatus for \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\": not found" Apr 30 00:18:19.156080 kubelet[2559]: E0430 00:18:19.155612 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\": not found" containerID="acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361" Apr 30 00:18:19.156080 kubelet[2559]: I0430 00:18:19.155662 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361"} err="failed to get container status \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\": rpc error: code = NotFound desc = an error occurred when try to find container \"acb290b144687769ba5f6a63fa106d1b8264b73e9e400b7cadba194ae32fd361\": not found" Apr 30 00:18:19.156080 kubelet[2559]: I0430 00:18:19.155692 2559 scope.go:117] "RemoveContainer" containerID="542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225" Apr 30 00:18:19.156327 containerd[1481]: time="2025-04-30T00:18:19.155994854Z" level=error msg="ContainerStatus for 
\"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\": not found" Apr 30 00:18:19.156383 kubelet[2559]: E0430 00:18:19.156177 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\": not found" containerID="542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225" Apr 30 00:18:19.156383 kubelet[2559]: I0430 00:18:19.156208 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225"} err="failed to get container status \"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\": rpc error: code = NotFound desc = an error occurred when try to find container \"542cdfc47547e84514e40f2336a06e95888f3eb336230d0b3b325811fe9ec225\": not found" Apr 30 00:18:19.156383 kubelet[2559]: I0430 00:18:19.156231 2559 scope.go:117] "RemoveContainer" containerID="5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef" Apr 30 00:18:19.157286 containerd[1481]: time="2025-04-30T00:18:19.156759928Z" level=error msg="ContainerStatus for \"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\": not found" Apr 30 00:18:19.157286 containerd[1481]: time="2025-04-30T00:18:19.157189570Z" level=error msg="ContainerStatus for \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\": not 
found" Apr 30 00:18:19.157497 kubelet[2559]: E0430 00:18:19.156930 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\": not found" containerID="5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef" Apr 30 00:18:19.157497 kubelet[2559]: I0430 00:18:19.156952 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef"} err="failed to get container status \"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a148252bc5c3760552d3633266ca762fd4606b3b8afbf88d1941b25eb7062ef\": not found" Apr 30 00:18:19.157497 kubelet[2559]: I0430 00:18:19.156969 2559 scope.go:117] "RemoveContainer" containerID="ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd" Apr 30 00:18:19.157779 kubelet[2559]: E0430 00:18:19.157424 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\": not found" containerID="ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd" Apr 30 00:18:19.157779 kubelet[2559]: I0430 00:18:19.157678 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd"} err="failed to get container status \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba10ce0dfdcc6a621d5aaa58dfb3dc115e674fb321c251a84202b5401f5461bd\": not found" Apr 30 00:18:19.157779 kubelet[2559]: I0430 00:18:19.157694 2559 scope.go:117] 
"RemoveContainer" containerID="a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb" Apr 30 00:18:19.158844 containerd[1481]: time="2025-04-30T00:18:19.158816877Z" level=info msg="RemoveContainer for \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\"" Apr 30 00:18:19.161687 containerd[1481]: time="2025-04-30T00:18:19.161551984Z" level=info msg="RemoveContainer for \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\" returns successfully" Apr 30 00:18:19.161872 kubelet[2559]: I0430 00:18:19.161843 2559 scope.go:117] "RemoveContainer" containerID="a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb" Apr 30 00:18:19.162181 containerd[1481]: time="2025-04-30T00:18:19.162118475Z" level=error msg="ContainerStatus for \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\": not found" Apr 30 00:18:19.162400 kubelet[2559]: E0430 00:18:19.162370 2559 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\": not found" containerID="a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb" Apr 30 00:18:19.162485 kubelet[2559]: I0430 00:18:19.162401 2559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb"} err="failed to get container status \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0f4b89fc5da9afde9d0f6114d92a92976aa026fe7b1fec17129d02487e60dcb\": not found" Apr 30 00:18:19.228549 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563-rootfs.mount: Deactivated successfully. Apr 30 00:18:19.228950 systemd[1]: var-lib-kubelet-pods-9abf5968\x2d316e\x2d4629\x2d8d2d\x2ddb7b79bc7cb5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 00:18:19.229027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e-rootfs.mount: Deactivated successfully. Apr 30 00:18:19.229085 systemd[1]: var-lib-kubelet-pods-9abf5968\x2d316e\x2d4629\x2d8d2d\x2ddb7b79bc7cb5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 00:18:19.229175 systemd[1]: var-lib-kubelet-pods-823adc51\x2d02c6\x2d4efc\x2d89cd\x2dd3f19977b86c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6khj6.mount: Deactivated successfully. Apr 30 00:18:19.229256 systemd[1]: var-lib-kubelet-pods-9abf5968\x2d316e\x2d4629\x2d8d2d\x2ddb7b79bc7cb5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl98s7.mount: Deactivated successfully. Apr 30 00:18:20.146464 sshd[4184]: Connection closed by 147.75.109.163 port 42206 Apr 30 00:18:20.147659 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Apr 30 00:18:20.164311 systemd[1]: sshd@24-134.199.212.184:22-147.75.109.163:42206.service: Deactivated successfully. Apr 30 00:18:20.167546 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 00:18:20.170183 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit. Apr 30 00:18:20.175715 systemd[1]: Started sshd@25-134.199.212.184:22-147.75.109.163:34474.service - OpenSSH per-connection server daemon (147.75.109.163:34474). Apr 30 00:18:20.178287 systemd-logind[1455]: Removed session 25. 
Apr 30 00:18:20.273748 sshd[4344]: Accepted publickey for core from 147.75.109.163 port 34474 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:18:20.275805 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:18:20.282164 systemd-logind[1455]: New session 26 of user core.
Apr 30 00:18:20.290549 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 00:18:20.690666 kubelet[2559]: I0430 00:18:20.690619 2559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="823adc51-02c6-4efc-89cd-d3f19977b86c" path="/var/lib/kubelet/pods/823adc51-02c6-4efc-89cd-d3f19977b86c/volumes"
Apr 30 00:18:20.692382 kubelet[2559]: I0430 00:18:20.691787 2559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9abf5968-316e-4629-8d2d-db7b79bc7cb5" path="/var/lib/kubelet/pods/9abf5968-316e-4629-8d2d-db7b79bc7cb5/volumes"
Apr 30 00:18:21.072416 sshd[4346]: Connection closed by 147.75.109.163 port 34474
Apr 30 00:18:21.072870 sshd-session[4344]: pam_unix(sshd:session): session closed for user core
Apr 30 00:18:21.085512 systemd[1]: sshd@25-134.199.212.184:22-147.75.109.163:34474.service: Deactivated successfully.
Apr 30 00:18:21.088297 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 00:18:21.090850 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit.
Apr 30 00:18:21.100535 systemd[1]: Started sshd@26-134.199.212.184:22-147.75.109.163:34484.service - OpenSSH per-connection server daemon (147.75.109.163:34484).
Apr 30 00:18:21.101993 systemd-logind[1455]: Removed session 26.
Apr 30 00:18:21.136545 kubelet[2559]: I0430 00:18:21.133768 2559 memory_manager.go:355] "RemoveStaleState removing state" podUID="9abf5968-316e-4629-8d2d-db7b79bc7cb5" containerName="cilium-agent"
Apr 30 00:18:21.136545 kubelet[2559]: I0430 00:18:21.133809 2559 memory_manager.go:355] "RemoveStaleState removing state" podUID="823adc51-02c6-4efc-89cd-d3f19977b86c" containerName="cilium-operator"
Apr 30 00:18:21.154348 systemd[1]: Created slice kubepods-burstable-poda01e5fed_f563_4e12_ba5d_17f35330a21f.slice - libcontainer container kubepods-burstable-poda01e5fed_f563_4e12_ba5d_17f35330a21f.slice.
Apr 30 00:18:21.215779 sshd[4356]: Accepted publickey for core from 147.75.109.163 port 34484 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:18:21.218247 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:18:21.225257 systemd-logind[1455]: New session 27 of user core.
Apr 30 00:18:21.232429 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 00:18:21.244986 kubelet[2559]: I0430 00:18:21.243679 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a01e5fed-f563-4e12-ba5d-17f35330a21f-xtables-lock\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.244986 kubelet[2559]: I0430 00:18:21.243726 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a01e5fed-f563-4e12-ba5d-17f35330a21f-host-proc-sys-kernel\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.244986 kubelet[2559]: I0430 00:18:21.243773 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a01e5fed-f563-4e12-ba5d-17f35330a21f-bpf-maps\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.244986 kubelet[2559]: I0430 00:18:21.243794 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a01e5fed-f563-4e12-ba5d-17f35330a21f-cilium-config-path\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.244986 kubelet[2559]: I0430 00:18:21.243809 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a01e5fed-f563-4e12-ba5d-17f35330a21f-host-proc-sys-net\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.245352 kubelet[2559]: I0430 00:18:21.243826 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52fx4\" (UniqueName: \"kubernetes.io/projected/a01e5fed-f563-4e12-ba5d-17f35330a21f-kube-api-access-52fx4\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.245352 kubelet[2559]: I0430 00:18:21.243847 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a01e5fed-f563-4e12-ba5d-17f35330a21f-cni-path\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.245352 kubelet[2559]: I0430 00:18:21.243864 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a01e5fed-f563-4e12-ba5d-17f35330a21f-cilium-ipsec-secrets\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.245352 kubelet[2559]: I0430 00:18:21.243880 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a01e5fed-f563-4e12-ba5d-17f35330a21f-hostproc\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.245352 kubelet[2559]: I0430 00:18:21.243895 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a01e5fed-f563-4e12-ba5d-17f35330a21f-hubble-tls\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.245352 kubelet[2559]: I0430 00:18:21.243916 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a01e5fed-f563-4e12-ba5d-17f35330a21f-etc-cni-netd\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.245585 kubelet[2559]: I0430 00:18:21.243933 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a01e5fed-f563-4e12-ba5d-17f35330a21f-clustermesh-secrets\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.245585 kubelet[2559]: I0430 00:18:21.243947 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a01e5fed-f563-4e12-ba5d-17f35330a21f-cilium-cgroup\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.245585 kubelet[2559]: I0430 00:18:21.243964 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a01e5fed-f563-4e12-ba5d-17f35330a21f-lib-modules\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.245585 kubelet[2559]: I0430 00:18:21.243982 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a01e5fed-f563-4e12-ba5d-17f35330a21f-cilium-run\") pod \"cilium-vvjwc\" (UID: \"a01e5fed-f563-4e12-ba5d-17f35330a21f\") " pod="kube-system/cilium-vvjwc"
Apr 30 00:18:21.298890 sshd[4358]: Connection closed by 147.75.109.163 port 34484
Apr 30 00:18:21.298734 sshd-session[4356]: pam_unix(sshd:session): session closed for user core
Apr 30 00:18:21.315392 systemd[1]: sshd@26-134.199.212.184:22-147.75.109.163:34484.service: Deactivated successfully.
Apr 30 00:18:21.319363 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 00:18:21.321757 systemd-logind[1455]: Session 27 logged out. Waiting for processes to exit.
Apr 30 00:18:21.327559 systemd[1]: Started sshd@27-134.199.212.184:22-147.75.109.163:34486.service - OpenSSH per-connection server daemon (147.75.109.163:34486).
Apr 30 00:18:21.331672 systemd-logind[1455]: Removed session 27.
Apr 30 00:18:21.402844 sshd[4364]: Accepted publickey for core from 147.75.109.163 port 34486 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:18:21.405212 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:18:21.412275 systemd-logind[1455]: New session 28 of user core.
Apr 30 00:18:21.418517 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 00:18:21.475174 kubelet[2559]: E0430 00:18:21.474471 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:21.475319 containerd[1481]: time="2025-04-30T00:18:21.475107315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvjwc,Uid:a01e5fed-f563-4e12-ba5d-17f35330a21f,Namespace:kube-system,Attempt:0,}"
Apr 30 00:18:21.520162 containerd[1481]: time="2025-04-30T00:18:21.519153824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:18:21.520162 containerd[1481]: time="2025-04-30T00:18:21.519334300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:18:21.520162 containerd[1481]: time="2025-04-30T00:18:21.519396368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:18:21.520162 containerd[1481]: time="2025-04-30T00:18:21.519570626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:18:21.546795 systemd[1]: Started cri-containerd-0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766.scope - libcontainer container 0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766.
Apr 30 00:18:21.585574 containerd[1481]: time="2025-04-30T00:18:21.585213306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvjwc,Uid:a01e5fed-f563-4e12-ba5d-17f35330a21f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\""
Apr 30 00:18:21.589338 kubelet[2559]: E0430 00:18:21.588650 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:21.594595 containerd[1481]: time="2025-04-30T00:18:21.591798512Z" level=info msg="CreateContainer within sandbox \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 00:18:21.611150 containerd[1481]: time="2025-04-30T00:18:21.610659744Z" level=info msg="CreateContainer within sandbox \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1e266bc69464e61ff06b27691b3dc820a33a40df72edb22a6ce59fe7f0070cc5\""
Apr 30 00:18:21.612487 containerd[1481]: time="2025-04-30T00:18:21.611625678Z" level=info msg="StartContainer for \"1e266bc69464e61ff06b27691b3dc820a33a40df72edb22a6ce59fe7f0070cc5\""
Apr 30 00:18:21.666456 systemd[1]: Started cri-containerd-1e266bc69464e61ff06b27691b3dc820a33a40df72edb22a6ce59fe7f0070cc5.scope - libcontainer container 1e266bc69464e61ff06b27691b3dc820a33a40df72edb22a6ce59fe7f0070cc5.
Apr 30 00:18:21.705297 containerd[1481]: time="2025-04-30T00:18:21.704011966Z" level=info msg="StartContainer for \"1e266bc69464e61ff06b27691b3dc820a33a40df72edb22a6ce59fe7f0070cc5\" returns successfully"
Apr 30 00:18:21.721908 systemd[1]: cri-containerd-1e266bc69464e61ff06b27691b3dc820a33a40df72edb22a6ce59fe7f0070cc5.scope: Deactivated successfully.
Apr 30 00:18:21.768538 containerd[1481]: time="2025-04-30T00:18:21.768396257Z" level=info msg="shim disconnected" id=1e266bc69464e61ff06b27691b3dc820a33a40df72edb22a6ce59fe7f0070cc5 namespace=k8s.io
Apr 30 00:18:21.768538 containerd[1481]: time="2025-04-30T00:18:21.768474898Z" level=warning msg="cleaning up after shim disconnected" id=1e266bc69464e61ff06b27691b3dc820a33a40df72edb22a6ce59fe7f0070cc5 namespace=k8s.io
Apr 30 00:18:21.768538 containerd[1481]: time="2025-04-30T00:18:21.768484274Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:18:21.786577 containerd[1481]: time="2025-04-30T00:18:21.786514539Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:18:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 00:18:21.825462 kubelet[2559]: E0430 00:18:21.825398 2559 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 00:18:22.103915 kubelet[2559]: E0430 00:18:22.103673 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:22.106644 containerd[1481]: time="2025-04-30T00:18:22.106607039Z" level=info msg="CreateContainer within sandbox \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 00:18:22.127372 containerd[1481]: time="2025-04-30T00:18:22.127226330Z" level=info msg="CreateContainer within sandbox \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09d684cd483f93913a7bb56687bb98266ee0032b473869a26067a94994e3a17d\""
Apr 30 00:18:22.136168 containerd[1481]: time="2025-04-30T00:18:22.135362346Z" level=info msg="StartContainer for \"09d684cd483f93913a7bb56687bb98266ee0032b473869a26067a94994e3a17d\""
Apr 30 00:18:22.171377 systemd[1]: Started cri-containerd-09d684cd483f93913a7bb56687bb98266ee0032b473869a26067a94994e3a17d.scope - libcontainer container 09d684cd483f93913a7bb56687bb98266ee0032b473869a26067a94994e3a17d.
Apr 30 00:18:22.203832 containerd[1481]: time="2025-04-30T00:18:22.203763748Z" level=info msg="StartContainer for \"09d684cd483f93913a7bb56687bb98266ee0032b473869a26067a94994e3a17d\" returns successfully"
Apr 30 00:18:22.213661 systemd[1]: cri-containerd-09d684cd483f93913a7bb56687bb98266ee0032b473869a26067a94994e3a17d.scope: Deactivated successfully.
Apr 30 00:18:22.249291 containerd[1481]: time="2025-04-30T00:18:22.249189646Z" level=info msg="shim disconnected" id=09d684cd483f93913a7bb56687bb98266ee0032b473869a26067a94994e3a17d namespace=k8s.io
Apr 30 00:18:22.249734 containerd[1481]: time="2025-04-30T00:18:22.249529434Z" level=warning msg="cleaning up after shim disconnected" id=09d684cd483f93913a7bb56687bb98266ee0032b473869a26067a94994e3a17d namespace=k8s.io
Apr 30 00:18:22.249734 containerd[1481]: time="2025-04-30T00:18:22.249545243Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:18:23.106766 kubelet[2559]: E0430 00:18:23.106731 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:23.113356 containerd[1481]: time="2025-04-30T00:18:23.112909074Z" level=info msg="CreateContainer within sandbox \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 00:18:23.134314 containerd[1481]: time="2025-04-30T00:18:23.131422614Z" level=info msg="CreateContainer within sandbox \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7be4a725c45fb63c115d62d3c5349184cd31dcdf81bd65b07fccc0e569867c28\""
Apr 30 00:18:23.134314 containerd[1481]: time="2025-04-30T00:18:23.132546424Z" level=info msg="StartContainer for \"7be4a725c45fb63c115d62d3c5349184cd31dcdf81bd65b07fccc0e569867c28\""
Apr 30 00:18:23.138848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489980308.mount: Deactivated successfully.
Apr 30 00:18:23.196413 systemd[1]: Started cri-containerd-7be4a725c45fb63c115d62d3c5349184cd31dcdf81bd65b07fccc0e569867c28.scope - libcontainer container 7be4a725c45fb63c115d62d3c5349184cd31dcdf81bd65b07fccc0e569867c28.
Apr 30 00:18:23.281023 containerd[1481]: time="2025-04-30T00:18:23.280944400Z" level=info msg="StartContainer for \"7be4a725c45fb63c115d62d3c5349184cd31dcdf81bd65b07fccc0e569867c28\" returns successfully"
Apr 30 00:18:23.291070 systemd[1]: cri-containerd-7be4a725c45fb63c115d62d3c5349184cd31dcdf81bd65b07fccc0e569867c28.scope: Deactivated successfully.
Apr 30 00:18:23.321658 containerd[1481]: time="2025-04-30T00:18:23.321435585Z" level=info msg="shim disconnected" id=7be4a725c45fb63c115d62d3c5349184cd31dcdf81bd65b07fccc0e569867c28 namespace=k8s.io
Apr 30 00:18:23.321658 containerd[1481]: time="2025-04-30T00:18:23.321519764Z" level=warning msg="cleaning up after shim disconnected" id=7be4a725c45fb63c115d62d3c5349184cd31dcdf81bd65b07fccc0e569867c28 namespace=k8s.io
Apr 30 00:18:23.321658 containerd[1481]: time="2025-04-30T00:18:23.321532150Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:18:23.358693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7be4a725c45fb63c115d62d3c5349184cd31dcdf81bd65b07fccc0e569867c28-rootfs.mount: Deactivated successfully.
Apr 30 00:18:24.113490 kubelet[2559]: E0430 00:18:24.111663 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:24.115496 containerd[1481]: time="2025-04-30T00:18:24.115450074Z" level=info msg="CreateContainer within sandbox \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 00:18:24.136637 containerd[1481]: time="2025-04-30T00:18:24.136272897Z" level=info msg="CreateContainer within sandbox \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c0966a43b0b6517ab5f90c4cfe49379fb679e64dd0244b3231e503a5a50cbfd9\""
Apr 30 00:18:24.137247 containerd[1481]: time="2025-04-30T00:18:24.137031183Z" level=info msg="StartContainer for \"c0966a43b0b6517ab5f90c4cfe49379fb679e64dd0244b3231e503a5a50cbfd9\""
Apr 30 00:18:24.189486 systemd[1]: Started cri-containerd-c0966a43b0b6517ab5f90c4cfe49379fb679e64dd0244b3231e503a5a50cbfd9.scope - libcontainer container c0966a43b0b6517ab5f90c4cfe49379fb679e64dd0244b3231e503a5a50cbfd9.
Apr 30 00:18:24.230464 systemd[1]: cri-containerd-c0966a43b0b6517ab5f90c4cfe49379fb679e64dd0244b3231e503a5a50cbfd9.scope: Deactivated successfully.
Apr 30 00:18:24.233792 containerd[1481]: time="2025-04-30T00:18:24.232325975Z" level=info msg="StartContainer for \"c0966a43b0b6517ab5f90c4cfe49379fb679e64dd0244b3231e503a5a50cbfd9\" returns successfully"
Apr 30 00:18:24.266588 containerd[1481]: time="2025-04-30T00:18:24.266482973Z" level=info msg="shim disconnected" id=c0966a43b0b6517ab5f90c4cfe49379fb679e64dd0244b3231e503a5a50cbfd9 namespace=k8s.io
Apr 30 00:18:24.267553 containerd[1481]: time="2025-04-30T00:18:24.267034330Z" level=warning msg="cleaning up after shim disconnected" id=c0966a43b0b6517ab5f90c4cfe49379fb679e64dd0244b3231e503a5a50cbfd9 namespace=k8s.io
Apr 30 00:18:24.267553 containerd[1481]: time="2025-04-30T00:18:24.267070175Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:18:24.358932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0966a43b0b6517ab5f90c4cfe49379fb679e64dd0244b3231e503a5a50cbfd9-rootfs.mount: Deactivated successfully.
Apr 30 00:18:25.118262 kubelet[2559]: E0430 00:18:25.117712 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:25.122820 containerd[1481]: time="2025-04-30T00:18:25.121820046Z" level=info msg="CreateContainer within sandbox \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 00:18:25.143398 containerd[1481]: time="2025-04-30T00:18:25.143304504Z" level=info msg="CreateContainer within sandbox \"0a45f34659bafe901575543ce9513b1ead929c68b03a6f5bf1aa83ff38e0d766\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"22e5abdbccaa2543ce067ff359398719f3849765b2ea5f3cd6509edcc6ed7490\""
Apr 30 00:18:25.150110 containerd[1481]: time="2025-04-30T00:18:25.147009908Z" level=info msg="StartContainer for \"22e5abdbccaa2543ce067ff359398719f3849765b2ea5f3cd6509edcc6ed7490\""
Apr 30 00:18:25.194383 systemd[1]: Started cri-containerd-22e5abdbccaa2543ce067ff359398719f3849765b2ea5f3cd6509edcc6ed7490.scope - libcontainer container 22e5abdbccaa2543ce067ff359398719f3849765b2ea5f3cd6509edcc6ed7490.
Apr 30 00:18:25.229283 containerd[1481]: time="2025-04-30T00:18:25.229228369Z" level=info msg="StartContainer for \"22e5abdbccaa2543ce067ff359398719f3849765b2ea5f3cd6509edcc6ed7490\" returns successfully"
Apr 30 00:18:25.700199 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 00:18:26.124838 kubelet[2559]: E0430 00:18:26.124780 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:27.477021 kubelet[2559]: E0430 00:18:27.476979 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:28.690265 kubelet[2559]: E0430 00:18:28.689206 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:29.115234 systemd-networkd[1369]: lxc_health: Link UP
Apr 30 00:18:29.125324 systemd-networkd[1369]: lxc_health: Gained carrier
Apr 30 00:18:29.481093 kubelet[2559]: E0430 00:18:29.479529 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:29.505300 kubelet[2559]: I0430 00:18:29.505234 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vvjwc" podStartSLOduration=8.505215076 podStartE2EDuration="8.505215076s" podCreationTimestamp="2025-04-30 00:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:18:26.151283549 +0000 UTC m=+109.627474306" watchObservedRunningTime="2025-04-30 00:18:29.505215076 +0000 UTC m=+112.981405831"
Apr 30 00:18:30.136199 kubelet[2559]: E0430 00:18:30.136156 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:30.407195 systemd[1]: run-containerd-runc-k8s.io-22e5abdbccaa2543ce067ff359398719f3849765b2ea5f3cd6509edcc6ed7490-runc.RIGEbM.mount: Deactivated successfully.
Apr 30 00:18:30.968246 systemd-networkd[1369]: lxc_health: Gained IPv6LL
Apr 30 00:18:31.138299 kubelet[2559]: E0430 00:18:31.138260 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Apr 30 00:18:34.757771 systemd[1]: run-containerd-runc-k8s.io-22e5abdbccaa2543ce067ff359398719f3849765b2ea5f3cd6509edcc6ed7490-runc.fwPSN8.mount: Deactivated successfully.
Apr 30 00:18:36.701171 containerd[1481]: time="2025-04-30T00:18:36.700886450Z" level=info msg="StopPodSandbox for \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\""
Apr 30 00:18:36.701171 containerd[1481]: time="2025-04-30T00:18:36.701031731Z" level=info msg="TearDown network for sandbox \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\" successfully"
Apr 30 00:18:36.701171 containerd[1481]: time="2025-04-30T00:18:36.701050559Z" level=info msg="StopPodSandbox for \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\" returns successfully"
Apr 30 00:18:36.703857 containerd[1481]: time="2025-04-30T00:18:36.702766333Z" level=info msg="RemovePodSandbox for \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\""
Apr 30 00:18:36.703857 containerd[1481]: time="2025-04-30T00:18:36.702813044Z" level=info msg="Forcibly stopping sandbox \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\""
Apr 30 00:18:36.703857 containerd[1481]: time="2025-04-30T00:18:36.702909889Z" level=info msg="TearDown network for sandbox \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\" successfully"
Apr 30 00:18:36.708974 containerd[1481]: time="2025-04-30T00:18:36.708919102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:18:36.709382 containerd[1481]: time="2025-04-30T00:18:36.709219971Z" level=info msg="RemovePodSandbox \"d70a861270c05f815e405218ef529dd2219f2f958133cdf5847836892d4cca2e\" returns successfully"
Apr 30 00:18:36.710500 containerd[1481]: time="2025-04-30T00:18:36.710067321Z" level=info msg="StopPodSandbox for \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\""
Apr 30 00:18:36.710500 containerd[1481]: time="2025-04-30T00:18:36.710203199Z" level=info msg="TearDown network for sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" successfully"
Apr 30 00:18:36.711110 containerd[1481]: time="2025-04-30T00:18:36.710222488Z" level=info msg="StopPodSandbox for \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" returns successfully"
Apr 30 00:18:36.713195 containerd[1481]: time="2025-04-30T00:18:36.712742422Z" level=info msg="RemovePodSandbox for \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\""
Apr 30 00:18:36.713195 containerd[1481]: time="2025-04-30T00:18:36.712781505Z" level=info msg="Forcibly stopping sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\""
Apr 30 00:18:36.716835 containerd[1481]: time="2025-04-30T00:18:36.715200981Z" level=info msg="TearDown network for sandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" successfully"
Apr 30 00:18:36.718362 containerd[1481]: time="2025-04-30T00:18:36.718305586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:18:36.718555 containerd[1481]: time="2025-04-30T00:18:36.718537507Z" level=info msg="RemovePodSandbox \"965ab0421e311dbc8c22c21d566d203bc00aa3a58e9b9df8eec1b59f603c8563\" returns successfully"
Apr 30 00:18:37.021095 sshd[4370]: Connection closed by 147.75.109.163 port 34486
Apr 30 00:18:37.022604 sshd-session[4364]: pam_unix(sshd:session): session closed for user core
Apr 30 00:18:37.026547 systemd[1]: sshd@27-134.199.212.184:22-147.75.109.163:34486.service: Deactivated successfully.
Apr 30 00:18:37.029992 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 00:18:37.032479 systemd-logind[1455]: Session 28 logged out. Waiting for processes to exit.
Apr 30 00:18:37.034476 systemd-logind[1455]: Removed session 28.