Apr 30 00:19:58.021352 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:31:30 -00 2025 Apr 30 00:19:58.021400 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc Apr 30 00:19:58.021420 kernel: BIOS-provided physical RAM map: Apr 30 00:19:58.021428 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 30 00:19:58.021434 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 30 00:19:58.021441 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 30 00:19:58.021449 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Apr 30 00:19:58.021456 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Apr 30 00:19:58.021463 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 30 00:19:58.021473 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 30 00:19:58.021480 kernel: NX (Execute Disable) protection: active Apr 30 00:19:58.021487 kernel: APIC: Static calls initialized Apr 30 00:19:58.021498 kernel: SMBIOS 2.8 present. 
Apr 30 00:19:58.021506 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Apr 30 00:19:58.021531 kernel: Hypervisor detected: KVM Apr 30 00:19:58.021543 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 00:19:58.021555 kernel: kvm-clock: using sched offset of 3517730197 cycles Apr 30 00:19:58.021564 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 00:19:58.021572 kernel: tsc: Detected 2494.140 MHz processor Apr 30 00:19:58.021580 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 00:19:58.021589 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 00:19:58.021597 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Apr 30 00:19:58.021605 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 30 00:19:58.021613 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 00:19:58.021625 kernel: ACPI: Early table checksum verification disabled Apr 30 00:19:58.021633 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Apr 30 00:19:58.021641 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:19:58.021649 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:19:58.021657 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:19:58.021665 kernel: ACPI: FACS 0x000000007FFE0000 000040 Apr 30 00:19:58.021673 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:19:58.021681 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:19:58.021689 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:19:58.021700 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:19:58.021708 kernel: ACPI: Reserving FACP table 
memory at [mem 0x7ffe176a-0x7ffe17dd] Apr 30 00:19:58.021717 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Apr 30 00:19:58.021732 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Apr 30 00:19:58.021743 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Apr 30 00:19:58.021754 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Apr 30 00:19:58.021764 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Apr 30 00:19:58.021787 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Apr 30 00:19:58.021800 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 00:19:58.021811 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 00:19:58.021824 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Apr 30 00:19:58.021835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Apr 30 00:19:58.021852 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Apr 30 00:19:58.021864 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Apr 30 00:19:58.021880 kernel: Zone ranges: Apr 30 00:19:58.021892 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 00:19:58.021904 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Apr 30 00:19:58.021919 kernel: Normal empty Apr 30 00:19:58.021931 kernel: Movable zone start for each node Apr 30 00:19:58.021943 kernel: Early memory node ranges Apr 30 00:19:58.021955 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 30 00:19:58.021967 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Apr 30 00:19:58.021980 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Apr 30 00:19:58.021994 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 00:19:58.022003 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 30 00:19:58.022015 kernel: On node 0, zone DMA32: 37 pages in 
unavailable ranges Apr 30 00:19:58.022023 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 30 00:19:58.022032 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 00:19:58.022040 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 30 00:19:58.022069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 00:19:58.022078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 00:19:58.022086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 00:19:58.022098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 00:19:58.022107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 00:19:58.022115 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 00:19:58.022123 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 00:19:58.022132 kernel: TSC deadline timer available Apr 30 00:19:58.022144 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 00:19:58.022156 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 30 00:19:58.022167 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Apr 30 00:19:58.022179 kernel: Booting paravirtualized kernel on KVM Apr 30 00:19:58.022189 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 00:19:58.022210 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 00:19:58.022223 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 00:19:58.022235 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 00:19:58.022246 kernel: pcpu-alloc: [0] 0 1 Apr 30 00:19:58.022257 kernel: kvm-guest: PV spinlocks disabled, no host support Apr 30 00:19:58.022268 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc Apr 30 00:19:58.022277 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 00:19:58.022285 kernel: random: crng init done Apr 30 00:19:58.022298 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 00:19:58.022306 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 00:19:58.022314 kernel: Fallback order for Node 0: 0 Apr 30 00:19:58.022323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Apr 30 00:19:58.022331 kernel: Policy zone: DMA32 Apr 30 00:19:58.022339 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 00:19:58.022348 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42992K init, 2200K bss, 125152K reserved, 0K cma-reserved) Apr 30 00:19:58.022357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 00:19:58.022369 kernel: Kernel/User page tables isolation: enabled Apr 30 00:19:58.022377 kernel: ftrace: allocating 37946 entries in 149 pages Apr 30 00:19:58.022385 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 00:19:58.022394 kernel: Dynamic Preempt: voluntary Apr 30 00:19:58.022402 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 00:19:58.022412 kernel: rcu: RCU event tracing is enabled. Apr 30 00:19:58.022421 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 00:19:58.022429 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 00:19:58.022440 kernel: Rude variant of Tasks RCU enabled. Apr 30 00:19:58.022452 kernel: Tracing variant of Tasks RCU enabled. 
Apr 30 00:19:58.022468 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 00:19:58.022479 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 00:19:58.022491 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 30 00:19:58.022504 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 00:19:58.022536 kernel: Console: colour VGA+ 80x25 Apr 30 00:19:58.022544 kernel: printk: console [tty0] enabled Apr 30 00:19:58.022553 kernel: printk: console [ttyS0] enabled Apr 30 00:19:58.022561 kernel: ACPI: Core revision 20230628 Apr 30 00:19:58.022570 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 30 00:19:58.022583 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 00:19:58.022591 kernel: x2apic enabled Apr 30 00:19:58.022599 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 00:19:58.022608 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 30 00:19:58.022616 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Apr 30 00:19:58.022625 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) Apr 30 00:19:58.022633 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 30 00:19:58.022642 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 30 00:19:58.022663 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 00:19:58.022672 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 00:19:58.022681 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 00:19:58.022693 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 00:19:58.022701 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Apr 30 00:19:58.022710 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 00:19:58.022719 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 00:19:58.022728 kernel: MDS: Mitigation: Clear CPU buffers Apr 30 00:19:58.022737 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 00:19:58.022752 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 00:19:58.022761 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 00:19:58.022769 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 00:19:58.022778 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 00:19:58.022787 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 30 00:19:58.022796 kernel: Freeing SMP alternatives memory: 32K Apr 30 00:19:58.022805 kernel: pid_max: default: 32768 minimum: 301 Apr 30 00:19:58.022814 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 00:19:58.022830 kernel: landlock: Up and running. Apr 30 00:19:58.022843 kernel: SELinux: Initializing. 
Apr 30 00:19:58.022856 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 00:19:58.022868 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 00:19:58.022881 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Apr 30 00:19:58.022894 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:19:58.022906 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:19:58.022915 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:19:58.022924 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Apr 30 00:19:58.022938 kernel: signal: max sigframe size: 1776 Apr 30 00:19:58.022947 kernel: rcu: Hierarchical SRCU implementation. Apr 30 00:19:58.022956 kernel: rcu: Max phase no-delay instances is 400. Apr 30 00:19:58.022965 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 00:19:58.022974 kernel: smp: Bringing up secondary CPUs ... Apr 30 00:19:58.022983 kernel: smpboot: x86: Booting SMP configuration: Apr 30 00:19:58.022992 kernel: .... 
node #0, CPUs: #1 Apr 30 00:19:58.023001 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 00:19:58.023014 kernel: smpboot: Max logical packages: 1 Apr 30 00:19:58.023034 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Apr 30 00:19:58.023048 kernel: devtmpfs: initialized Apr 30 00:19:58.023060 kernel: x86/mm: Memory block size: 128MB Apr 30 00:19:58.023073 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 00:19:58.023087 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 00:19:58.023100 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 00:19:58.023109 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 00:19:58.023118 kernel: audit: initializing netlink subsys (disabled) Apr 30 00:19:58.023127 kernel: audit: type=2000 audit(1745972397.073:1): state=initialized audit_enabled=0 res=1 Apr 30 00:19:58.023139 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 00:19:58.023148 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 00:19:58.023160 kernel: cpuidle: using governor menu Apr 30 00:19:58.023175 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 00:19:58.023212 kernel: dca service started, version 1.12.1 Apr 30 00:19:58.023225 kernel: PCI: Using configuration type 1 for base access Apr 30 00:19:58.023251 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 00:19:58.023267 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 00:19:58.023284 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 00:19:58.023305 kernel: ACPI: Added _OSI(Module Device) Apr 30 00:19:58.023321 kernel: ACPI: Added _OSI(Processor Device) Apr 30 00:19:58.023336 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 00:19:58.023352 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 00:19:58.023368 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 00:19:58.023391 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 00:19:58.023416 kernel: ACPI: Interpreter enabled Apr 30 00:19:58.023433 kernel: ACPI: PM: (supports S0 S5) Apr 30 00:19:58.023449 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 00:19:58.023470 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 00:19:58.023486 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 00:19:58.023502 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Apr 30 00:19:58.023680 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 00:19:58.024026 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 30 00:19:58.024201 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 30 00:19:58.024354 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 30 00:19:58.024382 kernel: acpiphp: Slot [3] registered Apr 30 00:19:58.024398 kernel: acpiphp: Slot [4] registered Apr 30 00:19:58.024415 kernel: acpiphp: Slot [5] registered Apr 30 00:19:58.024449 kernel: acpiphp: Slot [6] registered Apr 30 00:19:58.024463 kernel: acpiphp: Slot [7] registered Apr 30 00:19:58.024478 kernel: acpiphp: Slot [8] registered Apr 30 00:19:58.024492 kernel: acpiphp: Slot [9] registered Apr 30 00:19:58.024508 kernel: 
acpiphp: Slot [10] registered Apr 30 00:19:58.024537 kernel: acpiphp: Slot [11] registered Apr 30 00:19:58.024552 kernel: acpiphp: Slot [12] registered Apr 30 00:19:58.024572 kernel: acpiphp: Slot [13] registered Apr 30 00:19:58.024588 kernel: acpiphp: Slot [14] registered Apr 30 00:19:58.024604 kernel: acpiphp: Slot [15] registered Apr 30 00:19:58.024619 kernel: acpiphp: Slot [16] registered Apr 30 00:19:58.024635 kernel: acpiphp: Slot [17] registered Apr 30 00:19:58.024652 kernel: acpiphp: Slot [18] registered Apr 30 00:19:58.024669 kernel: acpiphp: Slot [19] registered Apr 30 00:19:58.024684 kernel: acpiphp: Slot [20] registered Apr 30 00:19:58.024700 kernel: acpiphp: Slot [21] registered Apr 30 00:19:58.024720 kernel: acpiphp: Slot [22] registered Apr 30 00:19:58.024736 kernel: acpiphp: Slot [23] registered Apr 30 00:19:58.024752 kernel: acpiphp: Slot [24] registered Apr 30 00:19:58.024768 kernel: acpiphp: Slot [25] registered Apr 30 00:19:58.024784 kernel: acpiphp: Slot [26] registered Apr 30 00:19:58.024800 kernel: acpiphp: Slot [27] registered Apr 30 00:19:58.024816 kernel: acpiphp: Slot [28] registered Apr 30 00:19:58.024832 kernel: acpiphp: Slot [29] registered Apr 30 00:19:58.024848 kernel: acpiphp: Slot [30] registered Apr 30 00:19:58.024865 kernel: acpiphp: Slot [31] registered Apr 30 00:19:58.024885 kernel: PCI host bridge to bus 0000:00 Apr 30 00:19:58.025103 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 00:19:58.025244 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 00:19:58.025378 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 00:19:58.025513 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Apr 30 00:19:58.025666 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Apr 30 00:19:58.025800 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 00:19:58.026013 kernel: pci 0000:00:00.0: 
[8086:1237] type 00 class 0x060000 Apr 30 00:19:58.026185 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Apr 30 00:19:58.026380 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Apr 30 00:19:58.026576 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Apr 30 00:19:58.026729 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Apr 30 00:19:58.026880 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Apr 30 00:19:58.027040 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Apr 30 00:19:58.027189 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Apr 30 00:19:58.027316 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Apr 30 00:19:58.027423 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Apr 30 00:19:58.027567 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Apr 30 00:19:58.027683 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Apr 30 00:19:58.027807 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Apr 30 00:19:58.027920 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Apr 30 00:19:58.028017 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Apr 30 00:19:58.028113 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Apr 30 00:19:58.028207 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Apr 30 00:19:58.028301 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Apr 30 00:19:58.028395 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 00:19:58.028586 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Apr 30 00:19:58.028718 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Apr 30 00:19:58.028816 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Apr 30 00:19:58.028912 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit 
pref] Apr 30 00:19:58.029073 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 30 00:19:58.029262 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Apr 30 00:19:58.029361 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Apr 30 00:19:58.029487 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Apr 30 00:19:58.029648 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Apr 30 00:19:58.029778 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Apr 30 00:19:58.029876 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Apr 30 00:19:58.029970 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Apr 30 00:19:58.030137 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Apr 30 00:19:58.030294 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Apr 30 00:19:58.030440 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Apr 30 00:19:58.030607 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Apr 30 00:19:58.030789 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Apr 30 00:19:58.030938 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Apr 30 00:19:58.031259 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Apr 30 00:19:58.032840 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Apr 30 00:19:58.032990 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Apr 30 00:19:58.033133 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Apr 30 00:19:58.033257 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Apr 30 00:19:58.033271 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 00:19:58.033280 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 00:19:58.033289 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 00:19:58.033298 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 
00:19:58.033308 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 30 00:19:58.033323 kernel: iommu: Default domain type: Translated Apr 30 00:19:58.033333 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 00:19:58.034571 kernel: PCI: Using ACPI for IRQ routing Apr 30 00:19:58.034586 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 00:19:58.034596 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 30 00:19:58.034606 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Apr 30 00:19:58.034760 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Apr 30 00:19:58.034868 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Apr 30 00:19:58.034975 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 00:19:58.034989 kernel: vgaarb: loaded Apr 30 00:19:58.035001 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 30 00:19:58.035010 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 30 00:19:58.035019 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 00:19:58.035028 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 00:19:58.035038 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 00:19:58.035047 kernel: pnp: PnP ACPI init Apr 30 00:19:58.035056 kernel: pnp: PnP ACPI: found 4 devices Apr 30 00:19:58.035070 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 00:19:58.035080 kernel: NET: Registered PF_INET protocol family Apr 30 00:19:58.035089 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 00:19:58.035098 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 30 00:19:58.035107 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 00:19:58.035116 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 00:19:58.035125 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 00:19:58.035135 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 30 00:19:58.035144 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 00:19:58.035156 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 00:19:58.035170 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 00:19:58.035182 kernel: NET: Registered PF_XDP protocol family Apr 30 00:19:58.035304 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 00:19:58.035393 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 00:19:58.035479 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 00:19:58.036820 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Apr 30 00:19:58.036968 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Apr 30 00:19:58.037094 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Apr 30 00:19:58.037201 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 30 00:19:58.037215 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Apr 30 00:19:58.038690 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 43462 usecs Apr 30 00:19:58.038722 kernel: PCI: CLS 0 bytes, default 64 Apr 30 00:19:58.038732 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 30 00:19:58.038743 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Apr 30 00:19:58.038753 kernel: Initialise system trusted keyrings Apr 30 00:19:58.038770 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 30 00:19:58.038779 kernel: Key type asymmetric registered Apr 30 00:19:58.038788 kernel: Asymmetric key parser 'x509' registered Apr 30 00:19:58.038797 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 00:19:58.038806 
kernel: io scheduler mq-deadline registered Apr 30 00:19:58.038815 kernel: io scheduler kyber registered Apr 30 00:19:58.038824 kernel: io scheduler bfq registered Apr 30 00:19:58.038833 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 00:19:58.038844 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Apr 30 00:19:58.038853 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Apr 30 00:19:58.038865 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Apr 30 00:19:58.038874 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 00:19:58.038883 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 00:19:58.038892 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 00:19:58.038901 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 30 00:19:58.038910 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 00:19:58.038919 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 00:19:58.039082 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 30 00:19:58.039233 kernel: rtc_cmos 00:03: registered as rtc0 Apr 30 00:19:58.040691 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T00:19:57 UTC (1745972397) Apr 30 00:19:58.040857 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Apr 30 00:19:58.040881 kernel: intel_pstate: CPU model not supported Apr 30 00:19:58.040896 kernel: NET: Registered PF_INET6 protocol family Apr 30 00:19:58.040906 kernel: Segment Routing with IPv6 Apr 30 00:19:58.040915 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 00:19:58.040925 kernel: NET: Registered PF_PACKET protocol family Apr 30 00:19:58.040942 kernel: Key type dns_resolver registered Apr 30 00:19:58.040952 kernel: IPI shorthand broadcast: enabled Apr 30 00:19:58.040962 kernel: sched_clock: Marking stable (1048004561, 92112995)->(1238048179, -97930623) Apr 30 00:19:58.040971 kernel: registered taskstats version 1 Apr 30 00:19:58.040980 kernel: Loading 
compiled-in X.509 certificates Apr 30 00:19:58.040990 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: eb8928891d93dabd1aa89590482110d196038597' Apr 30 00:19:58.040999 kernel: Key type .fscrypt registered Apr 30 00:19:58.041007 kernel: Key type fscrypt-provisioning registered Apr 30 00:19:58.041017 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 00:19:58.041030 kernel: ima: Allocated hash algorithm: sha1 Apr 30 00:19:58.041043 kernel: ima: No architecture policies found Apr 30 00:19:58.041056 kernel: clk: Disabling unused clocks Apr 30 00:19:58.041069 kernel: Freeing unused kernel image (initmem) memory: 42992K Apr 30 00:19:58.041082 kernel: Write protecting the kernel read-only data: 36864k Apr 30 00:19:58.041127 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Apr 30 00:19:58.041144 kernel: Run /init as init process Apr 30 00:19:58.041157 kernel: with arguments: Apr 30 00:19:58.041171 kernel: /init Apr 30 00:19:58.041188 kernel: with environment: Apr 30 00:19:58.041202 kernel: HOME=/ Apr 30 00:19:58.041217 kernel: TERM=linux Apr 30 00:19:58.041232 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 00:19:58.041257 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:19:58.041275 systemd[1]: Detected virtualization kvm. Apr 30 00:19:58.041299 systemd[1]: Detected architecture x86-64. Apr 30 00:19:58.041313 systemd[1]: Running in initrd. Apr 30 00:19:58.041334 systemd[1]: No hostname configured, using default hostname. Apr 30 00:19:58.041349 systemd[1]: Hostname set to . Apr 30 00:19:58.041363 systemd[1]: Initializing machine ID from VM UUID. 
Apr 30 00:19:58.041376 systemd[1]: Queued start job for default target initrd.target. Apr 30 00:19:58.041390 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:19:58.041405 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:19:58.041421 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 00:19:58.041437 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:19:58.041459 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 00:19:58.041473 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 00:19:58.041489 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 00:19:58.041503 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 00:19:58.041608 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:19:58.041625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:19:58.041644 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:19:58.041658 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:19:58.041672 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:19:58.041691 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:19:58.041705 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:19:58.041720 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:19:58.041740 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 30 00:19:58.041755 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:19:58.041772 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:19:58.041784 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:19:58.041794 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:19:58.041810 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:19:58.041824 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:19:58.041838 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:19:58.041858 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:19:58.041874 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:19:58.041891 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:19:58.041901 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:19:58.041911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:19:58.041921 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:19:58.041975 systemd-journald[183]: Collecting audit messages is disabled.
Apr 30 00:19:58.042004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:19:58.042014 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:19:58.042027 systemd-journald[183]: Journal started
Apr 30 00:19:58.042053 systemd-journald[183]: Runtime Journal (/run/log/journal/f80c9ea777da43cba505d83090790ea0) is 4.9M, max 39.3M, 34.4M free.
Apr 30 00:19:58.031474 systemd-modules-load[184]: Inserted module 'overlay'
Apr 30 00:19:58.052554 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:19:58.084842 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:19:58.089801 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:19:58.089909 kernel: Bridge firewalling registered
Apr 30 00:19:58.090537 systemd-modules-load[184]: Inserted module 'br_netfilter'
Apr 30 00:19:58.093341 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:19:58.098145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:19:58.101361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:19:58.111865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:19:58.115769 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:19:58.117780 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:19:58.129697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:19:58.154863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:19:58.155775 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:19:58.158824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:19:58.167867 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:19:58.169402 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:19:58.173967 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:19:58.187789 dracut-cmdline[215]: dracut-dracut-053
Apr 30 00:19:58.192463 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:19:58.220481 systemd-resolved[217]: Positive Trust Anchors:
Apr 30 00:19:58.221267 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:19:58.221334 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:19:58.229602 systemd-resolved[217]: Defaulting to hostname 'linux'.
Apr 30 00:19:58.231932 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:19:58.233311 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:19:58.295592 kernel: SCSI subsystem initialized
Apr 30 00:19:58.310653 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:19:58.327604 kernel: iscsi: registered transport (tcp)
Apr 30 00:19:58.358611 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:19:58.358725 kernel: QLogic iSCSI HBA Driver
Apr 30 00:19:58.421035 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:19:58.431275 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:19:58.463785 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:19:58.463896 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:19:58.463917 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:19:58.513582 kernel: raid6: avx2x4 gen() 16640 MB/s
Apr 30 00:19:58.530588 kernel: raid6: avx2x2 gen() 16500 MB/s
Apr 30 00:19:58.547856 kernel: raid6: avx2x1 gen() 12638 MB/s
Apr 30 00:19:58.547949 kernel: raid6: using algorithm avx2x4 gen() 16640 MB/s
Apr 30 00:19:58.565892 kernel: raid6: .... xor() 6365 MB/s, rmw enabled
Apr 30 00:19:58.566007 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 00:19:58.589562 kernel: xor: automatically using best checksumming function avx
Apr 30 00:19:58.754560 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:19:58.767997 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:19:58.774812 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:19:58.792215 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Apr 30 00:19:58.797994 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:19:58.806375 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:19:58.844359 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Apr 30 00:19:58.883992 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:19:58.889919 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:19:58.955641 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:19:58.966341 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:19:59.011456 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:19:59.014853 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:19:59.017736 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:19:59.019626 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:19:59.026877 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:19:59.040915 kernel: scsi host0: Virtio SCSI HBA
Apr 30 00:19:59.067946 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:19:59.076583 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Apr 30 00:19:59.146710 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 00:19:59.146740 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Apr 30 00:19:59.146888 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 00:19:59.146902 kernel: AES CTR mode by8 optimization enabled
Apr 30 00:19:59.146914 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:19:59.146926 kernel: GPT:9289727 != 125829119
Apr 30 00:19:59.146979 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:19:59.146993 kernel: GPT:9289727 != 125829119
Apr 30 00:19:59.147005 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:19:59.147018 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:19:59.147029 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Apr 30 00:19:59.162534 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Apr 30 00:19:59.111317 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:19:59.111457 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:19:59.112104 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:19:59.112716 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:19:59.239106 kernel: ACPI: bus type USB registered
Apr 30 00:19:59.239135 kernel: libata version 3.00 loaded.
Apr 30 00:19:59.239149 kernel: usbcore: registered new interface driver usbfs
Apr 30 00:19:59.239161 kernel: usbcore: registered new interface driver hub
Apr 30 00:19:59.239178 kernel: usbcore: registered new device driver usb
Apr 30 00:19:59.239195 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Apr 30 00:19:59.239469 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Apr 30 00:19:59.239649 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Apr 30 00:19:59.239770 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Apr 30 00:19:59.239899 kernel: hub 1-0:1.0: USB hub found
Apr 30 00:19:59.240062 kernel: hub 1-0:1.0: 2 ports detected
Apr 30 00:19:59.240177 kernel: ata_piix 0000:00:01.1: version 2.13
Apr 30 00:19:59.243766 kernel: BTRFS: device fsid 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (460)
Apr 30 00:19:59.243799 kernel: scsi host1: ata_piix
Apr 30 00:19:59.244009 kernel: scsi host2: ata_piix
Apr 30 00:19:59.244214 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Apr 30 00:19:59.244235 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Apr 30 00:19:59.112920 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:19:59.117731 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:19:59.123833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:19:59.243727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:19:59.256205 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (449)
Apr 30 00:19:59.257839 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:19:59.267925 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 30 00:19:59.273690 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 30 00:19:59.281474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:19:59.288622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 30 00:19:59.290087 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 30 00:19:59.290964 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:19:59.298786 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:19:59.305786 disk-uuid[549]: Primary Header is updated.
Apr 30 00:19:59.305786 disk-uuid[549]: Secondary Entries is updated.
Apr 30 00:19:59.305786 disk-uuid[549]: Secondary Header is updated.
Apr 30 00:19:59.312725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:20:00.326558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:20:00.328355 disk-uuid[550]: The operation has completed successfully.
Apr 30 00:20:00.387037 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:20:00.387180 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:20:00.401901 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:20:00.412016 sh[561]: Success
Apr 30 00:20:00.434560 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 00:20:00.547577 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:20:00.559854 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:20:00.566629 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:20:00.600691 kernel: BTRFS info (device dm-0): first mount of filesystem 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f
Apr 30 00:20:00.600813 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:20:00.601953 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:20:00.603660 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:20:00.603736 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:20:00.619012 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:20:00.620841 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:20:00.630842 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:20:00.634815 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:20:00.650322 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:20:00.650413 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:20:00.650427 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:20:00.656647 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:20:00.672262 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:20:00.674006 kernel: BTRFS info (device vda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:20:00.683405 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:20:00.694921 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:20:00.830703 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:20:00.839025 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:20:00.867678 ignition[655]: Ignition 2.20.0
Apr 30 00:20:00.867710 ignition[655]: Stage: fetch-offline
Apr 30 00:20:00.867820 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:00.867836 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:20:00.868045 ignition[655]: parsed url from cmdline: ""
Apr 30 00:20:00.870869 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:20:00.868054 ignition[655]: no config URL provided
Apr 30 00:20:00.868062 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:20:00.868075 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:20:00.868084 ignition[655]: failed to fetch config: resource requires networking
Apr 30 00:20:00.868476 ignition[655]: Ignition finished successfully
Apr 30 00:20:00.886634 systemd-networkd[751]: lo: Link UP
Apr 30 00:20:00.886648 systemd-networkd[751]: lo: Gained carrier
Apr 30 00:20:00.890678 systemd-networkd[751]: Enumeration completed
Apr 30 00:20:00.891192 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Apr 30 00:20:00.891197 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Apr 30 00:20:00.892369 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:20:00.892596 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:20:00.892605 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:20:00.893714 systemd[1]: Reached target network.target - Network.
Apr 30 00:20:00.893891 systemd-networkd[751]: eth0: Link UP
Apr 30 00:20:00.893898 systemd-networkd[751]: eth0: Gained carrier
Apr 30 00:20:00.893914 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Apr 30 00:20:00.898128 systemd-networkd[751]: eth1: Link UP
Apr 30 00:20:00.898135 systemd-networkd[751]: eth1: Gained carrier
Apr 30 00:20:00.898155 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:20:00.903974 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:20:00.921419 systemd-networkd[751]: eth0: DHCPv4 address 146.190.146.79/20, gateway 146.190.144.1 acquired from 169.254.169.253
Apr 30 00:20:00.933661 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.18/20 acquired from 169.254.169.253
Apr 30 00:20:00.950119 ignition[754]: Ignition 2.20.0
Apr 30 00:20:00.951487 ignition[754]: Stage: fetch
Apr 30 00:20:00.951918 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:00.951932 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:20:00.952722 ignition[754]: parsed url from cmdline: ""
Apr 30 00:20:00.952730 ignition[754]: no config URL provided
Apr 30 00:20:00.952743 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:20:00.952766 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:20:00.952805 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Apr 30 00:20:00.990997 ignition[754]: GET result: OK
Apr 30 00:20:00.991377 ignition[754]: parsing config with SHA512: 5fb26344a85cb2e317da8b8514c3899f44ef5af84c453a5a79915de08756b2988ff0f21db90089419c52bc09d2aad9407fa92548d687a842f41144321cdb5807
Apr 30 00:20:01.007624 unknown[754]: fetched base config from "system"
Apr 30 00:20:01.007655 unknown[754]: fetched base config from "system"
Apr 30 00:20:01.007669 unknown[754]: fetched user config from "digitalocean"
Apr 30 00:20:01.010399 ignition[754]: fetch: fetch complete
Apr 30 00:20:01.010412 ignition[754]: fetch: fetch passed
Apr 30 00:20:01.010630 ignition[754]: Ignition finished successfully
Apr 30 00:20:01.014478 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:20:01.022992 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:20:01.073734 ignition[762]: Ignition 2.20.0
Apr 30 00:20:01.073757 ignition[762]: Stage: kargs
Apr 30 00:20:01.074188 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:01.074207 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:20:01.076070 ignition[762]: kargs: kargs passed
Apr 30 00:20:01.076180 ignition[762]: Ignition finished successfully
Apr 30 00:20:01.077944 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:20:01.088035 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:20:01.131106 ignition[769]: Ignition 2.20.0
Apr 30 00:20:01.131126 ignition[769]: Stage: disks
Apr 30 00:20:01.131495 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:01.131557 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:20:01.135246 ignition[769]: disks: disks passed
Apr 30 00:20:01.135372 ignition[769]: Ignition finished successfully
Apr 30 00:20:01.142185 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:20:01.143868 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:20:01.144620 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:20:01.145070 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:20:01.147369 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:20:01.147760 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:20:01.161157 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:20:01.194156 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 00:20:01.199835 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:20:01.207994 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:20:01.385536 kernel: EXT4-fs (vda9): mounted filesystem 21480c83-ef05-4682-ad3b-f751980943a0 r/w with ordered data mode. Quota mode: none.
Apr 30 00:20:01.387867 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:20:01.389183 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:20:01.401775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:20:01.406693 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:20:01.409910 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Apr 30 00:20:01.428662 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (785)
Apr 30 00:20:01.430098 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 00:20:01.438843 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:20:01.438897 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:20:01.438920 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:20:01.443575 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:20:01.442930 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:20:01.443007 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:20:01.452866 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:20:01.458159 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:20:01.469957 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:20:01.587478 coreos-metadata[787]: Apr 30 00:20:01.587 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 00:20:01.606710 coreos-metadata[787]: Apr 30 00:20:01.606 INFO Fetch successful
Apr 30 00:20:01.609784 coreos-metadata[788]: Apr 30 00:20:01.609 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 00:20:01.614794 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:20:01.620042 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Apr 30 00:20:01.621297 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Apr 30 00:20:01.626014 coreos-metadata[788]: Apr 30 00:20:01.625 INFO Fetch successful
Apr 30 00:20:01.634142 coreos-metadata[788]: Apr 30 00:20:01.633 INFO wrote hostname ci-4152.2.3-1-91c0161c2f to /sysroot/etc/hostname
Apr 30 00:20:01.635903 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:20:01.640018 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:20:01.651476 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:20:01.659902 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:20:01.873932 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:20:01.882860 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:20:01.897336 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:20:01.915577 kernel: BTRFS info (device vda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:20:01.917653 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:20:01.944214 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:20:01.971734 ignition[907]: INFO : Ignition 2.20.0
Apr 30 00:20:01.971734 ignition[907]: INFO : Stage: mount
Apr 30 00:20:01.973315 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:01.973315 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:20:01.974703 ignition[907]: INFO : mount: mount passed
Apr 30 00:20:01.974703 ignition[907]: INFO : Ignition finished successfully
Apr 30 00:20:01.977228 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:20:01.982852 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:20:02.024626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:20:02.053592 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (919)
Apr 30 00:20:02.055628 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:20:02.055755 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:20:02.057611 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:20:02.070577 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:20:02.074353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:20:02.120745 ignition[936]: INFO : Ignition 2.20.0
Apr 30 00:20:02.120745 ignition[936]: INFO : Stage: files
Apr 30 00:20:02.122423 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:02.122423 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:20:02.124214 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:20:02.127237 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:20:02.127237 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:20:02.133608 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:20:02.134998 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:20:02.136901 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:20:02.136347 unknown[936]: wrote ssh authorized keys file for user: core
Apr 30 00:20:02.139569 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:20:02.141142 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:20:02.141142 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 00:20:02.141142 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 00:20:02.307228 systemd-networkd[751]: eth0: Gained IPv6LL
Apr 30 00:20:02.627209 systemd-networkd[751]: eth1: Gained IPv6LL
Apr 30 00:20:02.689936 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:20:03.077311 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 00:20:03.077311 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:20:03.079744 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 00:20:03.791491 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 30 00:20:03.869958 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:20:03.869958 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:20:03.871964 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 00:20:04.221553 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 30 00:20:04.532156 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:20:04.532156 ignition[936]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 30 00:20:04.533851 ignition[936]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:20:04.534766 ignition[936]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:20:04.534766 ignition[936]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 30 00:20:04.534766 ignition[936]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 30 00:20:04.534766 ignition[936]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:20:04.534766 ignition[936]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:20:04.534766 ignition[936]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 30 00:20:04.534766 ignition[936]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:20:04.534766 ignition[936]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:20:04.534766 ignition[936]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:20:04.542004 ignition[936]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:20:04.542004 ignition[936]: INFO : files: files passed
Apr 30 00:20:04.542004 ignition[936]: INFO : Ignition finished successfully
Apr 30 00:20:04.536646 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:20:04.545961 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:20:04.548793 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:20:04.566598 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:20:04.567757 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:20:04.583173 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:20:04.583173 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:20:04.586130 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:20:04.589617 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:20:04.590393 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:20:04.594846 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:20:04.647750 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:20:04.647954 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:20:04.649501 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:20:04.650206 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:20:04.651229 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:20:04.658912 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:20:04.681726 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:20:04.685794 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:20:04.703793 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:20:04.705082 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:20:04.706349 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:20:04.707335 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:20:04.707539 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:20:04.709452 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:20:04.710647 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:20:04.711725 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:20:04.713143 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:20:04.714403 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:20:04.715645 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:20:04.716766 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:20:04.717884 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:20:04.718457 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:20:04.719184 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:20:04.719935 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:20:04.720078 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:20:04.721225 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:20:04.722400 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:20:04.723373 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:20:04.723625 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:20:04.724219 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:20:04.724363 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:20:04.725702 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:20:04.725916 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:20:04.726938 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:20:04.727165 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:20:04.728247 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 00:20:04.728364 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:20:04.736130 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:20:04.737278 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:20:04.737484 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:20:04.747835 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:20:04.748831 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:20:04.749035 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:20:04.758235 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:20:04.758380 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:20:04.763480 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:20:04.768783 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:20:04.773374 ignition[988]: INFO : Ignition 2.20.0
Apr 30 00:20:04.773374 ignition[988]: INFO : Stage: umount
Apr 30 00:20:04.775218 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:04.775218 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Apr 30 00:20:04.779290 ignition[988]: INFO : umount: umount passed
Apr 30 00:20:04.779290 ignition[988]: INFO : Ignition finished successfully
Apr 30 00:20:04.778635 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:20:04.778850 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:20:04.780500 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:20:04.781265 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:20:04.783705 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:20:04.783784 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:20:04.784307 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 00:20:04.784374 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 00:20:04.785009 systemd[1]: Stopped target network.target - Network.
Apr 30 00:20:04.785961 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:20:04.786033 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:20:04.787070 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:20:04.787947 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:20:04.788668 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:20:04.789607 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:20:04.789993 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:20:04.790423 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:20:04.790494 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:20:04.791690 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:20:04.791757 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:20:04.792992 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:20:04.793105 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:20:04.795754 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:20:04.795845 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:20:04.796853 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:20:04.797851 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:20:04.800405 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:20:04.800607 systemd-networkd[751]: eth1: DHCPv6 lease lost
Apr 30 00:20:04.801678 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:20:04.801833 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:20:04.803560 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:20:04.804069 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:20:04.804713 systemd-networkd[751]: eth0: DHCPv6 lease lost
Apr 30 00:20:04.806836 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:20:04.807640 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:20:04.809482 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:20:04.811810 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:20:04.818769 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:20:04.819304 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:20:04.819423 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:20:04.820082 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:20:04.822446 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:20:04.828123 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:20:04.843470 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:20:04.844275 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:20:04.847654 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:20:04.847868 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:20:04.851846 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:20:04.851950 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:20:04.852655 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:20:04.852720 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:20:04.853623 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:20:04.853713 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:20:04.855034 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:20:04.855128 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:20:04.856129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:20:04.856223 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:20:04.863878 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:20:04.864336 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:20:04.864605 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:20:04.865066 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:20:04.865129 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:20:04.865494 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:20:04.867839 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:20:04.868882 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:20:04.868947 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:20:04.870457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:20:04.870555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:20:04.873817 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:20:04.873937 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:20:04.875806 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:20:04.886922 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:20:04.898666 systemd[1]: Switching root.
Apr 30 00:20:04.932234 systemd-journald[183]: Journal stopped
Apr 30 00:20:06.551469 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:20:06.551626 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:20:06.551679 kernel: SELinux: policy capability open_perms=1
Apr 30 00:20:06.551699 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:20:06.551717 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:20:06.551737 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:20:06.551757 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:20:06.551777 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:20:06.551806 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:20:06.551831 kernel: audit: type=1403 audit(1745972405.272:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:20:06.551865 systemd[1]: Successfully loaded SELinux policy in 46.321ms.
Apr 30 00:20:06.551903 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.723ms.
Apr 30 00:20:06.551925 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:20:06.551945 systemd[1]: Detected virtualization kvm.
Apr 30 00:20:06.551964 systemd[1]: Detected architecture x86-64.
Apr 30 00:20:06.551986 systemd[1]: Detected first boot.
Apr 30 00:20:06.552004 systemd[1]: Hostname set to .
Apr 30 00:20:06.552021 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:20:06.552042 zram_generator::config[1051]: No configuration found.
Apr 30 00:20:06.552080 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:20:06.552109 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:20:06.552131 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 00:20:06.552156 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:20:06.552174 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:20:06.552194 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:20:06.552213 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:20:06.552232 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:20:06.552259 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:20:06.552282 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:20:06.552300 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:20:06.552318 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:20:06.552335 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:20:06.552352 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:20:06.552370 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:20:06.552389 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:20:06.552415 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:20:06.552450 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 00:20:06.552471 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:20:06.552490 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:20:06.552511 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:20:06.552833 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:20:06.552862 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:20:06.552900 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:20:06.552920 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:20:06.552939 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:20:06.552959 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:20:06.552981 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:20:06.553002 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:20:06.553021 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:20:06.553040 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:20:06.553059 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:20:06.553078 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:20:06.553112 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:20:06.553132 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:20:06.553151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:06.553172 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:20:06.553191 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:20:06.553210 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:20:06.553230 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:20:06.553250 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:20:06.553283 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:20:06.553304 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:20:06.553327 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:20:06.553347 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:20:06.553367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:20:06.553386 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:20:06.553407 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:20:06.553452 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:20:06.553486 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 30 00:20:06.553509 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 30 00:20:06.553544 kernel: fuse: init (API version 7.39)
Apr 30 00:20:06.553567 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:20:06.553586 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:20:06.553604 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:20:06.553622 kernel: loop: module loaded
Apr 30 00:20:06.553641 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:20:06.553660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:20:06.553692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:06.553713 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:20:06.553732 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:20:06.553751 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:20:06.553769 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:20:06.553790 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:20:06.553814 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:20:06.553833 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:20:06.553851 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:20:06.553880 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:20:06.553898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:20:06.553916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:20:06.553935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:20:06.553964 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:20:06.553981 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:20:06.554001 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:20:06.554022 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:20:06.554042 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:20:06.554063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:20:06.554122 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:20:06.554148 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:20:06.554171 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:20:06.554264 systemd-journald[1142]: Collecting audit messages is disabled.
Apr 30 00:20:06.554314 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:20:06.554336 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:20:06.554360 systemd-journald[1142]: Journal started
Apr 30 00:20:06.554414 systemd-journald[1142]: Runtime Journal (/run/log/journal/f80c9ea777da43cba505d83090790ea0) is 4.9M, max 39.3M, 34.4M free.
Apr 30 00:20:06.562600 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:20:06.575562 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:20:06.578551 kernel: ACPI: bus type drm_connector registered
Apr 30 00:20:06.592684 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:20:06.603557 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:20:06.609188 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:20:06.610386 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:20:06.612795 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:20:06.613773 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:20:06.614959 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:20:06.651995 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:20:06.661898 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:20:06.676824 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:20:06.678021 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:20:06.682743 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Apr 30 00:20:06.682769 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Apr 30 00:20:06.714007 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:20:06.715732 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:20:06.719944 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:20:06.737869 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:20:06.747783 systemd-journald[1142]: Time spent on flushing to /var/log/journal/f80c9ea777da43cba505d83090790ea0 is 50.458ms for 982 entries.
Apr 30 00:20:06.747783 systemd-journald[1142]: System Journal (/var/log/journal/f80c9ea777da43cba505d83090790ea0) is 8.0M, max 195.6M, 187.6M free.
Apr 30 00:20:06.821895 systemd-journald[1142]: Received client request to flush runtime journal.
Apr 30 00:20:06.753125 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:20:06.756679 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:20:06.776875 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:20:06.790943 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:20:06.825529 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:20:06.857545 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:20:06.862672 udevadm[1205]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 00:20:06.870092 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:20:06.909553 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Apr 30 00:20:06.910017 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Apr 30 00:20:06.918294 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:20:07.844110 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:20:07.857930 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:20:07.894809 systemd-udevd[1218]: Using default interface naming scheme 'v255'.
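The journald entries above show the first-boot flush from the volatile runtime journal in /run/log/journal (4.9M, capped at 39.3M) to the persistent system journal in /var/log/journal (8.0M, capped at 195.6M). Those caps are governed by journald.conf; a hedged sketch of the relevant keys — the values below are illustrative, not read from this system, which is using journald's built-in defaults:

```ini
# /etc/systemd/journald.conf (illustrative values)
[Journal]
Storage=persistent    # keep logs in /var/log/journal; journal-flush moves
                      # the early-boot runtime journal there once /var is writable
RuntimeMaxUse=40M     # cap for the volatile journal under /run
SystemMaxUse=200M     # cap for the persistent journal under /var
```

With no explicit settings, journald derives both caps from the size of the backing filesystem, which is why the logged maxima (39.3M and 195.6M) are odd-looking percentages rather than round numbers.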
Apr 30 00:20:07.926779 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:20:07.939228 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:20:07.979258 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:20:08.046272 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:08.046566 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:20:08.059730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:20:08.071563 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1232)
Apr 30 00:20:08.074766 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:20:08.080947 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:20:08.083275 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:20:08.083338 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:20:08.083408 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:08.084031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:20:08.084289 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:20:08.093186 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:20:08.098010 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:20:08.104461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:20:08.108768 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:20:08.109915 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:20:08.112891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:20:08.114811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:20:08.159086 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 30 00:20:08.271529 systemd-networkd[1223]: lo: Link UP
Apr 30 00:20:08.272479 systemd-networkd[1223]: lo: Gained carrier
Apr 30 00:20:08.276318 systemd-networkd[1223]: Enumeration completed
Apr 30 00:20:08.276868 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:20:08.280232 systemd-networkd[1223]: eth0: Configuring with /run/systemd/network/10-f2:d4:dd:c1:4f:7d.network.
Apr 30 00:20:08.282405 systemd-networkd[1223]: eth1: Configuring with /run/systemd/network/10-46:b4:1b:fe:d7:f4.network.
Apr 30 00:20:08.283628 systemd-networkd[1223]: eth0: Link UP
Apr 30 00:20:08.283644 systemd-networkd[1223]: eth0: Gained carrier
Apr 30 00:20:08.284903 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:20:08.288944 systemd-networkd[1223]: eth1: Link UP
Apr 30 00:20:08.288954 systemd-networkd[1223]: eth1: Gained carrier
Apr 30 00:20:08.339628 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 30 00:20:08.349545 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Apr 30 00:20:08.385931 kernel: ACPI: button: Power Button [PWRF]
Apr 30 00:20:08.385963 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 30 00:20:08.381388 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:20:08.437567 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 00:20:08.452054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:20:08.518691 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Apr 30 00:20:08.518788 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Apr 30 00:20:08.526551 kernel: Console: switching to colour dummy device 80x25
Apr 30 00:20:08.526676 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 30 00:20:08.526712 kernel: [drm] features: -context_init
Apr 30 00:20:08.528547 kernel: [drm] number of scanouts: 1
Apr 30 00:20:08.530542 kernel: [drm] number of cap sets: 0
Apr 30 00:20:08.547557 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Apr 30 00:20:08.563548 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 30 00:20:08.563660 kernel: Console: switching to colour frame buffer device 128x48
Apr 30 00:20:08.557489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:20:08.557957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:20:08.574305 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 30 00:20:08.582775 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:20:08.603346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:20:08.603890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:20:08.619891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:20:08.631630 kernel: EDAC MC: Ver: 3.0.0
Apr 30 00:20:08.677320 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:20:08.690425 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:20:08.703938 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:20:08.722322 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:20:08.750338 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:20:08.751301 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:20:08.757949 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:20:08.775888 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:20:08.814755 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:20:08.817473 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:20:08.824789 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Apr 30 00:20:08.826231 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:20:08.826287 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:20:08.830006 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:20:08.850554 kernel: ISO 9660 Extensions: RRIP_1991A
Apr 30 00:20:08.852505 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Apr 30 00:20:08.853329 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:20:08.857378 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:20:08.865827 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:20:08.877830 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:20:08.878863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:20:08.884886 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:20:08.895907 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:20:08.910898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:20:08.914631 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:20:08.940945 kernel: loop0: detected capacity change from 0 to 8
Apr 30 00:20:08.946251 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:20:08.959657 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:20:08.953805 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:20:08.982584 kernel: loop1: detected capacity change from 0 to 138184
Apr 30 00:20:09.039427 kernel: loop2: detected capacity change from 0 to 210664
Apr 30 00:20:09.079638 kernel: loop3: detected capacity change from 0 to 140992
Apr 30 00:20:09.127691 kernel: loop4: detected capacity change from 0 to 8
Apr 30 00:20:09.132877 kernel: loop5: detected capacity change from 0 to 138184
Apr 30 00:20:09.163026 kernel: loop6: detected capacity change from 0 to 210664
Apr 30 00:20:09.182338 kernel: loop7: detected capacity change from 0 to 140992
Apr 30 00:20:09.202457 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Apr 30 00:20:09.205620 (sd-merge)[1312]: Merged extensions into '/usr'.
Apr 30 00:20:09.211860 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:20:09.212119 systemd[1]: Reloading...
Apr 30 00:20:09.349217 systemd-networkd[1223]: eth1: Gained IPv6LL
Apr 30 00:20:09.389051 zram_generator::config[1343]: No configuration found.
Apr 30 00:20:09.539237 systemd-networkd[1223]: eth0: Gained IPv6LL
Apr 30 00:20:09.669186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:20:09.684997 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:20:09.765736 systemd[1]: Reloading finished in 552 ms.
Apr 30 00:20:09.787817 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 00:20:09.790919 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:20:09.793602 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:20:09.811272 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:20:09.819122 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:20:09.828114 systemd[1]: Reloading requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:20:09.828139 systemd[1]: Reloading...
Apr 30 00:20:09.878600 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:20:09.879049 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:20:09.880145 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:20:09.880469 systemd-tmpfiles[1393]: ACLs are not supported, ignoring.
Apr 30 00:20:09.880578 systemd-tmpfiles[1393]: ACLs are not supported, ignoring.
Apr 30 00:20:09.884339 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:20:09.884357 systemd-tmpfiles[1393]: Skipping /boot
Apr 30 00:20:09.899470 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:20:09.899490 systemd-tmpfiles[1393]: Skipping /boot
Apr 30 00:20:09.974649 zram_generator::config[1422]: No configuration found.
Apr 30 00:20:10.142662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:20:10.242746 systemd[1]: Reloading finished in 411 ms.
Apr 30 00:20:10.259929 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:20:10.277803 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:20:10.296977 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:20:10.313763 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:20:10.323832 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:20:10.339990 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:20:10.353199 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:10.353429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:20:10.358281 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:20:10.376351 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:20:10.395056 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:20:10.396078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:20:10.396263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:10.424999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:20:10.425248 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:20:10.433913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:20:10.434109 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:20:10.439872 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:20:10.440099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:20:10.454171 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:20:10.459509 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:20:10.475833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:10.476095 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:20:10.490892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:20:10.502990 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:20:10.505788 augenrules[1512]: No rules
Apr 30 00:20:10.511845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:20:10.526898 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:20:10.530745 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:20:10.531021 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:20:10.531111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:10.535545 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:20:10.535866 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:20:10.542044 systemd-resolved[1480]: Positive Trust Anchors:
Apr 30 00:20:10.542533 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:20:10.542638 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:20:10.546952 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:20:10.549282 systemd-resolved[1480]: Using system hostname 'ci-4152.2.3-1-91c0161c2f'.
Apr 30 00:20:10.551923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:20:10.552193 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:20:10.557060 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:20:10.561465 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:20:10.562920 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:20:10.567449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:20:10.567830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:20:10.571022 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:20:10.571612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:20:10.585285 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:20:10.595391 systemd[1]: Reached target network.target - Network.
Apr 30 00:20:10.596961 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:20:10.597508 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:20:10.597939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:20:10.598025 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:20:10.607751 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 00:20:10.618824 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:20:10.644562 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:20:10.697199 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 00:20:10.698045 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:20:10.698665 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:20:10.699894 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:20:10.701252 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:20:10.701858 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:20:10.701896 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:20:10.702277 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:20:10.704634 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:20:10.705328 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:20:10.706312 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:20:10.708796 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:20:10.712265 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:20:10.717157 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:20:10.723458 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:20:10.724200 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:20:10.726582 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:20:10.728243 systemd[1]: System is tainted: cgroupsv1
Apr 30 00:20:10.729340 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:20:10.729373 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:20:10.731303 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:20:10.735697 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 00:20:10.741851 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:20:10.758722 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:20:10.770935 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:20:10.773630 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:20:10.778088 jq[1545]: false
Apr 30 00:20:10.784108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:20:10.803774 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:20:10.818507 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:20:10.820806 dbus-daemon[1543]: [system] SELinux support is enabled
Apr 30 00:20:10.830706 coreos-metadata[1542]: Apr 30 00:20:10.830 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 00:20:10.838044 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:20:10.843786 coreos-metadata[1542]: Apr 30 00:20:10.842 INFO Fetch successful
Apr 30 00:20:10.851029 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:20:10.867808 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:20:10.873584 extend-filesystems[1546]: Found loop4
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found loop5
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found loop6
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found loop7
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found vda
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found vda1
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found vda2
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found vda3
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found usr
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found vda4
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found vda6
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found vda7
Apr 30 00:20:10.888283 extend-filesystems[1546]: Found vda9
Apr 30 00:20:10.888283 extend-filesystems[1546]: Checking size of /dev/vda9
Apr 30 00:20:10.894761 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:20:10.908201 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:20:10.924249 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:20:10.936715 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:20:10.942771 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:20:10.952629 extend-filesystems[1546]: Resized partition /dev/vda9
Apr 30 00:20:10.967084 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:20:10.967379 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:20:10.978551 extend-filesystems[1585]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:20:11.006093 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Apr 30 00:20:11.006147 update_engine[1579]: I20250430 00:20:10.987834 1579 main.cc:92] Flatcar Update Engine starting
Apr 30 00:20:11.006147 update_engine[1579]: I20250430 00:20:11.004982 1579 update_check_scheduler.cc:74] Next update check in 2m35s
Apr 30 00:20:11.710720 jq[1581]: true
Apr 30 00:20:10.982408 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:20:10.982755 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:20:10.997232 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:20:11.705208 systemd-resolved[1480]: Clock change detected. Flushing caches.
Apr 30 00:20:11.706036 systemd-timesyncd[1534]: Contacted time server 216.31.17.12:123 (0.flatcar.pool.ntp.org).
Apr 30 00:20:11.706157 systemd-timesyncd[1534]: Initial clock synchronization to Wed 2025-04-30 00:20:11.705124 UTC.
Apr 30 00:20:11.714636 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:20:11.715221 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:20:11.762353 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:20:11.775361 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 00:20:11.791475 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:20:11.792019 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:20:11.792071 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:20:11.792692 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:20:11.792833 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Apr 30 00:20:11.792883 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:20:11.798894 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:20:11.809149 jq[1591]: true
Apr 30 00:20:11.802454 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:20:11.814214 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:20:11.852254 tar[1590]: linux-amd64/helm
Apr 30 00:20:11.930422 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1612)
Apr 30 00:20:11.980473 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Apr 30 00:20:12.011339 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 30 00:20:12.011339 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 8
Apr 30 00:20:12.011339 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Apr 30 00:20:12.044438 extend-filesystems[1546]: Resized filesystem in /dev/vda9
Apr 30 00:20:12.044438 extend-filesystems[1546]: Found vdb
Apr 30 00:20:12.018217 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:20:12.018609 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:20:12.059447 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 00:20:12.098529 systemd-logind[1569]: New seat seat0.
Apr 30 00:20:12.126247 systemd-logind[1569]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 30 00:20:12.126281 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 00:20:12.131030 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:20:12.155962 bash[1640]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:20:12.163538 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:20:12.199315 systemd[1]: Starting sshkeys.service...
Apr 30 00:20:12.294326 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 00:20:12.309267 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 00:20:12.441384 coreos-metadata[1651]: Apr 30 00:20:12.441 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Apr 30 00:20:12.443207 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:20:12.459533 containerd[1593]: time="2025-04-30T00:20:12.458268556Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 00:20:12.467887 coreos-metadata[1651]: Apr 30 00:20:12.464 INFO Fetch successful
Apr 30 00:20:12.484075 unknown[1651]: wrote ssh authorized keys file for user: core
Apr 30 00:20:12.548309 update-ssh-keys[1660]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:20:12.552585 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 00:20:12.566566 systemd[1]: Finished sshkeys.service.
Apr 30 00:20:12.590727 containerd[1593]: time="2025-04-30T00:20:12.590336481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:12.597029 containerd[1593]: time="2025-04-30T00:20:12.596763524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:20:12.597029 containerd[1593]: time="2025-04-30T00:20:12.596828418Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:20:12.597029 containerd[1593]: time="2025-04-30T00:20:12.596885679Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:20:12.597226 containerd[1593]: time="2025-04-30T00:20:12.597174595Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:20:12.597226 containerd[1593]: time="2025-04-30T00:20:12.597211633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:12.597333 containerd[1593]: time="2025-04-30T00:20:12.597306829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:20:12.597333 containerd[1593]: time="2025-04-30T00:20:12.597327595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:12.600198 containerd[1593]: time="2025-04-30T00:20:12.597722318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:20:12.600198 containerd[1593]: time="2025-04-30T00:20:12.597758373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:12.600198 containerd[1593]: time="2025-04-30T00:20:12.597781213Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:20:12.600198 containerd[1593]: time="2025-04-30T00:20:12.597798388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:12.600198 containerd[1593]: time="2025-04-30T00:20:12.597990308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:12.600198 containerd[1593]: time="2025-04-30T00:20:12.598325965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:12.600198 containerd[1593]: time="2025-04-30T00:20:12.598609063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:20:12.600198 containerd[1593]: time="2025-04-30T00:20:12.598635928Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:20:12.600198 containerd[1593]: time="2025-04-30T00:20:12.598786970Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:20:12.600198 containerd[1593]: time="2025-04-30T00:20:12.599806173Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:20:12.607387 containerd[1593]: time="2025-04-30T00:20:12.605918002Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:20:12.607387 containerd[1593]: time="2025-04-30T00:20:12.606033069Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:20:12.607387 containerd[1593]: time="2025-04-30T00:20:12.606060937Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:20:12.607387 containerd[1593]: time="2025-04-30T00:20:12.606093339Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:20:12.607387 containerd[1593]: time="2025-04-30T00:20:12.606120983Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:20:12.607387 containerd[1593]: time="2025-04-30T00:20:12.606402206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:20:12.607387 containerd[1593]: time="2025-04-30T00:20:12.607148630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607434339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607461494Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607482116Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607501666Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607517726Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607531666Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607547514Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607563420Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607577978Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607591239Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607603348Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607628190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607643667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.607800 containerd[1593]: time="2025-04-30T00:20:12.607657118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607671236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607685000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607701230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607715057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607731175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607745070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607763344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607776914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607789040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607841517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607878615Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607907425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607922587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:20:12.608316 containerd[1593]: time="2025-04-30T00:20:12.607936901Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:20:12.608811 containerd[1593]: time="2025-04-30T00:20:12.608011421Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:20:12.608811 containerd[1593]: time="2025-04-30T00:20:12.608042871Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:20:12.608811 containerd[1593]: time="2025-04-30T00:20:12.608067698Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:20:12.608811 containerd[1593]: time="2025-04-30T00:20:12.608088720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:20:12.608811 containerd[1593]: time="2025-04-30T00:20:12.608103317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:20:12.608811 containerd[1593]: time="2025-04-30T00:20:12.608119016Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 00:20:12.608811 containerd[1593]: time="2025-04-30T00:20:12.608130649Z" level=info msg="NRI interface is disabled by configuration." Apr 30 00:20:12.608811 containerd[1593]: time="2025-04-30T00:20:12.608142652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 00:20:12.609160 containerd[1593]: time="2025-04-30T00:20:12.608534555Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:20:12.609160 containerd[1593]: time="2025-04-30T00:20:12.608612030Z" level=info msg="Connect containerd service" Apr 30 00:20:12.609160 containerd[1593]: time="2025-04-30T00:20:12.608700063Z" level=info msg="using legacy CRI server" Apr 30 00:20:12.609160 containerd[1593]: time="2025-04-30T00:20:12.608713548Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:20:12.618364 containerd[1593]: time="2025-04-30T00:20:12.615391941Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:20:12.621092 containerd[1593]: time="2025-04-30T00:20:12.621008626Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:20:12.621278 containerd[1593]: time="2025-04-30T00:20:12.621219707Z" level=info msg="Start subscribing containerd event" Apr 30 00:20:12.621320 containerd[1593]: time="2025-04-30T00:20:12.621291455Z" level=info msg="Start recovering state" Apr 30 00:20:12.621425 containerd[1593]: time="2025-04-30T00:20:12.621394058Z" level=info msg="Start event monitor" Apr 30 00:20:12.621479 containerd[1593]: time="2025-04-30T00:20:12.621441469Z" 
level=info msg="Start snapshots syncer" Apr 30 00:20:12.621479 containerd[1593]: time="2025-04-30T00:20:12.621457841Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:20:12.621479 containerd[1593]: time="2025-04-30T00:20:12.621470107Z" level=info msg="Start streaming server" Apr 30 00:20:12.625630 containerd[1593]: time="2025-04-30T00:20:12.625553555Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:20:12.625814 containerd[1593]: time="2025-04-30T00:20:12.625768091Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:20:12.627162 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:20:12.643677 containerd[1593]: time="2025-04-30T00:20:12.642969014Z" level=info msg="containerd successfully booted in 0.188588s" Apr 30 00:20:13.088525 sshd_keygen[1577]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:20:13.159736 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:20:13.179966 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:20:13.193626 systemd[1]: Started sshd@0-146.190.146.79:22-147.75.109.163:43382.service - OpenSSH per-connection server daemon (147.75.109.163:43382). Apr 30 00:20:13.210319 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:20:13.213247 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:20:13.236438 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:20:13.286723 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:20:13.302602 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:20:13.327684 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 00:20:13.336263 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 30 00:20:13.429942 tar[1590]: linux-amd64/LICENSE
Apr 30 00:20:13.429942 tar[1590]: linux-amd64/README.md
Apr 30 00:20:13.448275 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:20:13.452827 sshd[1679]: Accepted publickey for core from 147.75.109.163 port 43382 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:20:13.455330 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:13.466743 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 00:20:13.478342 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 00:20:13.485822 systemd-logind[1569]: New session 1 of user core.
Apr 30 00:20:13.504586 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 00:20:13.522496 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 00:20:13.540532 (systemd)[1700]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 00:20:13.718149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:20:13.722785 systemd[1700]: Queued start job for default target default.target.
Apr 30 00:20:13.726209 systemd[1700]: Created slice app.slice - User Application Slice.
Apr 30 00:20:13.726262 systemd[1700]: Reached target paths.target - Paths.
Apr 30 00:20:13.726284 systemd[1700]: Reached target timers.target - Timers.
Apr 30 00:20:13.729679 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 00:20:13.733666 systemd[1700]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 00:20:13.734645 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:20:13.750453 systemd[1700]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 00:20:13.750537 systemd[1700]: Reached target sockets.target - Sockets.
Apr 30 00:20:13.750553 systemd[1700]: Reached target basic.target - Basic System.
Apr 30 00:20:13.750617 systemd[1700]: Reached target default.target - Main User Target.
Apr 30 00:20:13.750650 systemd[1700]: Startup finished in 198ms.
Apr 30 00:20:13.750910 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 00:20:13.763432 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 00:20:13.767038 systemd[1]: Startup finished in 8.745s (kernel) + 7.841s (userspace) = 16.587s.
Apr 30 00:20:13.839789 systemd[1]: Started sshd@1-146.190.146.79:22-147.75.109.163:43394.service - OpenSSH per-connection server daemon (147.75.109.163:43394).
Apr 30 00:20:13.950279 sshd[1725]: Accepted publickey for core from 147.75.109.163 port 43394 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:20:13.952815 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:13.962157 systemd-logind[1569]: New session 2 of user core.
Apr 30 00:20:13.968526 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 00:20:14.048051 sshd[1732]: Connection closed by 147.75.109.163 port 43394
Apr 30 00:20:14.051340 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:14.062150 systemd[1]: Started sshd@2-146.190.146.79:22-147.75.109.163:43404.service - OpenSSH per-connection server daemon (147.75.109.163:43404).
Apr 30 00:20:14.062995 systemd[1]: sshd@1-146.190.146.79:22-147.75.109.163:43394.service: Deactivated successfully.
Apr 30 00:20:14.075252 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 00:20:14.080076 systemd-logind[1569]: Session 2 logged out. Waiting for processes to exit.
Apr 30 00:20:14.084413 systemd-logind[1569]: Removed session 2.
Apr 30 00:20:14.133034 sshd[1734]: Accepted publickey for core from 147.75.109.163 port 43404 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:20:14.137456 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:14.145388 systemd-logind[1569]: New session 3 of user core.
Apr 30 00:20:14.155192 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 00:20:14.218501 sshd[1740]: Connection closed by 147.75.109.163 port 43404
Apr 30 00:20:14.219188 sshd-session[1734]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:14.231368 systemd[1]: Started sshd@3-146.190.146.79:22-147.75.109.163:43410.service - OpenSSH per-connection server daemon (147.75.109.163:43410).
Apr 30 00:20:14.232066 systemd[1]: sshd@2-146.190.146.79:22-147.75.109.163:43404.service: Deactivated successfully.
Apr 30 00:20:14.245206 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 00:20:14.248026 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit.
Apr 30 00:20:14.252707 systemd-logind[1569]: Removed session 3.
Apr 30 00:20:14.303085 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 43410 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:20:14.305022 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:14.314795 systemd-logind[1569]: New session 4 of user core.
Apr 30 00:20:14.325381 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 00:20:14.407303 sshd[1748]: Connection closed by 147.75.109.163 port 43410
Apr 30 00:20:14.408996 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:14.417275 systemd[1]: Started sshd@4-146.190.146.79:22-147.75.109.163:43422.service - OpenSSH per-connection server daemon (147.75.109.163:43422).
Apr 30 00:20:14.420480 systemd[1]: sshd@3-146.190.146.79:22-147.75.109.163:43410.service: Deactivated successfully.
Apr 30 00:20:14.426830 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 00:20:14.434616 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit.
Apr 30 00:20:14.440493 systemd-logind[1569]: Removed session 4.
Apr 30 00:20:14.492338 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 43422 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:20:14.494083 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:14.506216 systemd-logind[1569]: New session 5 of user core.
Apr 30 00:20:14.509710 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 00:20:14.600959 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 00:20:14.601491 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:20:14.618129 sudo[1759]: pam_unix(sudo:session): session closed for user root
Apr 30 00:20:14.622000 sshd[1758]: Connection closed by 147.75.109.163 port 43422
Apr 30 00:20:14.625172 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:14.638379 systemd[1]: Started sshd@5-146.190.146.79:22-147.75.109.163:43438.service - OpenSSH per-connection server daemon (147.75.109.163:43438).
Apr 30 00:20:14.640549 systemd[1]: sshd@4-146.190.146.79:22-147.75.109.163:43422.service: Deactivated successfully.
Apr 30 00:20:14.655630 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 00:20:14.657032 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit.
Apr 30 00:20:14.662360 systemd-logind[1569]: Removed session 5.
Apr 30 00:20:14.715136 sshd[1761]: Accepted publickey for core from 147.75.109.163 port 43438 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:20:14.718119 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:14.730631 systemd-logind[1569]: New session 6 of user core.
Apr 30 00:20:14.739464 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 00:20:14.774627 kubelet[1713]: E0430 00:20:14.774517 1713 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:20:14.779404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:20:14.781189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:20:14.811964 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 00:20:14.812383 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:20:14.819409 sudo[1771]: pam_unix(sudo:session): session closed for user root
Apr 30 00:20:14.829943 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 30 00:20:14.830425 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:20:14.855563 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:20:14.913337 augenrules[1793]: No rules
Apr 30 00:20:14.915543 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:20:14.917150 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:20:14.920806 sudo[1770]: pam_unix(sudo:session): session closed for user root
Apr 30 00:20:14.924594 sshd[1767]: Connection closed by 147.75.109.163 port 43438
Apr 30 00:20:14.928066 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:14.939163 systemd[1]: Started sshd@6-146.190.146.79:22-147.75.109.163:43450.service - OpenSSH per-connection server daemon (147.75.109.163:43450).
Apr 30 00:20:14.939916 systemd[1]: sshd@5-146.190.146.79:22-147.75.109.163:43438.service: Deactivated successfully.
Apr 30 00:20:14.951075 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 00:20:14.952690 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit.
Apr 30 00:20:14.954114 systemd-logind[1569]: Removed session 6.
Apr 30 00:20:14.997400 sshd[1799]: Accepted publickey for core from 147.75.109.163 port 43450 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:20:14.999269 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:15.006057 systemd-logind[1569]: New session 7 of user core.
Apr 30 00:20:15.017489 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 00:20:15.082217 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 00:20:15.082645 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:20:15.678542 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 00:20:15.691789 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 00:20:16.164723 dockerd[1823]: time="2025-04-30T00:20:16.163487382Z" level=info msg="Starting up"
Apr 30 00:20:16.291696 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3045204441-merged.mount: Deactivated successfully.
Apr 30 00:20:16.679695 dockerd[1823]: time="2025-04-30T00:20:16.679206316Z" level=info msg="Loading containers: start."
Apr 30 00:20:16.897921 kernel: Initializing XFRM netlink socket
Apr 30 00:20:17.010384 systemd-networkd[1223]: docker0: Link UP
Apr 30 00:20:17.050256 dockerd[1823]: time="2025-04-30T00:20:17.050208310Z" level=info msg="Loading containers: done."
Apr 30 00:20:17.079838 dockerd[1823]: time="2025-04-30T00:20:17.079757574Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:20:17.080141 dockerd[1823]: time="2025-04-30T00:20:17.080106156Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Apr 30 00:20:17.080311 dockerd[1823]: time="2025-04-30T00:20:17.080287683Z" level=info msg="Daemon has completed initialization"
Apr 30 00:20:17.120962 dockerd[1823]: time="2025-04-30T00:20:17.120845864Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:20:17.121570 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:20:18.154671 containerd[1593]: time="2025-04-30T00:20:18.154287784Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 00:20:18.748315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount783942083.mount: Deactivated successfully.
Apr 30 00:20:20.192893 containerd[1593]: time="2025-04-30T00:20:20.191651275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:20.195731 containerd[1593]: time="2025-04-30T00:20:20.195658416Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
Apr 30 00:20:20.197965 containerd[1593]: time="2025-04-30T00:20:20.196038973Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:20.200802 containerd[1593]: time="2025-04-30T00:20:20.200753404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:20.202832 containerd[1593]: time="2025-04-30T00:20:20.202746445Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.048308039s"
Apr 30 00:20:20.203129 containerd[1593]: time="2025-04-30T00:20:20.203096310Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
Apr 30 00:20:20.240765 containerd[1593]: time="2025-04-30T00:20:20.240702120Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 00:20:21.876899 containerd[1593]: time="2025-04-30T00:20:21.876808919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:21.878197 containerd[1593]: time="2025-04-30T00:20:21.878138254Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
Apr 30 00:20:21.878702 containerd[1593]: time="2025-04-30T00:20:21.878653837Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:21.883339 containerd[1593]: time="2025-04-30T00:20:21.883256818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:21.889263 containerd[1593]: time="2025-04-30T00:20:21.888798385Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.647671757s"
Apr 30 00:20:21.889263 containerd[1593]: time="2025-04-30T00:20:21.888899273Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
Apr 30 00:20:21.922744 containerd[1593]: time="2025-04-30T00:20:21.922648235Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 00:20:22.932886 containerd[1593]: time="2025-04-30T00:20:22.932023371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:22.932886 containerd[1593]: time="2025-04-30T00:20:22.932781647Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
Apr 30 00:20:22.934094 containerd[1593]: time="2025-04-30T00:20:22.933259175Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:22.937509 containerd[1593]: time="2025-04-30T00:20:22.936018915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:22.937509 containerd[1593]: time="2025-04-30T00:20:22.937070483Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.014096572s"
Apr 30 00:20:22.937509 containerd[1593]: time="2025-04-30T00:20:22.937104988Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
Apr 30 00:20:22.974313 containerd[1593]: time="2025-04-30T00:20:22.974276045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 00:20:24.048729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446293369.mount: Deactivated successfully.
Apr 30 00:20:24.573957 containerd[1593]: time="2025-04-30T00:20:24.573891922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:24.575156 containerd[1593]: time="2025-04-30T00:20:24.575031872Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
Apr 30 00:20:24.576087 containerd[1593]: time="2025-04-30T00:20:24.575696792Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:24.578568 containerd[1593]: time="2025-04-30T00:20:24.578518826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:24.579730 containerd[1593]: time="2025-04-30T00:20:24.579681811Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.605156486s"
Apr 30 00:20:24.579923 containerd[1593]: time="2025-04-30T00:20:24.579895229Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
Apr 30 00:20:24.613349 containerd[1593]: time="2025-04-30T00:20:24.613299171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 00:20:24.615213 systemd-resolved[1480]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Apr 30 00:20:24.968105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:20:24.976225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:20:25.064909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520053995.mount: Deactivated successfully.
Apr 30 00:20:25.222173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:20:25.233442 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:20:25.345897 kubelet[2134]: E0430 00:20:25.345312 2134 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:20:25.354190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:20:25.354404 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:20:26.084909 containerd[1593]: time="2025-04-30T00:20:26.084825954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:26.086389 containerd[1593]: time="2025-04-30T00:20:26.086007679Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Apr 30 00:20:26.087005 containerd[1593]: time="2025-04-30T00:20:26.086958867Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:26.090514 containerd[1593]: time="2025-04-30T00:20:26.090446556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:26.093253 containerd[1593]: time="2025-04-30T00:20:26.092453538Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.478845765s"
Apr 30 00:20:26.093253 containerd[1593]: time="2025-04-30T00:20:26.092521396Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 00:20:26.128052 containerd[1593]: time="2025-04-30T00:20:26.127984208Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 00:20:26.526547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138777668.mount: Deactivated successfully.
Apr 30 00:20:26.531703 containerd[1593]: time="2025-04-30T00:20:26.531650607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:26.532511 containerd[1593]: time="2025-04-30T00:20:26.532436398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Apr 30 00:20:26.533226 containerd[1593]: time="2025-04-30T00:20:26.533165731Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:26.536930 containerd[1593]: time="2025-04-30T00:20:26.535893128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:26.536930 containerd[1593]: time="2025-04-30T00:20:26.536772427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 408.7367ms"
Apr 30 00:20:26.536930 containerd[1593]: time="2025-04-30T00:20:26.536804852Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Apr 30 00:20:26.565319 containerd[1593]: time="2025-04-30T00:20:26.565258919Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 00:20:27.003363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441353045.mount: Deactivated successfully.
Apr 30 00:20:27.709102 systemd-resolved[1480]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Apr 30 00:20:28.908654 containerd[1593]: time="2025-04-30T00:20:28.908468739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:20:28.910496 containerd[1593]: time="2025-04-30T00:20:28.910406290Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Apr 30 00:20:28.911453 containerd[1593]: time="2025-04-30T00:20:28.911403630Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:20:28.915541 containerd[1593]: time="2025-04-30T00:20:28.914708365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:20:28.916277 containerd[1593]: time="2025-04-30T00:20:28.916235066Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.350676669s" Apr 30 00:20:28.916277 containerd[1593]: time="2025-04-30T00:20:28.916276298Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 00:20:32.149433 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:20:32.163300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:20:32.199255 systemd[1]: Reloading requested from client PID 2298 ('systemctl') (unit session-7.scope)... Apr 30 00:20:32.199286 systemd[1]: Reloading... 
Apr 30 00:20:32.343908 zram_generator::config[2338]: No configuration found. Apr 30 00:20:32.513150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:20:32.604912 systemd[1]: Reloading finished in 405 ms. Apr 30 00:20:32.674335 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:20:32.675504 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:20:32.676663 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:20:32.677095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:20:32.696482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:20:32.827095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:20:32.833423 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:20:32.890779 kubelet[2407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:20:32.890779 kubelet[2407]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:20:32.890779 kubelet[2407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:20:32.893464 kubelet[2407]: I0430 00:20:32.893344 2407 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:20:33.101095 kubelet[2407]: I0430 00:20:33.100952 2407 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:20:33.101095 kubelet[2407]: I0430 00:20:33.100994 2407 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:20:33.101658 kubelet[2407]: I0430 00:20:33.101306 2407 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:20:33.123521 kubelet[2407]: I0430 00:20:33.123127 2407 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:20:33.124012 kubelet[2407]: E0430 00:20:33.123752 2407 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.146.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:33.147890 kubelet[2407]: I0430 00:20:33.147328 2407 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:20:33.148747 kubelet[2407]: I0430 00:20:33.148687 2407 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:20:33.149158 kubelet[2407]: I0430 00:20:33.148741 2407 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.3-1-91c0161c2f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:20:33.149893 kubelet[2407]: I0430 00:20:33.149823 2407 topology_manager.go:138] "Creating topology manager with none policy" 
Apr 30 00:20:33.149893 kubelet[2407]: I0430 00:20:33.149876 2407 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:20:33.150074 kubelet[2407]: I0430 00:20:33.150060 2407 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:20:33.151208 kubelet[2407]: I0430 00:20:33.151173 2407 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:20:33.151208 kubelet[2407]: I0430 00:20:33.151203 2407 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:20:33.152941 kubelet[2407]: I0430 00:20:33.151241 2407 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:20:33.152941 kubelet[2407]: I0430 00:20:33.151266 2407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:20:33.154509 kubelet[2407]: W0430 00:20:33.154441 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.146.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:33.154738 kubelet[2407]: E0430 00:20:33.154715 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.146.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:33.155397 kubelet[2407]: I0430 00:20:33.155094 2407 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:20:33.157880 kubelet[2407]: I0430 00:20:33.157090 2407 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:20:33.157880 kubelet[2407]: W0430 00:20:33.157196 2407 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 00:20:33.158178 kubelet[2407]: I0430 00:20:33.158155 2407 server.go:1264] "Started kubelet" Apr 30 00:20:33.162144 kubelet[2407]: W0430 00:20:33.161847 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.146.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-1-91c0161c2f&limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:33.162144 kubelet[2407]: E0430 00:20:33.161931 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.146.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-1-91c0161c2f&limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:33.162144 kubelet[2407]: I0430 00:20:33.162030 2407 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:20:33.164391 kubelet[2407]: I0430 00:20:33.164318 2407 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:20:33.167839 kubelet[2407]: I0430 00:20:33.167333 2407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:20:33.169882 kubelet[2407]: I0430 00:20:33.168422 2407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:20:33.169882 kubelet[2407]: I0430 00:20:33.168771 2407 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:20:33.169882 kubelet[2407]: E0430 00:20:33.169056 2407 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.146.79:6443/api/v1/namespaces/default/events\": dial tcp 146.190.146.79:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.3-1-91c0161c2f.183af0ad6a039130 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.3-1-91c0161c2f,UID:ci-4152.2.3-1-91c0161c2f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.3-1-91c0161c2f,},FirstTimestamp:2025-04-30 00:20:33.158115632 +0000 UTC m=+0.319704692,LastTimestamp:2025-04-30 00:20:33.158115632 +0000 UTC m=+0.319704692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.3-1-91c0161c2f,}" Apr 30 00:20:33.173933 kubelet[2407]: E0430 00:20:33.173884 2407 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.3-1-91c0161c2f\" not found" Apr 30 00:20:33.174168 kubelet[2407]: I0430 00:20:33.174155 2407 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:20:33.174387 kubelet[2407]: I0430 00:20:33.174366 2407 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:20:33.174621 kubelet[2407]: I0430 00:20:33.174603 2407 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:20:33.175432 kubelet[2407]: W0430 00:20:33.175379 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.146.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:33.175570 kubelet[2407]: E0430 00:20:33.175558 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.146.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:33.176291 kubelet[2407]: E0430 00:20:33.176263 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://146.190.146.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-1-91c0161c2f?timeout=10s\": dial tcp 146.190.146.79:6443: connect: connection refused" interval="200ms" Apr 30 00:20:33.177053 kubelet[2407]: E0430 00:20:33.177025 2407 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:20:33.177790 kubelet[2407]: I0430 00:20:33.177769 2407 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:20:33.178005 kubelet[2407]: I0430 00:20:33.177982 2407 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:20:33.180371 kubelet[2407]: I0430 00:20:33.180344 2407 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:20:33.203300 kubelet[2407]: I0430 00:20:33.203231 2407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:20:33.205119 kubelet[2407]: I0430 00:20:33.205071 2407 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:20:33.205119 kubelet[2407]: I0430 00:20:33.205121 2407 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:20:33.205306 kubelet[2407]: I0430 00:20:33.205163 2407 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:20:33.205306 kubelet[2407]: E0430 00:20:33.205251 2407 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:20:33.214037 kubelet[2407]: W0430 00:20:33.213797 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.146.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:33.214037 kubelet[2407]: E0430 00:20:33.213973 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.146.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:33.228694 kubelet[2407]: I0430 00:20:33.228639 2407 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:20:33.228694 kubelet[2407]: I0430 00:20:33.228657 2407 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:20:33.228694 kubelet[2407]: I0430 00:20:33.228684 2407 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:20:33.230449 kubelet[2407]: I0430 00:20:33.230397 2407 policy_none.go:49] "None policy: Start" Apr 30 00:20:33.231600 kubelet[2407]: I0430 00:20:33.231549 2407 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:20:33.232133 kubelet[2407]: I0430 00:20:33.231743 2407 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:20:33.237916 kubelet[2407]: I0430 00:20:33.237868 2407 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:20:33.239876 kubelet[2407]: I0430 00:20:33.238306 2407 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:20:33.239876 kubelet[2407]: I0430 00:20:33.238502 2407 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:20:33.243740 kubelet[2407]: E0430 00:20:33.243711 2407 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.3-1-91c0161c2f\" not found" Apr 30 00:20:33.276569 kubelet[2407]: I0430 00:20:33.276533 2407 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.277283 kubelet[2407]: E0430 00:20:33.277244 2407 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.146.79:6443/api/v1/nodes\": dial tcp 146.190.146.79:6443: connect: connection refused" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.305825 kubelet[2407]: I0430 00:20:33.305709 2407 topology_manager.go:215] "Topology Admit Handler" podUID="623f7eba2f5aad39f0d80687b9485b90" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.307557 kubelet[2407]: I0430 00:20:33.307516 2407 topology_manager.go:215] "Topology Admit Handler" podUID="043fb42851e646ca0afb9272284eb982" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.309209 kubelet[2407]: I0430 00:20:33.308969 2407 topology_manager.go:215] "Topology Admit Handler" podUID="f398de580ed3e748a7d587a6579bbeb3" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.376086 kubelet[2407]: I0430 00:20:33.375609 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/043fb42851e646ca0afb9272284eb982-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.3-1-91c0161c2f\" (UID: \"043fb42851e646ca0afb9272284eb982\") " pod="kube-system/kube-apiserver-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.376086 kubelet[2407]: I0430 00:20:33.375684 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f398de580ed3e748a7d587a6579bbeb3-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.3-1-91c0161c2f\" (UID: \"f398de580ed3e748a7d587a6579bbeb3\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.376086 kubelet[2407]: I0430 00:20:33.375722 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f398de580ed3e748a7d587a6579bbeb3-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.3-1-91c0161c2f\" (UID: \"f398de580ed3e748a7d587a6579bbeb3\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.376086 kubelet[2407]: I0430 00:20:33.375752 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f398de580ed3e748a7d587a6579bbeb3-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.3-1-91c0161c2f\" (UID: \"f398de580ed3e748a7d587a6579bbeb3\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.376086 kubelet[2407]: I0430 00:20:33.375789 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f398de580ed3e748a7d587a6579bbeb3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.3-1-91c0161c2f\" (UID: \"f398de580ed3e748a7d587a6579bbeb3\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" 
Apr 30 00:20:33.376484 kubelet[2407]: I0430 00:20:33.375824 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/623f7eba2f5aad39f0d80687b9485b90-kubeconfig\") pod \"kube-scheduler-ci-4152.2.3-1-91c0161c2f\" (UID: \"623f7eba2f5aad39f0d80687b9485b90\") " pod="kube-system/kube-scheduler-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.376484 kubelet[2407]: I0430 00:20:33.375889 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/043fb42851e646ca0afb9272284eb982-ca-certs\") pod \"kube-apiserver-ci-4152.2.3-1-91c0161c2f\" (UID: \"043fb42851e646ca0afb9272284eb982\") " pod="kube-system/kube-apiserver-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.376484 kubelet[2407]: I0430 00:20:33.375923 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/043fb42851e646ca0afb9272284eb982-k8s-certs\") pod \"kube-apiserver-ci-4152.2.3-1-91c0161c2f\" (UID: \"043fb42851e646ca0afb9272284eb982\") " pod="kube-system/kube-apiserver-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.376484 kubelet[2407]: I0430 00:20:33.375971 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f398de580ed3e748a7d587a6579bbeb3-ca-certs\") pod \"kube-controller-manager-ci-4152.2.3-1-91c0161c2f\" (UID: \"f398de580ed3e748a7d587a6579bbeb3\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.377880 kubelet[2407]: E0430 00:20:33.377682 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.146.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-1-91c0161c2f?timeout=10s\": dial tcp 146.190.146.79:6443: connect: connection refused" 
interval="400ms" Apr 30 00:20:33.479701 kubelet[2407]: I0430 00:20:33.479660 2407 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.480149 kubelet[2407]: E0430 00:20:33.480036 2407 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.146.79:6443/api/v1/nodes\": dial tcp 146.190.146.79:6443: connect: connection refused" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.616376 kubelet[2407]: E0430 00:20:33.616295 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:33.617143 kubelet[2407]: E0430 00:20:33.616772 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:33.617238 containerd[1593]: time="2025-04-30T00:20:33.617125475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.3-1-91c0161c2f,Uid:623f7eba2f5aad39f0d80687b9485b90,Namespace:kube-system,Attempt:0,}" Apr 30 00:20:33.618068 containerd[1593]: time="2025-04-30T00:20:33.617975186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.3-1-91c0161c2f,Uid:043fb42851e646ca0afb9272284eb982,Namespace:kube-system,Attempt:0,}" Apr 30 00:20:33.620709 kubelet[2407]: E0430 00:20:33.620646 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:33.621018 systemd-resolved[1480]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Apr 30 00:20:33.623185 containerd[1593]: time="2025-04-30T00:20:33.623027079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.3-1-91c0161c2f,Uid:f398de580ed3e748a7d587a6579bbeb3,Namespace:kube-system,Attempt:0,}" Apr 30 00:20:33.778465 kubelet[2407]: E0430 00:20:33.778387 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.146.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-1-91c0161c2f?timeout=10s\": dial tcp 146.190.146.79:6443: connect: connection refused" interval="800ms" Apr 30 00:20:33.882967 kubelet[2407]: I0430 00:20:33.882520 2407 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:33.883356 kubelet[2407]: E0430 00:20:33.883292 2407 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.146.79:6443/api/v1/nodes\": dial tcp 146.190.146.79:6443: connect: connection refused" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:34.003615 kubelet[2407]: W0430 00:20:34.003411 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.146.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:34.003615 kubelet[2407]: E0430 00:20:34.003516 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.146.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:34.078967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739769984.mount: Deactivated successfully. 
Apr 30 00:20:34.083896 containerd[1593]: time="2025-04-30T00:20:34.083103384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:20:34.084703 containerd[1593]: time="2025-04-30T00:20:34.084641679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 00:20:34.086073 containerd[1593]: time="2025-04-30T00:20:34.086025614Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:20:34.087686 containerd[1593]: time="2025-04-30T00:20:34.087622809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:20:34.092523 containerd[1593]: time="2025-04-30T00:20:34.089731143Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:20:34.096015 containerd[1593]: time="2025-04-30T00:20:34.095937413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:20:34.097729 containerd[1593]: time="2025-04-30T00:20:34.097479091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:20:34.097729 containerd[1593]: time="2025-04-30T00:20:34.097661133Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:20:34.098926 
containerd[1593]: time="2025-04-30T00:20:34.098603151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.947829ms" Apr 30 00:20:34.109920 containerd[1593]: time="2025-04-30T00:20:34.108819332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.163482ms" Apr 30 00:20:34.124975 containerd[1593]: time="2025-04-30T00:20:34.123720144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.551834ms" Apr 30 00:20:34.175452 kubelet[2407]: W0430 00:20:34.175363 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.146.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-1-91c0161c2f&limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:34.175686 kubelet[2407]: E0430 00:20:34.175661 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.146.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.3-1-91c0161c2f&limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:34.296666 containerd[1593]: 
time="2025-04-30T00:20:34.296442316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:20:34.296666 containerd[1593]: time="2025-04-30T00:20:34.296524352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:20:34.296666 containerd[1593]: time="2025-04-30T00:20:34.296548165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:34.297105 containerd[1593]: time="2025-04-30T00:20:34.296666375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:34.299425 containerd[1593]: time="2025-04-30T00:20:34.299044747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:20:34.299425 containerd[1593]: time="2025-04-30T00:20:34.299144156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:20:34.299425 containerd[1593]: time="2025-04-30T00:20:34.299169988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:34.299425 containerd[1593]: time="2025-04-30T00:20:34.299310879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:34.304337 containerd[1593]: time="2025-04-30T00:20:34.304216070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:20:34.305580 containerd[1593]: time="2025-04-30T00:20:34.305511969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:20:34.305811 containerd[1593]: time="2025-04-30T00:20:34.305775487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:34.306073 containerd[1593]: time="2025-04-30T00:20:34.306043560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:34.431648 kubelet[2407]: W0430 00:20:34.431214 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.146.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:34.431648 kubelet[2407]: E0430 00:20:34.431290 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.146.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:34.431648 kubelet[2407]: W0430 00:20:34.431502 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.146.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:34.431648 kubelet[2407]: E0430 00:20:34.431534 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.146.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.146.79:6443: connect: connection refused Apr 30 00:20:34.437909 containerd[1593]: time="2025-04-30T00:20:34.437865573Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.3-1-91c0161c2f,Uid:043fb42851e646ca0afb9272284eb982,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfcb5a45d6b861c5ad2640d5d0c330c4b0e12e42b16190471510aeddf926f337\"" Apr 30 00:20:34.440477 kubelet[2407]: E0430 00:20:34.440139 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:34.450621 containerd[1593]: time="2025-04-30T00:20:34.449867845Z" level=info msg="CreateContainer within sandbox \"cfcb5a45d6b861c5ad2640d5d0c330c4b0e12e42b16190471510aeddf926f337\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:20:34.463384 containerd[1593]: time="2025-04-30T00:20:34.462996490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.3-1-91c0161c2f,Uid:f398de580ed3e748a7d587a6579bbeb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"35a0b6cf92eb8c3c370c89c453740d505d73e9c5ffed647ead9136f381a818d4\"" Apr 30 00:20:34.467652 kubelet[2407]: E0430 00:20:34.466241 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:34.470000 containerd[1593]: time="2025-04-30T00:20:34.469573232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.3-1-91c0161c2f,Uid:623f7eba2f5aad39f0d80687b9485b90,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7073d1c2f9c481fc137c77d51f7d8ec58d364b084b04e7cb79ed672e466f16a\"" Apr 30 00:20:34.470000 containerd[1593]: time="2025-04-30T00:20:34.469588058Z" level=info msg="CreateContainer within sandbox \"35a0b6cf92eb8c3c370c89c453740d505d73e9c5ffed647ead9136f381a818d4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:20:34.470908 kubelet[2407]: E0430 00:20:34.470494 
2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:34.474182 containerd[1593]: time="2025-04-30T00:20:34.474136278Z" level=info msg="CreateContainer within sandbox \"cfcb5a45d6b861c5ad2640d5d0c330c4b0e12e42b16190471510aeddf926f337\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7cd26ea698058754c29f52414e2b7622c8eb910118f638f2a439c4df4da144fb\"" Apr 30 00:20:34.475076 containerd[1593]: time="2025-04-30T00:20:34.475039129Z" level=info msg="CreateContainer within sandbox \"a7073d1c2f9c481fc137c77d51f7d8ec58d364b084b04e7cb79ed672e466f16a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:20:34.476900 containerd[1593]: time="2025-04-30T00:20:34.476016048Z" level=info msg="StartContainer for \"7cd26ea698058754c29f52414e2b7622c8eb910118f638f2a439c4df4da144fb\"" Apr 30 00:20:34.490613 containerd[1593]: time="2025-04-30T00:20:34.490545296Z" level=info msg="CreateContainer within sandbox \"35a0b6cf92eb8c3c370c89c453740d505d73e9c5ffed647ead9136f381a818d4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c78694931e5f30503c29c0a9562926c9c95e05a692d90387ec80eac9a7a704b0\"" Apr 30 00:20:34.491722 containerd[1593]: time="2025-04-30T00:20:34.491684216Z" level=info msg="StartContainer for \"c78694931e5f30503c29c0a9562926c9c95e05a692d90387ec80eac9a7a704b0\"" Apr 30 00:20:34.495413 containerd[1593]: time="2025-04-30T00:20:34.495350544Z" level=info msg="CreateContainer within sandbox \"a7073d1c2f9c481fc137c77d51f7d8ec58d364b084b04e7cb79ed672e466f16a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ceed840118b690ede5974a954fb8fe63165601cc3742e69fcd51145ae7771bf6\"" Apr 30 00:20:34.496468 containerd[1593]: time="2025-04-30T00:20:34.496421490Z" level=info msg="StartContainer for 
\"ceed840118b690ede5974a954fb8fe63165601cc3742e69fcd51145ae7771bf6\"" Apr 30 00:20:34.579065 kubelet[2407]: E0430 00:20:34.579003 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.146.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.3-1-91c0161c2f?timeout=10s\": dial tcp 146.190.146.79:6443: connect: connection refused" interval="1.6s" Apr 30 00:20:34.613332 containerd[1593]: time="2025-04-30T00:20:34.613048249Z" level=info msg="StartContainer for \"7cd26ea698058754c29f52414e2b7622c8eb910118f638f2a439c4df4da144fb\" returns successfully" Apr 30 00:20:34.649294 containerd[1593]: time="2025-04-30T00:20:34.649158999Z" level=info msg="StartContainer for \"c78694931e5f30503c29c0a9562926c9c95e05a692d90387ec80eac9a7a704b0\" returns successfully" Apr 30 00:20:34.680916 containerd[1593]: time="2025-04-30T00:20:34.679401945Z" level=info msg="StartContainer for \"ceed840118b690ede5974a954fb8fe63165601cc3742e69fcd51145ae7771bf6\" returns successfully" Apr 30 00:20:34.687887 kubelet[2407]: I0430 00:20:34.687737 2407 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:34.690558 kubelet[2407]: E0430 00:20:34.690505 2407 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.146.79:6443/api/v1/nodes\": dial tcp 146.190.146.79:6443: connect: connection refused" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:35.236928 kubelet[2407]: E0430 00:20:35.234135 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:35.255022 kubelet[2407]: E0430 00:20:35.254977 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:35.266256 kubelet[2407]: 
E0430 00:20:35.266200 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:36.268518 kubelet[2407]: E0430 00:20:36.268466 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:36.271100 kubelet[2407]: E0430 00:20:36.271057 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:36.293421 kubelet[2407]: I0430 00:20:36.293325 2407 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:36.963322 kubelet[2407]: E0430 00:20:36.963147 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:37.275042 kubelet[2407]: E0430 00:20:37.274945 2407 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.3-1-91c0161c2f\" not found" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:37.342184 kubelet[2407]: I0430 00:20:37.342123 2407 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:38.155562 kubelet[2407]: I0430 00:20:38.155462 2407 apiserver.go:52] "Watching apiserver" Apr 30 00:20:38.175192 kubelet[2407]: I0430 00:20:38.175121 2407 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:20:39.785343 systemd[1]: Reloading requested from client PID 2689 ('systemctl') (unit session-7.scope)... Apr 30 00:20:39.785966 systemd[1]: Reloading... 
Apr 30 00:20:39.895895 zram_generator::config[2724]: No configuration found. Apr 30 00:20:40.107134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:20:40.259375 systemd[1]: Reloading finished in 472 ms. Apr 30 00:20:40.302711 kubelet[2407]: E0430 00:20:40.301977 2407 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4152.2.3-1-91c0161c2f.183af0ad6a039130 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.3-1-91c0161c2f,UID:ci-4152.2.3-1-91c0161c2f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.3-1-91c0161c2f,},FirstTimestamp:2025-04-30 00:20:33.158115632 +0000 UTC m=+0.319704692,LastTimestamp:2025-04-30 00:20:33.158115632 +0000 UTC m=+0.319704692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.3-1-91c0161c2f,}" Apr 30 00:20:40.302239 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:20:40.316651 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:20:40.317466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:20:40.335205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:20:40.527880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:20:40.535404 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:20:40.626233 kubelet[2788]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:20:40.626850 kubelet[2788]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:20:40.627497 kubelet[2788]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:20:40.627497 kubelet[2788]: I0430 00:20:40.627081 2788 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:20:40.633956 kubelet[2788]: I0430 00:20:40.632957 2788 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:20:40.633956 kubelet[2788]: I0430 00:20:40.632990 2788 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:20:40.633956 kubelet[2788]: I0430 00:20:40.633259 2788 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:20:40.635041 kubelet[2788]: I0430 00:20:40.635011 2788 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:20:40.636450 kubelet[2788]: I0430 00:20:40.636392 2788 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:20:40.646837 kubelet[2788]: I0430 00:20:40.646703 2788 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:20:40.647885 kubelet[2788]: I0430 00:20:40.647792 2788 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:20:40.648032 kubelet[2788]: I0430 00:20:40.647833 2788 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.3-1-91c0161c2f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:20:40.648201 kubelet[2788]: I0430 00:20:40.648052 2788 topology_manager.go:138] "Creating topology manager with none policy" 
Apr 30 00:20:40.648201 kubelet[2788]: I0430 00:20:40.648069 2788 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:20:40.648201 kubelet[2788]: I0430 00:20:40.648128 2788 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:20:40.648994 kubelet[2788]: I0430 00:20:40.648248 2788 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:20:40.648994 kubelet[2788]: I0430 00:20:40.648263 2788 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:20:40.648994 kubelet[2788]: I0430 00:20:40.648286 2788 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:20:40.648994 kubelet[2788]: I0430 00:20:40.648321 2788 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:20:40.649900 kubelet[2788]: I0430 00:20:40.649846 2788 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:20:40.650244 kubelet[2788]: I0430 00:20:40.650221 2788 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:20:40.651272 kubelet[2788]: I0430 00:20:40.651246 2788 server.go:1264] "Started kubelet" Apr 30 00:20:40.657608 kubelet[2788]: I0430 00:20:40.656614 2788 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:20:40.658493 kubelet[2788]: I0430 00:20:40.658290 2788 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:20:40.660806 kubelet[2788]: I0430 00:20:40.660774 2788 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:20:40.661323 kubelet[2788]: I0430 00:20:40.661258 2788 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:20:40.662099 kubelet[2788]: I0430 00:20:40.661661 2788 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:20:40.672509 kubelet[2788]: I0430 00:20:40.672480 2788 
volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:20:40.681870 kubelet[2788]: I0430 00:20:40.680807 2788 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:20:40.683538 kubelet[2788]: I0430 00:20:40.683018 2788 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:20:40.695256 kubelet[2788]: I0430 00:20:40.695223 2788 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:20:40.695903 kubelet[2788]: I0430 00:20:40.695878 2788 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:20:40.702177 kubelet[2788]: E0430 00:20:40.701110 2788 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:20:40.705448 kubelet[2788]: I0430 00:20:40.705419 2788 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:20:40.720158 kubelet[2788]: I0430 00:20:40.720097 2788 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:20:40.721476 kubelet[2788]: I0430 00:20:40.721433 2788 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:20:40.721476 kubelet[2788]: I0430 00:20:40.721479 2788 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:20:40.721685 kubelet[2788]: I0430 00:20:40.721498 2788 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:20:40.721685 kubelet[2788]: E0430 00:20:40.721546 2788 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:20:40.778212 kubelet[2788]: I0430 00:20:40.777731 2788 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:40.796324 kubelet[2788]: I0430 00:20:40.794219 2788 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:40.796324 kubelet[2788]: I0430 00:20:40.794336 2788 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:40.808069 sudo[2819]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 00:20:40.809134 sudo[2819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 00:20:40.822246 kubelet[2788]: E0430 00:20:40.822099 2788 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 00:20:40.837339 kubelet[2788]: I0430 00:20:40.837283 2788 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:20:40.837339 kubelet[2788]: I0430 00:20:40.837331 2788 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:20:40.837567 kubelet[2788]: I0430 00:20:40.837364 2788 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:20:40.837567 kubelet[2788]: I0430 00:20:40.837555 2788 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:20:40.837640 kubelet[2788]: I0430 00:20:40.837567 2788 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 
00:20:40.837640 kubelet[2788]: I0430 00:20:40.837587 2788 policy_none.go:49] "None policy: Start" Apr 30 00:20:40.838773 kubelet[2788]: I0430 00:20:40.838348 2788 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:20:40.838773 kubelet[2788]: I0430 00:20:40.838393 2788 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:20:40.839168 kubelet[2788]: I0430 00:20:40.838975 2788 state_mem.go:75] "Updated machine memory state" Apr 30 00:20:40.841553 kubelet[2788]: I0430 00:20:40.841148 2788 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:20:40.841553 kubelet[2788]: I0430 00:20:40.841468 2788 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:20:40.845190 kubelet[2788]: I0430 00:20:40.844376 2788 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:20:41.023828 kubelet[2788]: I0430 00:20:41.023151 2788 topology_manager.go:215] "Topology Admit Handler" podUID="043fb42851e646ca0afb9272284eb982" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.023828 kubelet[2788]: I0430 00:20:41.023302 2788 topology_manager.go:215] "Topology Admit Handler" podUID="f398de580ed3e748a7d587a6579bbeb3" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.023828 kubelet[2788]: I0430 00:20:41.023382 2788 topology_manager.go:215] "Topology Admit Handler" podUID="623f7eba2f5aad39f0d80687b9485b90" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.041569 kubelet[2788]: W0430 00:20:41.040408 2788 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 00:20:41.041569 kubelet[2788]: W0430 00:20:41.040486 2788 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 00:20:41.041569 kubelet[2788]: W0430 00:20:41.041188 2788 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 00:20:41.086376 kubelet[2788]: I0430 00:20:41.085878 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/043fb42851e646ca0afb9272284eb982-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.3-1-91c0161c2f\" (UID: \"043fb42851e646ca0afb9272284eb982\") " pod="kube-system/kube-apiserver-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.086376 kubelet[2788]: I0430 00:20:41.085951 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f398de580ed3e748a7d587a6579bbeb3-ca-certs\") pod \"kube-controller-manager-ci-4152.2.3-1-91c0161c2f\" (UID: \"f398de580ed3e748a7d587a6579bbeb3\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.086376 kubelet[2788]: I0430 00:20:41.086003 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f398de580ed3e748a7d587a6579bbeb3-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.3-1-91c0161c2f\" (UID: \"f398de580ed3e748a7d587a6579bbeb3\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.086376 kubelet[2788]: I0430 00:20:41.086036 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/623f7eba2f5aad39f0d80687b9485b90-kubeconfig\") pod \"kube-scheduler-ci-4152.2.3-1-91c0161c2f\" (UID: \"623f7eba2f5aad39f0d80687b9485b90\") " 
pod="kube-system/kube-scheduler-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.086376 kubelet[2788]: I0430 00:20:41.086065 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/043fb42851e646ca0afb9272284eb982-k8s-certs\") pod \"kube-apiserver-ci-4152.2.3-1-91c0161c2f\" (UID: \"043fb42851e646ca0afb9272284eb982\") " pod="kube-system/kube-apiserver-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.086759 kubelet[2788]: I0430 00:20:41.086092 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f398de580ed3e748a7d587a6579bbeb3-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.3-1-91c0161c2f\" (UID: \"f398de580ed3e748a7d587a6579bbeb3\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.086759 kubelet[2788]: I0430 00:20:41.086117 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f398de580ed3e748a7d587a6579bbeb3-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.3-1-91c0161c2f\" (UID: \"f398de580ed3e748a7d587a6579bbeb3\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.086759 kubelet[2788]: I0430 00:20:41.086144 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f398de580ed3e748a7d587a6579bbeb3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.3-1-91c0161c2f\" (UID: \"f398de580ed3e748a7d587a6579bbeb3\") " pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.086759 kubelet[2788]: I0430 00:20:41.086171 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/043fb42851e646ca0afb9272284eb982-ca-certs\") pod \"kube-apiserver-ci-4152.2.3-1-91c0161c2f\" (UID: \"043fb42851e646ca0afb9272284eb982\") " pod="kube-system/kube-apiserver-ci-4152.2.3-1-91c0161c2f" Apr 30 00:20:41.344170 kubelet[2788]: E0430 00:20:41.343006 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:41.344170 kubelet[2788]: E0430 00:20:41.343019 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:41.344170 kubelet[2788]: E0430 00:20:41.343272 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:41.529610 sudo[2819]: pam_unix(sudo:session): session closed for user root Apr 30 00:20:41.656552 kubelet[2788]: I0430 00:20:41.656249 2788 apiserver.go:52] "Watching apiserver" Apr 30 00:20:41.681469 kubelet[2788]: I0430 00:20:41.681355 2788 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:20:41.749624 kubelet[2788]: E0430 00:20:41.748101 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:41.753650 kubelet[2788]: E0430 00:20:41.753610 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:41.758079 kubelet[2788]: E0430 00:20:41.758029 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:41.809916 kubelet[2788]: I0430 00:20:41.809305 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.3-1-91c0161c2f" podStartSLOduration=0.809284007 podStartE2EDuration="809.284007ms" podCreationTimestamp="2025-04-30 00:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:20:41.796323076 +0000 UTC m=+1.248570644" watchObservedRunningTime="2025-04-30 00:20:41.809284007 +0000 UTC m=+1.261531571" Apr 30 00:20:41.824891 kubelet[2788]: I0430 00:20:41.823777 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.3-1-91c0161c2f" podStartSLOduration=0.823754189 podStartE2EDuration="823.754189ms" podCreationTimestamp="2025-04-30 00:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:20:41.810004844 +0000 UTC m=+1.262252409" watchObservedRunningTime="2025-04-30 00:20:41.823754189 +0000 UTC m=+1.276001748" Apr 30 00:20:41.838227 kubelet[2788]: I0430 00:20:41.838134 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.3-1-91c0161c2f" podStartSLOduration=0.838108828 podStartE2EDuration="838.108828ms" podCreationTimestamp="2025-04-30 00:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:20:41.824235929 +0000 UTC m=+1.276483496" watchObservedRunningTime="2025-04-30 00:20:41.838108828 +0000 UTC m=+1.290356398" Apr 30 00:20:42.751045 kubelet[2788]: E0430 00:20:42.749882 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:42.752149 kubelet[2788]: E0430 00:20:42.751744 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:43.054257 sudo[1806]: pam_unix(sudo:session): session closed for user root Apr 30 00:20:43.058919 sshd[1805]: Connection closed by 147.75.109.163 port 43450 Apr 30 00:20:43.059724 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Apr 30 00:20:43.067197 systemd[1]: sshd@6-146.190.146.79:22-147.75.109.163:43450.service: Deactivated successfully. Apr 30 00:20:43.072417 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:20:43.074598 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:20:43.076547 systemd-logind[1569]: Removed session 7. Apr 30 00:20:45.016244 kubelet[2788]: E0430 00:20:45.015763 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:45.754888 kubelet[2788]: E0430 00:20:45.754672 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:46.756431 kubelet[2788]: E0430 00:20:46.756395 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:50.779683 kubelet[2788]: E0430 00:20:50.779468 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:51.765338 kubelet[2788]: E0430 00:20:51.765109 2788 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:52.618895 kubelet[2788]: E0430 00:20:52.618231 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:55.081235 kubelet[2788]: I0430 00:20:55.081090 2788 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:20:55.084422 containerd[1593]: time="2025-04-30T00:20:55.084272511Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:20:55.087029 kubelet[2788]: I0430 00:20:55.084597 2788 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:20:55.164889 kubelet[2788]: I0430 00:20:55.161976 2788 topology_manager.go:215] "Topology Admit Handler" podUID="65a9c2be-30a4-424f-94b1-5a304643461e" podNamespace="kube-system" podName="kube-proxy-hm7dt" Apr 30 00:20:55.166889 kubelet[2788]: I0430 00:20:55.165547 2788 topology_manager.go:215] "Topology Admit Handler" podUID="2a74cd86-f205-4075-8071-2eb6c8eb4379" podNamespace="kube-system" podName="cilium-operator-599987898-d6fjj" Apr 30 00:20:55.184063 kubelet[2788]: I0430 00:20:55.184000 2788 topology_manager.go:215] "Topology Admit Handler" podUID="0d58bc04-0534-44ce-b01d-be955be7c0bb" podNamespace="kube-system" podName="cilium-wlsw9" Apr 30 00:20:55.282209 kubelet[2788]: I0430 00:20:55.282136 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-cgroup\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.282209 
kubelet[2788]: I0430 00:20:55.282208 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82tqq\" (UniqueName: \"kubernetes.io/projected/0d58bc04-0534-44ce-b01d-be955be7c0bb-kube-api-access-82tqq\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.282493 kubelet[2788]: I0430 00:20:55.282249 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a74cd86-f205-4075-8071-2eb6c8eb4379-cilium-config-path\") pod \"cilium-operator-599987898-d6fjj\" (UID: \"2a74cd86-f205-4075-8071-2eb6c8eb4379\") " pod="kube-system/cilium-operator-599987898-d6fjj" Apr 30 00:20:55.282493 kubelet[2788]: I0430 00:20:55.282272 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-host-proc-sys-kernel\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.282493 kubelet[2788]: I0430 00:20:55.282296 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cni-path\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.282493 kubelet[2788]: I0430 00:20:55.282323 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-hostproc\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.282493 kubelet[2788]: I0430 00:20:55.282347 2788 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65a9c2be-30a4-424f-94b1-5a304643461e-xtables-lock\") pod \"kube-proxy-hm7dt\" (UID: \"65a9c2be-30a4-424f-94b1-5a304643461e\") " pod="kube-system/kube-proxy-hm7dt" Apr 30 00:20:55.282792 kubelet[2788]: I0430 00:20:55.282371 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65a9c2be-30a4-424f-94b1-5a304643461e-lib-modules\") pod \"kube-proxy-hm7dt\" (UID: \"65a9c2be-30a4-424f-94b1-5a304643461e\") " pod="kube-system/kube-proxy-hm7dt" Apr 30 00:20:55.282792 kubelet[2788]: I0430 00:20:55.282399 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srv6p\" (UniqueName: \"kubernetes.io/projected/65a9c2be-30a4-424f-94b1-5a304643461e-kube-api-access-srv6p\") pod \"kube-proxy-hm7dt\" (UID: \"65a9c2be-30a4-424f-94b1-5a304643461e\") " pod="kube-system/kube-proxy-hm7dt" Apr 30 00:20:55.282792 kubelet[2788]: I0430 00:20:55.282422 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-bpf-maps\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.282792 kubelet[2788]: I0430 00:20:55.282445 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-host-proc-sys-net\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.282792 kubelet[2788]: I0430 00:20:55.282474 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8pz\" 
(UniqueName: \"kubernetes.io/projected/2a74cd86-f205-4075-8071-2eb6c8eb4379-kube-api-access-9g8pz\") pod \"cilium-operator-599987898-d6fjj\" (UID: \"2a74cd86-f205-4075-8071-2eb6c8eb4379\") " pod="kube-system/cilium-operator-599987898-d6fjj" Apr 30 00:20:55.283065 kubelet[2788]: I0430 00:20:55.282654 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-etc-cni-netd\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.283065 kubelet[2788]: I0430 00:20:55.282696 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-config-path\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.283065 kubelet[2788]: I0430 00:20:55.282726 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-run\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.283065 kubelet[2788]: I0430 00:20:55.282755 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-lib-modules\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.283065 kubelet[2788]: I0430 00:20:55.282786 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-xtables-lock\") 
pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.283065 kubelet[2788]: I0430 00:20:55.282814 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d58bc04-0534-44ce-b01d-be955be7c0bb-clustermesh-secrets\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.283319 kubelet[2788]: I0430 00:20:55.282838 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/65a9c2be-30a4-424f-94b1-5a304643461e-kube-proxy\") pod \"kube-proxy-hm7dt\" (UID: \"65a9c2be-30a4-424f-94b1-5a304643461e\") " pod="kube-system/kube-proxy-hm7dt" Apr 30 00:20:55.283319 kubelet[2788]: I0430 00:20:55.282881 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d58bc04-0534-44ce-b01d-be955be7c0bb-hubble-tls\") pod \"cilium-wlsw9\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") " pod="kube-system/cilium-wlsw9" Apr 30 00:20:55.479959 kubelet[2788]: E0430 00:20:55.479384 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:55.481891 containerd[1593]: time="2025-04-30T00:20:55.480470881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hm7dt,Uid:65a9c2be-30a4-424f-94b1-5a304643461e,Namespace:kube-system,Attempt:0,}" Apr 30 00:20:55.482094 kubelet[2788]: E0430 00:20:55.481539 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:55.483445 containerd[1593]: 
time="2025-04-30T00:20:55.483263977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-d6fjj,Uid:2a74cd86-f205-4075-8071-2eb6c8eb4379,Namespace:kube-system,Attempt:0,}" Apr 30 00:20:55.514953 kubelet[2788]: E0430 00:20:55.514324 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:55.516748 containerd[1593]: time="2025-04-30T00:20:55.515436857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlsw9,Uid:0d58bc04-0534-44ce-b01d-be955be7c0bb,Namespace:kube-system,Attempt:0,}" Apr 30 00:20:55.560422 containerd[1593]: time="2025-04-30T00:20:55.560280504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:20:55.561745 containerd[1593]: time="2025-04-30T00:20:55.561608139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:20:55.562173 containerd[1593]: time="2025-04-30T00:20:55.562116137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:20:55.562481 containerd[1593]: time="2025-04-30T00:20:55.562379823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:55.562993 containerd[1593]: time="2025-04-30T00:20:55.562426953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:20:55.565384 containerd[1593]: time="2025-04-30T00:20:55.565056377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:55.568789 containerd[1593]: time="2025-04-30T00:20:55.566715417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:55.569283 containerd[1593]: time="2025-04-30T00:20:55.569120451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:55.577696 containerd[1593]: time="2025-04-30T00:20:55.577484279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:20:55.577696 containerd[1593]: time="2025-04-30T00:20:55.577582156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:20:55.577696 containerd[1593]: time="2025-04-30T00:20:55.577613512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:55.578348 containerd[1593]: time="2025-04-30T00:20:55.577812611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:20:55.682358 containerd[1593]: time="2025-04-30T00:20:55.682305311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlsw9,Uid:0d58bc04-0534-44ce-b01d-be955be7c0bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\"" Apr 30 00:20:55.684904 kubelet[2788]: E0430 00:20:55.684647 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:55.690034 containerd[1593]: time="2025-04-30T00:20:55.689842139Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 00:20:55.706002 containerd[1593]: time="2025-04-30T00:20:55.705050675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hm7dt,Uid:65a9c2be-30a4-424f-94b1-5a304643461e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b33541a2e806842734994f7e0a319aa08848283b11cfbea30c1444d5ff4d3216\"" Apr 30 00:20:55.706887 kubelet[2788]: E0430 00:20:55.706744 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:55.713969 containerd[1593]: time="2025-04-30T00:20:55.713646554Z" level=info msg="CreateContainer within sandbox \"b33541a2e806842734994f7e0a319aa08848283b11cfbea30c1444d5ff4d3216\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:20:55.737289 containerd[1593]: time="2025-04-30T00:20:55.737036778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-d6fjj,Uid:2a74cd86-f205-4075-8071-2eb6c8eb4379,Namespace:kube-system,Attempt:0,} returns sandbox id \"20218d8065b07fa32b873c990e55397159df2ac4b6cfd9be3e5c2b6a70dc82a9\"" Apr 30 
00:20:55.740772 kubelet[2788]: E0430 00:20:55.740368 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:55.754197 containerd[1593]: time="2025-04-30T00:20:55.754150789Z" level=info msg="CreateContainer within sandbox \"b33541a2e806842734994f7e0a319aa08848283b11cfbea30c1444d5ff4d3216\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f8db6efef50a706bd960c13dab3ff438c48d531eb8e4ecccd02c27b945e35c7\"" Apr 30 00:20:55.756897 containerd[1593]: time="2025-04-30T00:20:55.756848134Z" level=info msg="StartContainer for \"3f8db6efef50a706bd960c13dab3ff438c48d531eb8e4ecccd02c27b945e35c7\"" Apr 30 00:20:55.843211 containerd[1593]: time="2025-04-30T00:20:55.843141906Z" level=info msg="StartContainer for \"3f8db6efef50a706bd960c13dab3ff438c48d531eb8e4ecccd02c27b945e35c7\" returns successfully" Apr 30 00:20:56.797569 kubelet[2788]: E0430 00:20:56.797525 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:20:57.195171 update_engine[1579]: I20250430 00:20:57.194932 1579 update_attempter.cc:509] Updating boot flags... 
Apr 30 00:20:57.231989 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3147) Apr 30 00:20:57.335047 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3151) Apr 30 00:20:57.434284 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3151) Apr 30 00:20:57.802176 kubelet[2788]: E0430 00:20:57.802133 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:00.775300 kubelet[2788]: I0430 00:21:00.775211 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hm7dt" podStartSLOduration=5.773787097 podStartE2EDuration="5.773787097s" podCreationTimestamp="2025-04-30 00:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:20:56.824146226 +0000 UTC m=+16.276393797" watchObservedRunningTime="2025-04-30 00:21:00.773787097 +0000 UTC m=+20.226034664" Apr 30 00:21:03.455370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280007576.mount: Deactivated successfully. 
Apr 30 00:21:05.859420 containerd[1593]: time="2025-04-30T00:21:05.859223406Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:21:05.862369 containerd[1593]: time="2025-04-30T00:21:05.862299977Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 00:21:05.862658 containerd[1593]: time="2025-04-30T00:21:05.862624074Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:21:05.864727 containerd[1593]: time="2025-04-30T00:21:05.864682470Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.174766211s" Apr 30 00:21:05.864727 containerd[1593]: time="2025-04-30T00:21:05.864726316Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 00:21:05.867824 containerd[1593]: time="2025-04-30T00:21:05.867784680Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 00:21:05.876069 containerd[1593]: time="2025-04-30T00:21:05.875479543Z" level=info msg="CreateContainer within sandbox \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:21:05.953069 containerd[1593]: time="2025-04-30T00:21:05.952914754Z" level=info msg="CreateContainer within sandbox \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\"" Apr 30 00:21:05.954180 containerd[1593]: time="2025-04-30T00:21:05.954144625Z" level=info msg="StartContainer for \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\"" Apr 30 00:21:06.095998 containerd[1593]: time="2025-04-30T00:21:06.095935510Z" level=info msg="StartContainer for \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\" returns successfully" Apr 30 00:21:06.199461 containerd[1593]: time="2025-04-30T00:21:06.168984348Z" level=info msg="shim disconnected" id=16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b namespace=k8s.io Apr 30 00:21:06.199461 containerd[1593]: time="2025-04-30T00:21:06.199175744Z" level=warning msg="cleaning up after shim disconnected" id=16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b namespace=k8s.io Apr 30 00:21:06.199461 containerd[1593]: time="2025-04-30T00:21:06.199198801Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:21:06.872720 kubelet[2788]: E0430 00:21:06.872547 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:06.880551 containerd[1593]: time="2025-04-30T00:21:06.880327252Z" level=info msg="CreateContainer within sandbox \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:21:06.910120 containerd[1593]: time="2025-04-30T00:21:06.910065165Z" level=info msg="CreateContainer within sandbox 
\"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\"" Apr 30 00:21:06.915149 containerd[1593]: time="2025-04-30T00:21:06.915060464Z" level=info msg="StartContainer for \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\"" Apr 30 00:21:06.950345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b-rootfs.mount: Deactivated successfully. Apr 30 00:21:07.002558 systemd[1]: run-containerd-runc-k8s.io-f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca-runc.sIC7Re.mount: Deactivated successfully. Apr 30 00:21:07.046224 containerd[1593]: time="2025-04-30T00:21:07.046058474Z" level=info msg="StartContainer for \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\" returns successfully" Apr 30 00:21:07.061228 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:21:07.061526 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:21:07.061602 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:21:07.069312 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:21:07.107574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:21:07.123755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca-rootfs.mount: Deactivated successfully. 
Apr 30 00:21:07.125119 containerd[1593]: time="2025-04-30T00:21:07.122364449Z" level=info msg="shim disconnected" id=f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca namespace=k8s.io Apr 30 00:21:07.125119 containerd[1593]: time="2025-04-30T00:21:07.124322202Z" level=warning msg="cleaning up after shim disconnected" id=f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca namespace=k8s.io Apr 30 00:21:07.125119 containerd[1593]: time="2025-04-30T00:21:07.124342694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:21:07.874235 kubelet[2788]: E0430 00:21:07.874195 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:07.899981 containerd[1593]: time="2025-04-30T00:21:07.894635726Z" level=info msg="CreateContainer within sandbox \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:21:07.937367 containerd[1593]: time="2025-04-30T00:21:07.936059861Z" level=info msg="CreateContainer within sandbox \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\"" Apr 30 00:21:07.938691 containerd[1593]: time="2025-04-30T00:21:07.938185909Z" level=info msg="StartContainer for \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\"" Apr 30 00:21:08.074663 containerd[1593]: time="2025-04-30T00:21:08.074604025Z" level=info msg="StartContainer for \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\" returns successfully" Apr 30 00:21:08.130831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8-rootfs.mount: Deactivated successfully. 
Apr 30 00:21:08.133400 containerd[1593]: time="2025-04-30T00:21:08.133315698Z" level=info msg="shim disconnected" id=909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8 namespace=k8s.io Apr 30 00:21:08.134564 containerd[1593]: time="2025-04-30T00:21:08.134070697Z" level=warning msg="cleaning up after shim disconnected" id=909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8 namespace=k8s.io Apr 30 00:21:08.134564 containerd[1593]: time="2025-04-30T00:21:08.134109532Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:21:08.559834 containerd[1593]: time="2025-04-30T00:21:08.559762350Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:21:08.560685 containerd[1593]: time="2025-04-30T00:21:08.560640772Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 00:21:08.562707 containerd[1593]: time="2025-04-30T00:21:08.561149118Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:21:08.563911 containerd[1593]: time="2025-04-30T00:21:08.563847968Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.69601989s" Apr 30 00:21:08.563911 containerd[1593]: time="2025-04-30T00:21:08.563908339Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 00:21:08.568619 containerd[1593]: time="2025-04-30T00:21:08.568474234Z" level=info msg="CreateContainer within sandbox \"20218d8065b07fa32b873c990e55397159df2ac4b6cfd9be3e5c2b6a70dc82a9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:21:08.584903 containerd[1593]: time="2025-04-30T00:21:08.584725513Z" level=info msg="CreateContainer within sandbox \"20218d8065b07fa32b873c990e55397159df2ac4b6cfd9be3e5c2b6a70dc82a9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\"" Apr 30 00:21:08.586881 containerd[1593]: time="2025-04-30T00:21:08.586587101Z" level=info msg="StartContainer for \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\"" Apr 30 00:21:08.666519 containerd[1593]: time="2025-04-30T00:21:08.666323809Z" level=info msg="StartContainer for \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\" returns successfully" Apr 30 00:21:08.884537 kubelet[2788]: E0430 00:21:08.884179 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:08.902703 kubelet[2788]: E0430 00:21:08.902102 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:08.909770 containerd[1593]: time="2025-04-30T00:21:08.908519985Z" level=info msg="CreateContainer within sandbox \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:21:08.975503 
containerd[1593]: time="2025-04-30T00:21:08.975456955Z" level=info msg="CreateContainer within sandbox \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\"" Apr 30 00:21:08.978867 containerd[1593]: time="2025-04-30T00:21:08.977843782Z" level=info msg="StartContainer for \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\"" Apr 30 00:21:08.982714 kubelet[2788]: I0430 00:21:08.982570 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-d6fjj" podStartSLOduration=1.160718057 podStartE2EDuration="13.982521851s" podCreationTimestamp="2025-04-30 00:20:55 +0000 UTC" firstStartedPulling="2025-04-30 00:20:55.743553142 +0000 UTC m=+15.195800687" lastFinishedPulling="2025-04-30 00:21:08.565356919 +0000 UTC m=+28.017604481" observedRunningTime="2025-04-30 00:21:08.982186621 +0000 UTC m=+28.434434189" watchObservedRunningTime="2025-04-30 00:21:08.982521851 +0000 UTC m=+28.434769420" Apr 30 00:21:09.161574 containerd[1593]: time="2025-04-30T00:21:09.161430590Z" level=info msg="StartContainer for \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\" returns successfully" Apr 30 00:21:09.203166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab-rootfs.mount: Deactivated successfully. 
Apr 30 00:21:09.205640 containerd[1593]: time="2025-04-30T00:21:09.205434387Z" level=info msg="shim disconnected" id=def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab namespace=k8s.io Apr 30 00:21:09.205640 containerd[1593]: time="2025-04-30T00:21:09.205500009Z" level=warning msg="cleaning up after shim disconnected" id=def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab namespace=k8s.io Apr 30 00:21:09.205640 containerd[1593]: time="2025-04-30T00:21:09.205510549Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:21:09.908209 kubelet[2788]: E0430 00:21:09.907364 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:09.908209 kubelet[2788]: E0430 00:21:09.907385 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:09.921318 containerd[1593]: time="2025-04-30T00:21:09.921073126Z" level=info msg="CreateContainer within sandbox \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:21:09.944049 containerd[1593]: time="2025-04-30T00:21:09.943172955Z" level=info msg="CreateContainer within sandbox \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\"" Apr 30 00:21:09.947130 containerd[1593]: time="2025-04-30T00:21:09.944534284Z" level=info msg="StartContainer for \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\"" Apr 30 00:21:10.032722 containerd[1593]: time="2025-04-30T00:21:10.032648285Z" level=info msg="StartContainer for 
\"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\" returns successfully" Apr 30 00:21:10.294926 kubelet[2788]: I0430 00:21:10.294730 2788 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:21:10.342756 kubelet[2788]: I0430 00:21:10.342466 2788 topology_manager.go:215] "Topology Admit Handler" podUID="252fcfa8-97f4-4d04-95ff-c628c672343d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6cp95" Apr 30 00:21:10.348031 kubelet[2788]: I0430 00:21:10.346008 2788 topology_manager.go:215] "Topology Admit Handler" podUID="7df48c2d-e696-41f9-afc6-ebf0c33248a4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hdhjk" Apr 30 00:21:10.415123 kubelet[2788]: I0430 00:21:10.415074 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/252fcfa8-97f4-4d04-95ff-c628c672343d-config-volume\") pod \"coredns-7db6d8ff4d-6cp95\" (UID: \"252fcfa8-97f4-4d04-95ff-c628c672343d\") " pod="kube-system/coredns-7db6d8ff4d-6cp95" Apr 30 00:21:10.415415 kubelet[2788]: I0430 00:21:10.415390 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7df48c2d-e696-41f9-afc6-ebf0c33248a4-config-volume\") pod \"coredns-7db6d8ff4d-hdhjk\" (UID: \"7df48c2d-e696-41f9-afc6-ebf0c33248a4\") " pod="kube-system/coredns-7db6d8ff4d-hdhjk" Apr 30 00:21:10.415540 kubelet[2788]: I0430 00:21:10.415524 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7pxl\" (UniqueName: \"kubernetes.io/projected/252fcfa8-97f4-4d04-95ff-c628c672343d-kube-api-access-d7pxl\") pod \"coredns-7db6d8ff4d-6cp95\" (UID: \"252fcfa8-97f4-4d04-95ff-c628c672343d\") " pod="kube-system/coredns-7db6d8ff4d-6cp95" Apr 30 00:21:10.415656 kubelet[2788]: I0430 00:21:10.415644 2788 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fhlr\" (UniqueName: \"kubernetes.io/projected/7df48c2d-e696-41f9-afc6-ebf0c33248a4-kube-api-access-9fhlr\") pod \"coredns-7db6d8ff4d-hdhjk\" (UID: \"7df48c2d-e696-41f9-afc6-ebf0c33248a4\") " pod="kube-system/coredns-7db6d8ff4d-hdhjk" Apr 30 00:21:10.669069 kubelet[2788]: E0430 00:21:10.668838 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:10.672831 kubelet[2788]: E0430 00:21:10.670795 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:10.673018 containerd[1593]: time="2025-04-30T00:21:10.671100496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6cp95,Uid:252fcfa8-97f4-4d04-95ff-c628c672343d,Namespace:kube-system,Attempt:0,}" Apr 30 00:21:10.673586 containerd[1593]: time="2025-04-30T00:21:10.673545448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hdhjk,Uid:7df48c2d-e696-41f9-afc6-ebf0c33248a4,Namespace:kube-system,Attempt:0,}" Apr 30 00:21:10.916777 kubelet[2788]: E0430 00:21:10.915346 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:10.944961 kubelet[2788]: I0430 00:21:10.943625 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wlsw9" podStartSLOduration=5.76481879 podStartE2EDuration="15.943596707s" podCreationTimestamp="2025-04-30 00:20:55 +0000 UTC" firstStartedPulling="2025-04-30 00:20:55.687801699 +0000 UTC m=+15.140049254" lastFinishedPulling="2025-04-30 00:21:05.866579594 +0000 UTC m=+25.318827171" 
observedRunningTime="2025-04-30 00:21:10.941546947 +0000 UTC m=+30.393794516" watchObservedRunningTime="2025-04-30 00:21:10.943596707 +0000 UTC m=+30.395844274" Apr 30 00:21:11.918569 kubelet[2788]: E0430 00:21:11.918514 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:12.532294 systemd-networkd[1223]: cilium_host: Link UP Apr 30 00:21:12.532734 systemd-networkd[1223]: cilium_net: Link UP Apr 30 00:21:12.535087 systemd-networkd[1223]: cilium_net: Gained carrier Apr 30 00:21:12.535323 systemd-networkd[1223]: cilium_host: Gained carrier Apr 30 00:21:12.689377 systemd-networkd[1223]: cilium_vxlan: Link UP Apr 30 00:21:12.689387 systemd-networkd[1223]: cilium_vxlan: Gained carrier Apr 30 00:21:12.709712 systemd-networkd[1223]: cilium_host: Gained IPv6LL Apr 30 00:21:12.830645 systemd-networkd[1223]: cilium_net: Gained IPv6LL Apr 30 00:21:12.921505 kubelet[2788]: E0430 00:21:12.921440 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:13.143219 kernel: NET: Registered PF_ALG protocol family Apr 30 00:21:14.164600 systemd-networkd[1223]: lxc_health: Link UP Apr 30 00:21:14.177094 systemd-networkd[1223]: lxc_health: Gained carrier Apr 30 00:21:14.238232 systemd-networkd[1223]: cilium_vxlan: Gained IPv6LL Apr 30 00:21:14.812919 systemd-networkd[1223]: lxc02b9d968676f: Link UP Apr 30 00:21:14.819884 kernel: eth0: renamed from tmp805e0 Apr 30 00:21:14.833471 systemd-networkd[1223]: lxc02b9d968676f: Gained carrier Apr 30 00:21:14.869965 kernel: eth0: renamed from tmp3d5c2 Apr 30 00:21:14.867728 systemd-networkd[1223]: lxcd5d3bfae2528: Link UP Apr 30 00:21:14.883274 systemd-networkd[1223]: lxcd5d3bfae2528: Gained carrier Apr 30 00:21:15.390740 systemd-networkd[1223]: lxc_health: 
Gained IPv6LL Apr 30 00:21:15.522311 kubelet[2788]: E0430 00:21:15.522245 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:16.031740 systemd-networkd[1223]: lxc02b9d968676f: Gained IPv6LL Apr 30 00:21:16.157167 systemd-networkd[1223]: lxcd5d3bfae2528: Gained IPv6LL Apr 30 00:21:17.462665 kubelet[2788]: I0430 00:21:17.462173 2788 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:21:17.464432 kubelet[2788]: E0430 00:21:17.464392 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:17.953765 kubelet[2788]: E0430 00:21:17.953618 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:18.871368 systemd[1]: Started sshd@7-146.190.146.79:22-147.75.109.163:60738.service - OpenSSH per-connection server daemon (147.75.109.163:60738). Apr 30 00:21:19.011398 sshd[4004]: Accepted publickey for core from 147.75.109.163 port 60738 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:19.014579 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:19.039300 systemd-logind[1569]: New session 8 of user core. Apr 30 00:21:19.048257 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:21:19.685890 sshd[4007]: Connection closed by 147.75.109.163 port 60738 Apr 30 00:21:19.686645 sshd-session[4004]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:19.695269 systemd[1]: sshd@7-146.190.146.79:22-147.75.109.163:60738.service: Deactivated successfully. 
Apr 30 00:21:19.703322 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:21:19.706800 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:21:19.708410 systemd-logind[1569]: Removed session 8. Apr 30 00:21:21.030263 containerd[1593]: time="2025-04-30T00:21:21.028995448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:21:21.030263 containerd[1593]: time="2025-04-30T00:21:21.029075329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:21:21.030263 containerd[1593]: time="2025-04-30T00:21:21.029099321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:21.030263 containerd[1593]: time="2025-04-30T00:21:21.029231390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:21.083947 containerd[1593]: time="2025-04-30T00:21:21.063004764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:21:21.083947 containerd[1593]: time="2025-04-30T00:21:21.063296245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:21:21.083947 containerd[1593]: time="2025-04-30T00:21:21.063450711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:21.083947 containerd[1593]: time="2025-04-30T00:21:21.063899950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:21.201589 systemd[1]: run-containerd-runc-k8s.io-3d5c2df401d0579488ccb88b414281d93e182d5a35858b430279cd1ae5e86db6-runc.GgBjaY.mount: Deactivated successfully. Apr 30 00:21:21.255267 kubelet[2788]: E0430 00:21:21.255210 2788 cadvisor_stats_provider.go:500] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod252fcfa8-97f4-4d04-95ff-c628c672343d/805e077b42df9f0fbb5bde48f4c0f414fea2a3e59f5024d55a4b179893b1807f\": RecentStats: unable to find data in memory cache]" Apr 30 00:21:21.264753 containerd[1593]: time="2025-04-30T00:21:21.264704359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6cp95,Uid:252fcfa8-97f4-4d04-95ff-c628c672343d,Namespace:kube-system,Attempt:0,} returns sandbox id \"805e077b42df9f0fbb5bde48f4c0f414fea2a3e59f5024d55a4b179893b1807f\"" Apr 30 00:21:21.268847 kubelet[2788]: E0430 00:21:21.268808 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:21.275066 containerd[1593]: time="2025-04-30T00:21:21.274998676Z" level=info msg="CreateContainer within sandbox \"805e077b42df9f0fbb5bde48f4c0f414fea2a3e59f5024d55a4b179893b1807f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:21:21.298505 containerd[1593]: time="2025-04-30T00:21:21.298199929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hdhjk,Uid:7df48c2d-e696-41f9-afc6-ebf0c33248a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d5c2df401d0579488ccb88b414281d93e182d5a35858b430279cd1ae5e86db6\"" Apr 30 00:21:21.302022 kubelet[2788]: E0430 00:21:21.301981 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 
00:21:21.308694 containerd[1593]: time="2025-04-30T00:21:21.307881417Z" level=info msg="CreateContainer within sandbox \"3d5c2df401d0579488ccb88b414281d93e182d5a35858b430279cd1ae5e86db6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:21:21.308694 containerd[1593]: time="2025-04-30T00:21:21.308354019Z" level=info msg="CreateContainer within sandbox \"805e077b42df9f0fbb5bde48f4c0f414fea2a3e59f5024d55a4b179893b1807f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80de708277fe6b34d124f06ea58956636314f62141d395cb78347903f8a5b495\"" Apr 30 00:21:21.309269 containerd[1593]: time="2025-04-30T00:21:21.309110268Z" level=info msg="StartContainer for \"80de708277fe6b34d124f06ea58956636314f62141d395cb78347903f8a5b495\"" Apr 30 00:21:21.332285 containerd[1593]: time="2025-04-30T00:21:21.332221750Z" level=info msg="CreateContainer within sandbox \"3d5c2df401d0579488ccb88b414281d93e182d5a35858b430279cd1ae5e86db6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2132a1d9817174d0c4a35417e7f05bc5b6c87a95dc07521716d4a16d7a761cd\"" Apr 30 00:21:21.335597 containerd[1593]: time="2025-04-30T00:21:21.335412049Z" level=info msg="StartContainer for \"a2132a1d9817174d0c4a35417e7f05bc5b6c87a95dc07521716d4a16d7a761cd\"" Apr 30 00:21:21.420100 containerd[1593]: time="2025-04-30T00:21:21.419632564Z" level=info msg="StartContainer for \"80de708277fe6b34d124f06ea58956636314f62141d395cb78347903f8a5b495\" returns successfully" Apr 30 00:21:21.442491 containerd[1593]: time="2025-04-30T00:21:21.442370901Z" level=info msg="StartContainer for \"a2132a1d9817174d0c4a35417e7f05bc5b6c87a95dc07521716d4a16d7a761cd\" returns successfully" Apr 30 00:21:21.981675 kubelet[2788]: E0430 00:21:21.980714 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:21.989285 kubelet[2788]: E0430 
00:21:21.989254 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:22.023863 kubelet[2788]: I0430 00:21:22.022682 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6cp95" podStartSLOduration=27.022657594 podStartE2EDuration="27.022657594s" podCreationTimestamp="2025-04-30 00:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:21:22.021487907 +0000 UTC m=+41.473735476" watchObservedRunningTime="2025-04-30 00:21:22.022657594 +0000 UTC m=+41.474905161" Apr 30 00:21:22.026145 kubelet[2788]: I0430 00:21:22.026088 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hdhjk" podStartSLOduration=27.026054891 podStartE2EDuration="27.026054891s" podCreationTimestamp="2025-04-30 00:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:21:22.000967901 +0000 UTC m=+41.453215465" watchObservedRunningTime="2025-04-30 00:21:22.026054891 +0000 UTC m=+41.478302459" Apr 30 00:21:22.991707 kubelet[2788]: E0430 00:21:22.991671 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:22.993454 kubelet[2788]: E0430 00:21:22.993426 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:23.994752 kubelet[2788]: E0430 00:21:23.994686 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:23.995379 kubelet[2788]: E0430 00:21:23.995208 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:24.700340 systemd[1]: Started sshd@8-146.190.146.79:22-147.75.109.163:60742.service - OpenSSH per-connection server daemon (147.75.109.163:60742). Apr 30 00:21:24.811980 sshd[4190]: Accepted publickey for core from 147.75.109.163 port 60742 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:24.815063 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:24.823340 systemd-logind[1569]: New session 9 of user core. Apr 30 00:21:24.832443 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 00:21:25.080261 sshd[4193]: Connection closed by 147.75.109.163 port 60742 Apr 30 00:21:25.081429 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:25.087629 systemd[1]: sshd@8-146.190.146.79:22-147.75.109.163:60742.service: Deactivated successfully. Apr 30 00:21:25.092636 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:21:25.094059 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:21:25.095458 systemd-logind[1569]: Removed session 9. Apr 30 00:21:30.109212 systemd[1]: Started sshd@9-146.190.146.79:22-147.75.109.163:51224.service - OpenSSH per-connection server daemon (147.75.109.163:51224). Apr 30 00:21:30.169108 sshd[4207]: Accepted publickey for core from 147.75.109.163 port 51224 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:30.171698 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:30.181198 systemd-logind[1569]: New session 10 of user core. 
Apr 30 00:21:30.188495 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:21:30.357106 sshd[4210]: Connection closed by 147.75.109.163 port 51224 Apr 30 00:21:30.358232 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:30.364049 systemd[1]: sshd@9-146.190.146.79:22-147.75.109.163:51224.service: Deactivated successfully. Apr 30 00:21:30.369764 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:21:30.370137 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:21:30.372421 systemd-logind[1569]: Removed session 10. Apr 30 00:21:35.367264 systemd[1]: Started sshd@10-146.190.146.79:22-147.75.109.163:51226.service - OpenSSH per-connection server daemon (147.75.109.163:51226). Apr 30 00:21:35.438416 sshd[4223]: Accepted publickey for core from 147.75.109.163 port 51226 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:35.440722 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:35.447939 systemd-logind[1569]: New session 11 of user core. Apr 30 00:21:35.454572 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:21:35.610643 sshd[4226]: Connection closed by 147.75.109.163 port 51226 Apr 30 00:21:35.610017 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:35.616868 systemd[1]: sshd@10-146.190.146.79:22-147.75.109.163:51226.service: Deactivated successfully. Apr 30 00:21:35.621440 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:21:35.623677 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:21:35.631528 systemd[1]: Started sshd@11-146.190.146.79:22-147.75.109.163:51232.service - OpenSSH per-connection server daemon (147.75.109.163:51232). Apr 30 00:21:35.632748 systemd-logind[1569]: Removed session 11. 
Apr 30 00:21:35.687188 sshd[4238]: Accepted publickey for core from 147.75.109.163 port 51232 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:35.689096 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:35.694830 systemd-logind[1569]: New session 12 of user core. Apr 30 00:21:35.707401 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:21:35.915091 sshd[4241]: Connection closed by 147.75.109.163 port 51232 Apr 30 00:21:35.919085 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:35.931351 systemd[1]: Started sshd@12-146.190.146.79:22-147.75.109.163:51246.service - OpenSSH per-connection server daemon (147.75.109.163:51246). Apr 30 00:21:35.931902 systemd[1]: sshd@11-146.190.146.79:22-147.75.109.163:51232.service: Deactivated successfully. Apr 30 00:21:35.954224 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:21:35.966088 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:21:35.970266 systemd-logind[1569]: Removed session 12. Apr 30 00:21:36.015614 sshd[4247]: Accepted publickey for core from 147.75.109.163 port 51246 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:36.017673 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:36.025919 systemd-logind[1569]: New session 13 of user core. Apr 30 00:21:36.031451 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:21:36.207715 sshd[4253]: Connection closed by 147.75.109.163 port 51246 Apr 30 00:21:36.208344 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:36.214530 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. Apr 30 00:21:36.215592 systemd[1]: sshd@12-146.190.146.79:22-147.75.109.163:51246.service: Deactivated successfully. 
Apr 30 00:21:36.225520 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:21:36.228621 systemd-logind[1569]: Removed session 13. Apr 30 00:21:41.221604 systemd[1]: Started sshd@13-146.190.146.79:22-147.75.109.163:34436.service - OpenSSH per-connection server daemon (147.75.109.163:34436). Apr 30 00:21:41.281102 sshd[4266]: Accepted publickey for core from 147.75.109.163 port 34436 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:41.283351 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:41.289894 systemd-logind[1569]: New session 14 of user core. Apr 30 00:21:41.295273 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:21:41.457190 sshd[4269]: Connection closed by 147.75.109.163 port 34436 Apr 30 00:21:41.458143 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:41.464598 systemd[1]: sshd@13-146.190.146.79:22-147.75.109.163:34436.service: Deactivated successfully. Apr 30 00:21:41.471519 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:21:41.472986 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:21:41.474643 systemd-logind[1569]: Removed session 14. Apr 30 00:21:46.467338 systemd[1]: Started sshd@14-146.190.146.79:22-147.75.109.163:34440.service - OpenSSH per-connection server daemon (147.75.109.163:34440). Apr 30 00:21:46.532784 sshd[4279]: Accepted publickey for core from 147.75.109.163 port 34440 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:46.532536 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:46.539046 systemd-logind[1569]: New session 15 of user core. Apr 30 00:21:46.550331 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 30 00:21:46.701452 sshd[4282]: Connection closed by 147.75.109.163 port 34440 Apr 30 00:21:46.702449 sshd-session[4279]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:46.708812 systemd[1]: sshd@14-146.190.146.79:22-147.75.109.163:34440.service: Deactivated successfully. Apr 30 00:21:46.714224 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:21:46.715445 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:21:46.716457 systemd-logind[1569]: Removed session 15. Apr 30 00:21:51.713273 systemd[1]: Started sshd@15-146.190.146.79:22-147.75.109.163:38536.service - OpenSSH per-connection server daemon (147.75.109.163:38536). Apr 30 00:21:51.769782 sshd[4293]: Accepted publickey for core from 147.75.109.163 port 38536 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:51.771738 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:51.777326 systemd-logind[1569]: New session 16 of user core. Apr 30 00:21:51.787923 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:21:51.934514 sshd[4296]: Connection closed by 147.75.109.163 port 38536 Apr 30 00:21:51.934796 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:51.939024 systemd[1]: sshd@15-146.190.146.79:22-147.75.109.163:38536.service: Deactivated successfully. Apr 30 00:21:51.944815 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:21:51.944932 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:21:51.951285 systemd[1]: Started sshd@16-146.190.146.79:22-147.75.109.163:38542.service - OpenSSH per-connection server daemon (147.75.109.163:38542). Apr 30 00:21:51.952445 systemd-logind[1569]: Removed session 16. 
Apr 30 00:21:52.016668 sshd[4307]: Accepted publickey for core from 147.75.109.163 port 38542 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:52.018838 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:52.025414 systemd-logind[1569]: New session 17 of user core. Apr 30 00:21:52.034552 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:21:52.381336 sshd[4310]: Connection closed by 147.75.109.163 port 38542 Apr 30 00:21:52.382912 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:52.393299 systemd[1]: Started sshd@17-146.190.146.79:22-147.75.109.163:38544.service - OpenSSH per-connection server daemon (147.75.109.163:38544). Apr 30 00:21:52.393934 systemd[1]: sshd@16-146.190.146.79:22-147.75.109.163:38542.service: Deactivated successfully. Apr 30 00:21:52.406124 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:21:52.406680 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:21:52.409765 systemd-logind[1569]: Removed session 17. Apr 30 00:21:52.496367 sshd[4317]: Accepted publickey for core from 147.75.109.163 port 38544 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:52.498596 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:52.505747 systemd-logind[1569]: New session 18 of user core. Apr 30 00:21:52.513318 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:21:54.757218 sshd[4323]: Connection closed by 147.75.109.163 port 38544 Apr 30 00:21:54.761960 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:54.779343 systemd[1]: Started sshd@18-146.190.146.79:22-147.75.109.163:38550.service - OpenSSH per-connection server daemon (147.75.109.163:38550). 
Apr 30 00:21:54.784403 systemd[1]: sshd@17-146.190.146.79:22-147.75.109.163:38544.service: Deactivated successfully. Apr 30 00:21:54.791371 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:21:54.791514 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:21:54.801672 systemd-logind[1569]: Removed session 18. Apr 30 00:21:54.871623 sshd[4336]: Accepted publickey for core from 147.75.109.163 port 38550 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:54.873604 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:54.879687 systemd-logind[1569]: New session 19 of user core. Apr 30 00:21:54.886712 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:21:55.389826 sshd[4342]: Connection closed by 147.75.109.163 port 38550 Apr 30 00:21:55.390931 sshd-session[4336]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:55.400350 systemd[1]: Started sshd@19-146.190.146.79:22-147.75.109.163:38558.service - OpenSSH per-connection server daemon (147.75.109.163:38558). Apr 30 00:21:55.407144 systemd[1]: sshd@18-146.190.146.79:22-147.75.109.163:38550.service: Deactivated successfully. Apr 30 00:21:55.418404 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:21:55.419652 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:21:55.422401 systemd-logind[1569]: Removed session 19. Apr 30 00:21:55.463770 sshd[4348]: Accepted publickey for core from 147.75.109.163 port 38558 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:21:55.466192 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:21:55.472082 systemd-logind[1569]: New session 20 of user core. Apr 30 00:21:55.476355 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 30 00:21:55.635542 sshd[4354]: Connection closed by 147.75.109.163 port 38558 Apr 30 00:21:55.636154 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:55.641930 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:21:55.643426 systemd[1]: sshd@19-146.190.146.79:22-147.75.109.163:38558.service: Deactivated successfully. Apr 30 00:21:55.651500 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:21:55.652976 systemd-logind[1569]: Removed session 20. Apr 30 00:21:55.723390 kubelet[2788]: E0430 00:21:55.723229 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:21:58.724699 kubelet[2788]: E0430 00:21:58.723871 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:22:00.648564 systemd[1]: Started sshd@20-146.190.146.79:22-147.75.109.163:43386.service - OpenSSH per-connection server daemon (147.75.109.163:43386). Apr 30 00:22:00.710770 sshd[4371]: Accepted publickey for core from 147.75.109.163 port 43386 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:22:00.713427 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:00.720696 systemd-logind[1569]: New session 21 of user core. Apr 30 00:22:00.725553 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 00:22:00.873608 sshd[4374]: Connection closed by 147.75.109.163 port 43386 Apr 30 00:22:00.876221 sshd-session[4371]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:00.884358 systemd[1]: sshd@20-146.190.146.79:22-147.75.109.163:43386.service: Deactivated successfully. 
Apr 30 00:22:00.891682 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 00:22:00.893203 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit.
Apr 30 00:22:00.894730 systemd-logind[1569]: Removed session 21.
Apr 30 00:22:05.723221 kubelet[2788]: E0430 00:22:05.723113 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 00:22:05.886704 systemd[1]: Started sshd@21-146.190.146.79:22-147.75.109.163:43396.service - OpenSSH per-connection server daemon (147.75.109.163:43396).
Apr 30 00:22:05.937938 sshd[4385]: Accepted publickey for core from 147.75.109.163 port 43396 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:22:05.940183 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:22:05.946880 systemd-logind[1569]: New session 22 of user core.
Apr 30 00:22:05.959812 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 00:22:06.105630 sshd[4388]: Connection closed by 147.75.109.163 port 43396
Apr 30 00:22:06.104304 sshd-session[4385]: pam_unix(sshd:session): session closed for user core
Apr 30 00:22:06.110822 systemd[1]: sshd@21-146.190.146.79:22-147.75.109.163:43396.service: Deactivated successfully.
Apr 30 00:22:06.115238 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit.
Apr 30 00:22:06.115470 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 00:22:06.119039 systemd-logind[1569]: Removed session 22.
Apr 30 00:22:11.115487 systemd[1]: Started sshd@22-146.190.146.79:22-147.75.109.163:49006.service - OpenSSH per-connection server daemon (147.75.109.163:49006).
Apr 30 00:22:11.168007 sshd[4398]: Accepted publickey for core from 147.75.109.163 port 49006 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:22:11.169839 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:22:11.175985 systemd-logind[1569]: New session 23 of user core.
Apr 30 00:22:11.181370 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:22:11.331690 sshd[4401]: Connection closed by 147.75.109.163 port 49006
Apr 30 00:22:11.333197 sshd-session[4398]: pam_unix(sshd:session): session closed for user core
Apr 30 00:22:11.338038 systemd[1]: sshd@22-146.190.146.79:22-147.75.109.163:49006.service: Deactivated successfully.
Apr 30 00:22:11.344441 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 00:22:11.346126 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit.
Apr 30 00:22:11.347523 systemd-logind[1569]: Removed session 23.
Apr 30 00:22:16.348415 systemd[1]: Started sshd@23-146.190.146.79:22-147.75.109.163:49020.service - OpenSSH per-connection server daemon (147.75.109.163:49020).
Apr 30 00:22:16.403566 sshd[4412]: Accepted publickey for core from 147.75.109.163 port 49020 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:22:16.405729 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:22:16.411164 systemd-logind[1569]: New session 24 of user core.
Apr 30 00:22:16.416516 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 00:22:16.575694 sshd[4415]: Connection closed by 147.75.109.163 port 49020
Apr 30 00:22:16.576624 sshd-session[4412]: pam_unix(sshd:session): session closed for user core
Apr 30 00:22:16.582685 systemd[1]: sshd@23-146.190.146.79:22-147.75.109.163:49020.service: Deactivated successfully.
Apr 30 00:22:16.589368 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 00:22:16.592569 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit.
Apr 30 00:22:16.597038 systemd[1]: Started sshd@24-146.190.146.79:22-147.75.109.163:49028.service - OpenSSH per-connection server daemon (147.75.109.163:49028).
Apr 30 00:22:16.600406 systemd-logind[1569]: Removed session 24.
Apr 30 00:22:16.663211 sshd[4426]: Accepted publickey for core from 147.75.109.163 port 49028 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8
Apr 30 00:22:16.665429 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:22:16.672306 systemd-logind[1569]: New session 25 of user core.
Apr 30 00:22:16.678426 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 00:22:18.394430 systemd[1]: run-containerd-runc-k8s.io-f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806-runc.GQrB0x.mount: Deactivated successfully.
Apr 30 00:22:18.419307 containerd[1593]: time="2025-04-30T00:22:18.418865400Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:22:18.486301 containerd[1593]: time="2025-04-30T00:22:18.486102110Z" level=info msg="StopContainer for \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\" with timeout 30 (s)"
Apr 30 00:22:18.486686 containerd[1593]: time="2025-04-30T00:22:18.486104107Z" level=info msg="StopContainer for \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\" with timeout 2 (s)"
Apr 30 00:22:18.488271 containerd[1593]: time="2025-04-30T00:22:18.488168111Z" level=info msg="Stop container \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\" with signal terminated"
Apr 30 00:22:18.493428 containerd[1593]: time="2025-04-30T00:22:18.493307588Z" level=info msg="Stop container \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\" with signal terminated"
Apr 30 00:22:18.504974 systemd-networkd[1223]: lxc_health: Link DOWN
Apr 30 00:22:18.504984 systemd-networkd[1223]: lxc_health: Lost carrier
Apr 30 00:22:18.564066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806-rootfs.mount: Deactivated successfully.
Apr 30 00:22:18.570300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610-rootfs.mount: Deactivated successfully.
Apr 30 00:22:18.573910 containerd[1593]: time="2025-04-30T00:22:18.573807851Z" level=info msg="shim disconnected" id=f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806 namespace=k8s.io
Apr 30 00:22:18.573910 containerd[1593]: time="2025-04-30T00:22:18.573911857Z" level=warning msg="cleaning up after shim disconnected" id=f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806 namespace=k8s.io
Apr 30 00:22:18.574101 containerd[1593]: time="2025-04-30T00:22:18.573925490Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:22:18.575357 containerd[1593]: time="2025-04-30T00:22:18.574456844Z" level=info msg="shim disconnected" id=d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610 namespace=k8s.io
Apr 30 00:22:18.575357 containerd[1593]: time="2025-04-30T00:22:18.574528721Z" level=warning msg="cleaning up after shim disconnected" id=d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610 namespace=k8s.io
Apr 30 00:22:18.575357 containerd[1593]: time="2025-04-30T00:22:18.574540577Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:22:18.601916 containerd[1593]: time="2025-04-30T00:22:18.601135778Z" level=info msg="StopContainer for \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\" returns successfully"
Apr 30 00:22:18.604388 containerd[1593]: time="2025-04-30T00:22:18.604348241Z" level=info msg="StopPodSandbox for \"20218d8065b07fa32b873c990e55397159df2ac4b6cfd9be3e5c2b6a70dc82a9\""
Apr 30 00:22:18.614690 containerd[1593]: time="2025-04-30T00:22:18.613786618Z" level=info msg="StopContainer for \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\" returns successfully"
Apr 30 00:22:18.615379 containerd[1593]: time="2025-04-30T00:22:18.615240659Z" level=info msg="Container to stop \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:22:18.615711 containerd[1593]: time="2025-04-30T00:22:18.615671147Z" level=info msg="StopPodSandbox for \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\""
Apr 30 00:22:18.615797 containerd[1593]: time="2025-04-30T00:22:18.615723864Z" level=info msg="Container to stop \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:22:18.615797 containerd[1593]: time="2025-04-30T00:22:18.615764614Z" level=info msg="Container to stop \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:22:18.615797 containerd[1593]: time="2025-04-30T00:22:18.615776652Z" level=info msg="Container to stop \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:22:18.615797 containerd[1593]: time="2025-04-30T00:22:18.615785933Z" level=info msg="Container to stop \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:22:18.615960 containerd[1593]: time="2025-04-30T00:22:18.615799830Z" level=info msg="Container to stop \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:22:18.618328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20218d8065b07fa32b873c990e55397159df2ac4b6cfd9be3e5c2b6a70dc82a9-shm.mount: Deactivated successfully.
Apr 30 00:22:18.684733 containerd[1593]: time="2025-04-30T00:22:18.684418325Z" level=info msg="shim disconnected" id=51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb namespace=k8s.io
Apr 30 00:22:18.686015 containerd[1593]: time="2025-04-30T00:22:18.685937018Z" level=warning msg="cleaning up after shim disconnected" id=51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb namespace=k8s.io
Apr 30 00:22:18.686015 containerd[1593]: time="2025-04-30T00:22:18.686001629Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:22:18.688165 containerd[1593]: time="2025-04-30T00:22:18.687827853Z" level=info msg="shim disconnected" id=20218d8065b07fa32b873c990e55397159df2ac4b6cfd9be3e5c2b6a70dc82a9 namespace=k8s.io
Apr 30 00:22:18.688165 containerd[1593]: time="2025-04-30T00:22:18.687923768Z" level=warning msg="cleaning up after shim disconnected" id=20218d8065b07fa32b873c990e55397159df2ac4b6cfd9be3e5c2b6a70dc82a9 namespace=k8s.io
Apr 30 00:22:18.688165 containerd[1593]: time="2025-04-30T00:22:18.687937177Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:22:18.723410 containerd[1593]: time="2025-04-30T00:22:18.723357431Z" level=info msg="TearDown network for sandbox \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" successfully"
Apr 30 00:22:18.723410 containerd[1593]: time="2025-04-30T00:22:18.723399182Z" level=info msg="StopPodSandbox for \"51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb\" returns successfully"
Apr 30 00:22:18.725801 containerd[1593]: time="2025-04-30T00:22:18.725689091Z" level=info msg="TearDown network for sandbox \"20218d8065b07fa32b873c990e55397159df2ac4b6cfd9be3e5c2b6a70dc82a9\" successfully"
Apr 30 00:22:18.725801 containerd[1593]: time="2025-04-30T00:22:18.725726483Z" level=info msg="StopPodSandbox for \"20218d8065b07fa32b873c990e55397159df2ac4b6cfd9be3e5c2b6a70dc82a9\" returns successfully"
Apr 30 00:22:18.934694 kubelet[2788]: I0430 00:22:18.934376 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-bpf-maps\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.934694 kubelet[2788]: I0430 00:22:18.934440 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-xtables-lock\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.934694 kubelet[2788]: I0430 00:22:18.934466 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g8pz\" (UniqueName: \"kubernetes.io/projected/2a74cd86-f205-4075-8071-2eb6c8eb4379-kube-api-access-9g8pz\") pod \"2a74cd86-f205-4075-8071-2eb6c8eb4379\" (UID: \"2a74cd86-f205-4075-8071-2eb6c8eb4379\") "
Apr 30 00:22:18.934694 kubelet[2788]: I0430 00:22:18.934489 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-host-proc-sys-kernel\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.934694 kubelet[2788]: I0430 00:22:18.934508 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cni-path\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.934694 kubelet[2788]: I0430 00:22:18.934602 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-etc-cni-netd\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.935704 kubelet[2788]: I0430 00:22:18.934633 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d58bc04-0534-44ce-b01d-be955be7c0bb-clustermesh-secrets\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.935704 kubelet[2788]: I0430 00:22:18.934668 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82tqq\" (UniqueName: \"kubernetes.io/projected/0d58bc04-0534-44ce-b01d-be955be7c0bb-kube-api-access-82tqq\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.935704 kubelet[2788]: I0430 00:22:18.934699 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-host-proc-sys-net\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.935704 kubelet[2788]: I0430 00:22:18.934719 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-run\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.935704 kubelet[2788]: I0430 00:22:18.934735 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-cgroup\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.935704 kubelet[2788]: I0430 00:22:18.934756 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d58bc04-0534-44ce-b01d-be955be7c0bb-hubble-tls\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.936034 kubelet[2788]: I0430 00:22:18.934771 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-hostproc\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.936034 kubelet[2788]: I0430 00:22:18.934790 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-config-path\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.936034 kubelet[2788]: I0430 00:22:18.934807 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a74cd86-f205-4075-8071-2eb6c8eb4379-cilium-config-path\") pod \"2a74cd86-f205-4075-8071-2eb6c8eb4379\" (UID: \"2a74cd86-f205-4075-8071-2eb6c8eb4379\") "
Apr 30 00:22:18.936034 kubelet[2788]: I0430 00:22:18.934823 2788 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-lib-modules\") pod \"0d58bc04-0534-44ce-b01d-be955be7c0bb\" (UID: \"0d58bc04-0534-44ce-b01d-be955be7c0bb\") "
Apr 30 00:22:18.937800 kubelet[2788]: I0430 00:22:18.934927 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:22:18.937800 kubelet[2788]: I0430 00:22:18.934488 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:22:18.937800 kubelet[2788]: I0430 00:22:18.937280 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:22:18.937800 kubelet[2788]: I0430 00:22:18.937303 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:22:18.937800 kubelet[2788]: I0430 00:22:18.937319 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:22:18.942443 kubelet[2788]: I0430 00:22:18.942046 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:22:18.946096 kubelet[2788]: I0430 00:22:18.946043 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:22:18.946802 kubelet[2788]: I0430 00:22:18.946342 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cni-path" (OuterVolumeSpecName: "cni-path") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:22:18.946802 kubelet[2788]: I0430 00:22:18.946383 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:22:18.948534 kubelet[2788]: I0430 00:22:18.948494 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-hostproc" (OuterVolumeSpecName: "hostproc") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:22:18.950428 kubelet[2788]: I0430 00:22:18.950387 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d58bc04-0534-44ce-b01d-be955be7c0bb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 00:22:18.950643 kubelet[2788]: I0430 00:22:18.950615 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d58bc04-0534-44ce-b01d-be955be7c0bb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 00:22:18.951648 kubelet[2788]: I0430 00:22:18.950736 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 00:22:18.951648 kubelet[2788]: I0430 00:22:18.950760 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a74cd86-f205-4075-8071-2eb6c8eb4379-kube-api-access-9g8pz" (OuterVolumeSpecName: "kube-api-access-9g8pz") pod "2a74cd86-f205-4075-8071-2eb6c8eb4379" (UID: "2a74cd86-f205-4075-8071-2eb6c8eb4379"). InnerVolumeSpecName "kube-api-access-9g8pz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 00:22:18.951648 kubelet[2788]: I0430 00:22:18.950745 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d58bc04-0534-44ce-b01d-be955be7c0bb-kube-api-access-82tqq" (OuterVolumeSpecName: "kube-api-access-82tqq") pod "0d58bc04-0534-44ce-b01d-be955be7c0bb" (UID: "0d58bc04-0534-44ce-b01d-be955be7c0bb"). InnerVolumeSpecName "kube-api-access-82tqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 00:22:18.954230 kubelet[2788]: I0430 00:22:18.954182 2788 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a74cd86-f205-4075-8071-2eb6c8eb4379-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a74cd86-f205-4075-8071-2eb6c8eb4379" (UID: "2a74cd86-f205-4075-8071-2eb6c8eb4379"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 00:22:19.035623 kubelet[2788]: I0430 00:22:19.035563 2788 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-cgroup\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035623 kubelet[2788]: I0430 00:22:19.035611 2788 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d58bc04-0534-44ce-b01d-be955be7c0bb-hubble-tls\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035623 kubelet[2788]: I0430 00:22:19.035620 2788 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-hostproc\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035623 kubelet[2788]: I0430 00:22:19.035632 2788 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-config-path\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035623 kubelet[2788]: I0430 00:22:19.035645 2788 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a74cd86-f205-4075-8071-2eb6c8eb4379-cilium-config-path\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035994 kubelet[2788]: I0430 00:22:19.035654 2788 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-lib-modules\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035994 kubelet[2788]: I0430 00:22:19.035662 2788 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-bpf-maps\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035994 kubelet[2788]: I0430 00:22:19.035670 2788 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-xtables-lock\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035994 kubelet[2788]: I0430 00:22:19.035681 2788 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-host-proc-sys-kernel\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035994 kubelet[2788]: I0430 00:22:19.035690 2788 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9g8pz\" (UniqueName: \"kubernetes.io/projected/2a74cd86-f205-4075-8071-2eb6c8eb4379-kube-api-access-9g8pz\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035994 kubelet[2788]: I0430 00:22:19.035698 2788 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cni-path\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035994 kubelet[2788]: I0430 00:22:19.035707 2788 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-etc-cni-netd\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.035994 kubelet[2788]: I0430 00:22:19.035716 2788 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d58bc04-0534-44ce-b01d-be955be7c0bb-clustermesh-secrets\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.036231 kubelet[2788]: I0430 00:22:19.035724 2788 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-82tqq\" (UniqueName: \"kubernetes.io/projected/0d58bc04-0534-44ce-b01d-be955be7c0bb-kube-api-access-82tqq\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.036231 kubelet[2788]: I0430 00:22:19.035733 2788 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-host-proc-sys-net\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.036231 kubelet[2788]: I0430 00:22:19.035744 2788 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d58bc04-0534-44ce-b01d-be955be7c0bb-cilium-run\") on node \"ci-4152.2.3-1-91c0161c2f\" DevicePath \"\""
Apr 30 00:22:19.140596 kubelet[2788]: I0430 00:22:19.140548 2788 scope.go:117] "RemoveContainer" containerID="f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806"
Apr 30 00:22:19.152991 containerd[1593]: time="2025-04-30T00:22:19.152749513Z" level=info msg="RemoveContainer for \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\""
Apr 30 00:22:19.158193 containerd[1593]: time="2025-04-30T00:22:19.157834852Z" level=info msg="RemoveContainer for \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\" returns successfully"
Apr 30 00:22:19.170391 kubelet[2788]: I0430 00:22:19.170341 2788 scope.go:117] "RemoveContainer" containerID="def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab"
Apr 30 00:22:19.184010 containerd[1593]: time="2025-04-30T00:22:19.183839291Z" level=info msg="RemoveContainer for \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\""
Apr 30 00:22:19.188841 containerd[1593]: time="2025-04-30T00:22:19.188462241Z" level=info msg="RemoveContainer for \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\" returns successfully"
Apr 30 00:22:19.189294 kubelet[2788]: I0430 00:22:19.189180 2788 scope.go:117] "RemoveContainer" containerID="909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8"
Apr 30 00:22:19.197008 containerd[1593]: time="2025-04-30T00:22:19.196930588Z" level=info msg="RemoveContainer for \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\""
Apr 30 00:22:19.202470 containerd[1593]: time="2025-04-30T00:22:19.202398816Z" level=info msg="RemoveContainer for \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\" returns successfully"
Apr 30 00:22:19.204793 kubelet[2788]: I0430 00:22:19.204724 2788 scope.go:117] "RemoveContainer" containerID="f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca"
Apr 30 00:22:19.209909 containerd[1593]: time="2025-04-30T00:22:19.209243701Z" level=info msg="RemoveContainer for \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\""
Apr 30 00:22:19.215483 containerd[1593]: time="2025-04-30T00:22:19.215309319Z" level=info msg="RemoveContainer for \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\" returns successfully"
Apr 30 00:22:19.216066 kubelet[2788]: I0430 00:22:19.215790 2788 scope.go:117] "RemoveContainer" containerID="16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b"
Apr 30 00:22:19.222306 containerd[1593]: time="2025-04-30T00:22:19.222248478Z" level=info msg="RemoveContainer for \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\""
Apr 30 00:22:19.234201 containerd[1593]: time="2025-04-30T00:22:19.233978058Z" level=info msg="RemoveContainer for \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\" returns successfully"
Apr 30 00:22:19.236052 kubelet[2788]: I0430 00:22:19.236011 2788 scope.go:117] "RemoveContainer" containerID="f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806"
Apr 30 00:22:19.236720 containerd[1593]: time="2025-04-30T00:22:19.236641445Z" level=error msg="ContainerStatus for \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\": not found"
Apr 30 00:22:19.245827 kubelet[2788]: E0430 00:22:19.245626 2788 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\": not found" containerID="f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806"
Apr 30 00:22:19.248825 kubelet[2788]: I0430 00:22:19.245699 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806"} err="failed to get container status \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\": rpc error: code = NotFound desc = an error occurred when try to find container \"f07783a81729d572b31638f124bb8e824394e82fecbe0fb789a1f9d097e50806\": not found"
Apr 30 00:22:19.249322 kubelet[2788]: I0430 00:22:19.249119 2788 scope.go:117] "RemoveContainer" containerID="def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab"
Apr 30 00:22:19.249805 containerd[1593]: time="2025-04-30T00:22:19.249738983Z" level=error msg="ContainerStatus for \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\": not found"
Apr 30 00:22:19.250038 kubelet[2788]: E0430 00:22:19.250005 2788 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\": not found" containerID="def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab"
Apr 30 00:22:19.250144 kubelet[2788]: I0430 00:22:19.250055 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab"} err="failed to get container status \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\": rpc error: code = NotFound desc = an error occurred when try to find container \"def154dc3247e8f70a619e4271fa03a36d8a6190d29ce6488c145c4c3b1d0bab\": not found"
Apr 30 00:22:19.250144 kubelet[2788]: I0430 00:22:19.250097 2788 scope.go:117] "RemoveContainer" containerID="909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8"
Apr 30 00:22:19.250770 containerd[1593]: time="2025-04-30T00:22:19.250560687Z" level=error msg="ContainerStatus for \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\": not found"
Apr 30 00:22:19.250832 kubelet[2788]: E0430 00:22:19.250802 2788 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\": not found" containerID="909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8"
Apr 30 00:22:19.250915 kubelet[2788]: I0430 00:22:19.250837 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8"} err="failed to get container status \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"909e6e6ff0d73338768d5f1a3f0c6629cf954ededdb49e96d52854ff929d0bd8\": not found"
Apr 30 00:22:19.250915 kubelet[2788]: I0430 00:22:19.250882 2788 scope.go:117] "RemoveContainer" 
containerID="f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca" Apr 30 00:22:19.251239 containerd[1593]: time="2025-04-30T00:22:19.251194262Z" level=error msg="ContainerStatus for \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\": not found" Apr 30 00:22:19.251432 kubelet[2788]: E0430 00:22:19.251368 2788 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\": not found" containerID="f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca" Apr 30 00:22:19.251432 kubelet[2788]: I0430 00:22:19.251400 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca"} err="failed to get container status \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0c1f181d3db13f68420ecf8eb2526e673533c82c347fda14a9f0c484f35c1ca\": not found" Apr 30 00:22:19.251432 kubelet[2788]: I0430 00:22:19.251427 2788 scope.go:117] "RemoveContainer" containerID="16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b" Apr 30 00:22:19.251986 containerd[1593]: time="2025-04-30T00:22:19.251870595Z" level=error msg="ContainerStatus for \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\": not found" Apr 30 00:22:19.252162 kubelet[2788]: E0430 00:22:19.252107 2788 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\": not found" containerID="16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b" Apr 30 00:22:19.252234 kubelet[2788]: I0430 00:22:19.252169 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b"} err="failed to get container status \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\": rpc error: code = NotFound desc = an error occurred when try to find container \"16a93f92d66eeeb038a880559f7cd72a360d79bc9fb086753f2b45223be7111b\": not found" Apr 30 00:22:19.252234 kubelet[2788]: I0430 00:22:19.252211 2788 scope.go:117] "RemoveContainer" containerID="d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610" Apr 30 00:22:19.254074 containerd[1593]: time="2025-04-30T00:22:19.254027494Z" level=info msg="RemoveContainer for \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\"" Apr 30 00:22:19.257652 containerd[1593]: time="2025-04-30T00:22:19.257590691Z" level=info msg="RemoveContainer for \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\" returns successfully" Apr 30 00:22:19.258014 kubelet[2788]: I0430 00:22:19.257979 2788 scope.go:117] "RemoveContainer" containerID="d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610" Apr 30 00:22:19.258387 containerd[1593]: time="2025-04-30T00:22:19.258338642Z" level=error msg="ContainerStatus for \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\": not found" Apr 30 00:22:19.258803 kubelet[2788]: E0430 00:22:19.258753 2788 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\": not found" containerID="d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610" Apr 30 00:22:19.258891 kubelet[2788]: I0430 00:22:19.258817 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610"} err="failed to get container status \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3d0bc3f8fbf31f2207b2fbb37a6a7f1a3cbc4223d9c705438fcab262b7e5610\": not found" Apr 30 00:22:19.386554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb-rootfs.mount: Deactivated successfully. Apr 30 00:22:19.387112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20218d8065b07fa32b873c990e55397159df2ac4b6cfd9be3e5c2b6a70dc82a9-rootfs.mount: Deactivated successfully. Apr 30 00:22:19.387818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51722cf1961391b214315c59f84c72965cd3bfd4dbae62bf3265f00cff183bfb-shm.mount: Deactivated successfully. Apr 30 00:22:19.388258 systemd[1]: var-lib-kubelet-pods-0d58bc04\x2d0534\x2d44ce\x2db01d\x2dbe955be7c0bb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 00:22:19.388426 systemd[1]: var-lib-kubelet-pods-2a74cd86\x2df205\x2d4075\x2d8071\x2d2eb6c8eb4379-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9g8pz.mount: Deactivated successfully. Apr 30 00:22:19.388844 systemd[1]: var-lib-kubelet-pods-0d58bc04\x2d0534\x2d44ce\x2db01d\x2dbe955be7c0bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d82tqq.mount: Deactivated successfully. 
Apr 30 00:22:19.389084 systemd[1]: var-lib-kubelet-pods-0d58bc04\x2d0534\x2d44ce\x2db01d\x2dbe955be7c0bb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 00:22:20.305090 sshd[4429]: Connection closed by 147.75.109.163 port 49028 Apr 30 00:22:20.306933 sshd-session[4426]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:20.316646 systemd[1]: Started sshd@25-146.190.146.79:22-147.75.109.163:35182.service - OpenSSH per-connection server daemon (147.75.109.163:35182). Apr 30 00:22:20.319459 systemd[1]: sshd@24-146.190.146.79:22-147.75.109.163:49028.service: Deactivated successfully. Apr 30 00:22:20.322610 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 00:22:20.325813 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit. Apr 30 00:22:20.330925 systemd-logind[1569]: Removed session 25. Apr 30 00:22:20.408819 sshd[4587]: Accepted publickey for core from 147.75.109.163 port 35182 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:22:20.411125 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:20.419489 systemd-logind[1569]: New session 26 of user core. Apr 30 00:22:20.422243 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 30 00:22:20.723177 kubelet[2788]: E0430 00:22:20.723124 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:22:20.727499 kubelet[2788]: I0430 00:22:20.726029 2788 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d58bc04-0534-44ce-b01d-be955be7c0bb" path="/var/lib/kubelet/pods/0d58bc04-0534-44ce-b01d-be955be7c0bb/volumes" Apr 30 00:22:20.727499 kubelet[2788]: I0430 00:22:20.726999 2788 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a74cd86-f205-4075-8071-2eb6c8eb4379" path="/var/lib/kubelet/pods/2a74cd86-f205-4075-8071-2eb6c8eb4379/volumes" Apr 30 00:22:20.885781 kubelet[2788]: E0430 00:22:20.879740 2788 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 00:22:21.206166 sshd[4593]: Connection closed by 147.75.109.163 port 35182 Apr 30 00:22:21.211765 sshd-session[4587]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:21.225339 systemd[1]: Started sshd@26-146.190.146.79:22-147.75.109.163:35184.service - OpenSSH per-connection server daemon (147.75.109.163:35184). Apr 30 00:22:21.227288 systemd[1]: sshd@25-146.190.146.79:22-147.75.109.163:35182.service: Deactivated successfully. Apr 30 00:22:21.244159 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 00:22:21.257982 kubelet[2788]: I0430 00:22:21.252600 2788 topology_manager.go:215] "Topology Admit Handler" podUID="83f7953d-5a1d-4433-91b8-2b4c7b1982b9" podNamespace="kube-system" podName="cilium-b9p88" Apr 30 00:22:21.260946 systemd-logind[1569]: Session 26 logged out. Waiting for processes to exit. 
Apr 30 00:22:21.267165 kubelet[2788]: E0430 00:22:21.265293 2788 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d58bc04-0534-44ce-b01d-be955be7c0bb" containerName="apply-sysctl-overwrites" Apr 30 00:22:21.267165 kubelet[2788]: E0430 00:22:21.265340 2788 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a74cd86-f205-4075-8071-2eb6c8eb4379" containerName="cilium-operator" Apr 30 00:22:21.267165 kubelet[2788]: E0430 00:22:21.265348 2788 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d58bc04-0534-44ce-b01d-be955be7c0bb" containerName="clean-cilium-state" Apr 30 00:22:21.267165 kubelet[2788]: E0430 00:22:21.265355 2788 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d58bc04-0534-44ce-b01d-be955be7c0bb" containerName="cilium-agent" Apr 30 00:22:21.267165 kubelet[2788]: E0430 00:22:21.265367 2788 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d58bc04-0534-44ce-b01d-be955be7c0bb" containerName="mount-cgroup" Apr 30 00:22:21.267165 kubelet[2788]: E0430 00:22:21.265850 2788 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d58bc04-0534-44ce-b01d-be955be7c0bb" containerName="mount-bpf-fs" Apr 30 00:22:21.267322 systemd-logind[1569]: Removed session 26. 
Apr 30 00:22:21.279884 kubelet[2788]: I0430 00:22:21.265947 2788 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d58bc04-0534-44ce-b01d-be955be7c0bb" containerName="cilium-agent" Apr 30 00:22:21.279884 kubelet[2788]: I0430 00:22:21.279087 2788 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a74cd86-f205-4075-8071-2eb6c8eb4379" containerName="cilium-operator" Apr 30 00:22:21.353242 kubelet[2788]: I0430 00:22:21.352837 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-etc-cni-netd\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353242 kubelet[2788]: I0430 00:22:21.352955 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-cilium-config-path\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353242 kubelet[2788]: I0430 00:22:21.352983 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-hubble-tls\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353242 kubelet[2788]: I0430 00:22:21.353001 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcwhw\" (UniqueName: \"kubernetes.io/projected/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-kube-api-access-xcwhw\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353242 kubelet[2788]: I0430 00:22:21.353021 2788 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-cilium-cgroup\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353242 kubelet[2788]: I0430 00:22:21.353043 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-cilium-run\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353598 kubelet[2788]: I0430 00:22:21.353057 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-xtables-lock\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353598 kubelet[2788]: I0430 00:22:21.353072 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-cilium-ipsec-secrets\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353598 kubelet[2788]: I0430 00:22:21.353087 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-hostproc\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353598 kubelet[2788]: I0430 00:22:21.353100 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-cni-path\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353598 kubelet[2788]: I0430 00:22:21.353113 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-clustermesh-secrets\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353598 kubelet[2788]: I0430 00:22:21.353128 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-bpf-maps\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353827 kubelet[2788]: I0430 00:22:21.353142 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-host-proc-sys-net\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353827 kubelet[2788]: I0430 00:22:21.353386 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-lib-modules\") pod \"cilium-b9p88\" (UID: \"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.353827 kubelet[2788]: I0430 00:22:21.353410 2788 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83f7953d-5a1d-4433-91b8-2b4c7b1982b9-host-proc-sys-kernel\") pod \"cilium-b9p88\" (UID: 
\"83f7953d-5a1d-4433-91b8-2b4c7b1982b9\") " pod="kube-system/cilium-b9p88" Apr 30 00:22:21.390260 sshd[4601]: Accepted publickey for core from 147.75.109.163 port 35184 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:22:21.392244 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:21.398729 systemd-logind[1569]: New session 27 of user core. Apr 30 00:22:21.404421 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 00:22:21.489898 sshd[4607]: Connection closed by 147.75.109.163 port 35184 Apr 30 00:22:21.488129 sshd-session[4601]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:21.517937 systemd[1]: sshd@26-146.190.146.79:22-147.75.109.163:35184.service: Deactivated successfully. Apr 30 00:22:21.522142 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 00:22:21.525463 systemd-logind[1569]: Session 27 logged out. Waiting for processes to exit. Apr 30 00:22:21.533532 systemd[1]: Started sshd@27-146.190.146.79:22-147.75.109.163:35192.service - OpenSSH per-connection server daemon (147.75.109.163:35192). Apr 30 00:22:21.537403 systemd-logind[1569]: Removed session 27. Apr 30 00:22:21.591409 sshd[4617]: Accepted publickey for core from 147.75.109.163 port 35192 ssh2: RSA SHA256:DLsEBMHzPaZLMXTor6ubuVW5EU3fgkINfvuTQTYDYW8 Apr 30 00:22:21.593188 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:21.599074 systemd-logind[1569]: New session 28 of user core. Apr 30 00:22:21.614373 systemd[1]: Started session-28.scope - Session 28 of User core. 
Apr 30 00:22:21.651531 kubelet[2788]: E0430 00:22:21.650958 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:22:21.652306 containerd[1593]: time="2025-04-30T00:22:21.652243944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b9p88,Uid:83f7953d-5a1d-4433-91b8-2b4c7b1982b9,Namespace:kube-system,Attempt:0,}" Apr 30 00:22:21.683506 containerd[1593]: time="2025-04-30T00:22:21.683364551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:22:21.683506 containerd[1593]: time="2025-04-30T00:22:21.683439764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:22:21.683506 containerd[1593]: time="2025-04-30T00:22:21.683469626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:22:21.684259 containerd[1593]: time="2025-04-30T00:22:21.683613886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:22:21.755900 containerd[1593]: time="2025-04-30T00:22:21.754412659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b9p88,Uid:83f7953d-5a1d-4433-91b8-2b4c7b1982b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\"" Apr 30 00:22:21.759485 kubelet[2788]: E0430 00:22:21.759388 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:22:21.773196 containerd[1593]: time="2025-04-30T00:22:21.772245224Z" level=info msg="CreateContainer within sandbox \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:22:21.791715 containerd[1593]: time="2025-04-30T00:22:21.791646132Z" level=info msg="CreateContainer within sandbox \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"85fb873ca399ff854595a7edc6f349fae61a3cb784ba10d611d06ed3afb0bb90\"" Apr 30 00:22:21.795382 containerd[1593]: time="2025-04-30T00:22:21.793738935Z" level=info msg="StartContainer for \"85fb873ca399ff854595a7edc6f349fae61a3cb784ba10d611d06ed3afb0bb90\"" Apr 30 00:22:21.874636 containerd[1593]: time="2025-04-30T00:22:21.874190914Z" level=info msg="StartContainer for \"85fb873ca399ff854595a7edc6f349fae61a3cb784ba10d611d06ed3afb0bb90\" returns successfully" Apr 30 00:22:21.931302 containerd[1593]: time="2025-04-30T00:22:21.931097922Z" level=info msg="shim disconnected" id=85fb873ca399ff854595a7edc6f349fae61a3cb784ba10d611d06ed3afb0bb90 namespace=k8s.io Apr 30 00:22:21.931302 containerd[1593]: time="2025-04-30T00:22:21.931262233Z" level=warning msg="cleaning up after shim disconnected" 
id=85fb873ca399ff854595a7edc6f349fae61a3cb784ba10d611d06ed3afb0bb90 namespace=k8s.io Apr 30 00:22:21.931302 containerd[1593]: time="2025-04-30T00:22:21.931275594Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:22:21.952327 containerd[1593]: time="2025-04-30T00:22:21.951334824Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:22:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 00:22:22.185701 kubelet[2788]: E0430 00:22:22.185578 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:22:22.195211 containerd[1593]: time="2025-04-30T00:22:22.195081583Z" level=info msg="CreateContainer within sandbox \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:22:22.205617 containerd[1593]: time="2025-04-30T00:22:22.205546386Z" level=info msg="CreateContainer within sandbox \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"15dba4b2e471974890eac88289778f4f92a7ee539e0341fcb4d6bd411ebf05a0\"" Apr 30 00:22:22.207596 containerd[1593]: time="2025-04-30T00:22:22.207415797Z" level=info msg="StartContainer for \"15dba4b2e471974890eac88289778f4f92a7ee539e0341fcb4d6bd411ebf05a0\"" Apr 30 00:22:22.279120 containerd[1593]: time="2025-04-30T00:22:22.278927257Z" level=info msg="StartContainer for \"15dba4b2e471974890eac88289778f4f92a7ee539e0341fcb4d6bd411ebf05a0\" returns successfully" Apr 30 00:22:22.319687 containerd[1593]: time="2025-04-30T00:22:22.319610129Z" level=info msg="shim disconnected" id=15dba4b2e471974890eac88289778f4f92a7ee539e0341fcb4d6bd411ebf05a0 namespace=k8s.io Apr 30 
00:22:22.319687 containerd[1593]: time="2025-04-30T00:22:22.319682891Z" level=warning msg="cleaning up after shim disconnected" id=15dba4b2e471974890eac88289778f4f92a7ee539e0341fcb4d6bd411ebf05a0 namespace=k8s.io Apr 30 00:22:22.319687 containerd[1593]: time="2025-04-30T00:22:22.319695260Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:22:22.342666 containerd[1593]: time="2025-04-30T00:22:22.342242328Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:22:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 00:22:22.779063 kubelet[2788]: I0430 00:22:22.779005 2788 setters.go:580] "Node became not ready" node="ci-4152.2.3-1-91c0161c2f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T00:22:22Z","lastTransitionTime":"2025-04-30T00:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 00:22:23.191405 kubelet[2788]: E0430 00:22:23.190938 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:22:23.196092 containerd[1593]: time="2025-04-30T00:22:23.196042568Z" level=info msg="CreateContainer within sandbox \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:22:23.238413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693009997.mount: Deactivated successfully. 
Apr 30 00:22:23.242187 containerd[1593]: time="2025-04-30T00:22:23.242120360Z" level=info msg="CreateContainer within sandbox \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ccd4a449214b9454d41df895d213b7b04f8a7a7c7c5b7703823e8b0ada9732df\"" Apr 30 00:22:23.243539 containerd[1593]: time="2025-04-30T00:22:23.243472717Z" level=info msg="StartContainer for \"ccd4a449214b9454d41df895d213b7b04f8a7a7c7c5b7703823e8b0ada9732df\"" Apr 30 00:22:23.328436 containerd[1593]: time="2025-04-30T00:22:23.328370403Z" level=info msg="StartContainer for \"ccd4a449214b9454d41df895d213b7b04f8a7a7c7c5b7703823e8b0ada9732df\" returns successfully" Apr 30 00:22:23.372260 containerd[1593]: time="2025-04-30T00:22:23.372009312Z" level=info msg="shim disconnected" id=ccd4a449214b9454d41df895d213b7b04f8a7a7c7c5b7703823e8b0ada9732df namespace=k8s.io Apr 30 00:22:23.372260 containerd[1593]: time="2025-04-30T00:22:23.372079262Z" level=warning msg="cleaning up after shim disconnected" id=ccd4a449214b9454d41df895d213b7b04f8a7a7c7c5b7703823e8b0ada9732df namespace=k8s.io Apr 30 00:22:23.372260 containerd[1593]: time="2025-04-30T00:22:23.372089650Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:22:23.393772 containerd[1593]: time="2025-04-30T00:22:23.393699663Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:22:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 00:22:23.466054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccd4a449214b9454d41df895d213b7b04f8a7a7c7c5b7703823e8b0ada9732df-rootfs.mount: Deactivated successfully. 
Apr 30 00:22:24.196886 kubelet[2788]: E0430 00:22:24.196798 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Apr 30 00:22:24.203517 containerd[1593]: time="2025-04-30T00:22:24.203265052Z" level=info msg="CreateContainer within sandbox \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:22:24.221707 containerd[1593]: time="2025-04-30T00:22:24.221657918Z" level=info msg="CreateContainer within sandbox \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ebad03a1fa5b3ef8429babd7502180dfbbcc4e190628c6e2e34bc8858831ad2c\"" Apr 30 00:22:24.228917 containerd[1593]: time="2025-04-30T00:22:24.223988098Z" level=info msg="StartContainer for \"ebad03a1fa5b3ef8429babd7502180dfbbcc4e190628c6e2e34bc8858831ad2c\"" Apr 30 00:22:24.320211 containerd[1593]: time="2025-04-30T00:22:24.320161177Z" level=info msg="StartContainer for \"ebad03a1fa5b3ef8429babd7502180dfbbcc4e190628c6e2e34bc8858831ad2c\" returns successfully" Apr 30 00:22:24.352067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebad03a1fa5b3ef8429babd7502180dfbbcc4e190628c6e2e34bc8858831ad2c-rootfs.mount: Deactivated successfully. 
Apr 30 00:22:24.355276 containerd[1593]: time="2025-04-30T00:22:24.354518747Z" level=info msg="shim disconnected" id=ebad03a1fa5b3ef8429babd7502180dfbbcc4e190628c6e2e34bc8858831ad2c namespace=k8s.io
Apr 30 00:22:24.355276 containerd[1593]: time="2025-04-30T00:22:24.355255783Z" level=warning msg="cleaning up after shim disconnected" id=ebad03a1fa5b3ef8429babd7502180dfbbcc4e190628c6e2e34bc8858831ad2c namespace=k8s.io
Apr 30 00:22:24.355276 containerd[1593]: time="2025-04-30T00:22:24.355278821Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:22:25.205247 kubelet[2788]: E0430 00:22:25.204068 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 00:22:25.213025 containerd[1593]: time="2025-04-30T00:22:25.211813325Z" level=info msg="CreateContainer within sandbox \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 00:22:25.236499 containerd[1593]: time="2025-04-30T00:22:25.236437944Z" level=info msg="CreateContainer within sandbox \"f11fc0b2d156e13b97012bbbe5750bfbc23991033afdf7d199f62826050b3eb4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7c4d6af7224d1b829e3cd98e75ea30bc1c5e474611ccc0b6cf2070bc16d138e6\""
Apr 30 00:22:25.242002 containerd[1593]: time="2025-04-30T00:22:25.238068323Z" level=info msg="StartContainer for \"7c4d6af7224d1b829e3cd98e75ea30bc1c5e474611ccc0b6cf2070bc16d138e6\""
Apr 30 00:22:25.297137 systemd[1]: run-containerd-runc-k8s.io-7c4d6af7224d1b829e3cd98e75ea30bc1c5e474611ccc0b6cf2070bc16d138e6-runc.7v9Hk4.mount: Deactivated successfully.
Apr 30 00:22:25.340137 containerd[1593]: time="2025-04-30T00:22:25.339957963Z" level=info msg="StartContainer for \"7c4d6af7224d1b829e3cd98e75ea30bc1c5e474611ccc0b6cf2070bc16d138e6\" returns successfully"
Apr 30 00:22:25.723758 kubelet[2788]: E0430 00:22:25.723049 2788 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-6cp95" podUID="252fcfa8-97f4-4d04-95ff-c628c672343d"
Apr 30 00:22:25.887887 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 00:22:26.213135 kubelet[2788]: E0430 00:22:26.211258 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 00:22:27.654827 kubelet[2788]: E0430 00:22:27.653071 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 00:22:27.722881 kubelet[2788]: E0430 00:22:27.722718 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 00:22:29.352754 systemd-networkd[1223]: lxc_health: Link UP
Apr 30 00:22:29.371288 systemd-networkd[1223]: lxc_health: Gained carrier
Apr 30 00:22:29.658001 kubelet[2788]: E0430 00:22:29.654632 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 00:22:29.694905 kubelet[2788]: I0430 00:22:29.693735 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b9p88" podStartSLOduration=8.693709602 podStartE2EDuration="8.693709602s" podCreationTimestamp="2025-04-30 00:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:22:26.236494425 +0000 UTC m=+105.688741994" watchObservedRunningTime="2025-04-30 00:22:29.693709602 +0000 UTC m=+109.145957171"
Apr 30 00:22:30.241030 kubelet[2788]: E0430 00:22:30.240886 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 00:22:30.398047 systemd-networkd[1223]: lxc_health: Gained IPv6LL
Apr 30 00:22:31.240878 kubelet[2788]: E0430 00:22:31.239515 2788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Apr 30 00:22:35.117403 sshd[4621]: Connection closed by 147.75.109.163 port 35192
Apr 30 00:22:35.118418 sshd-session[4617]: pam_unix(sshd:session): session closed for user core
Apr 30 00:22:35.125152 systemd[1]: sshd@27-146.190.146.79:22-147.75.109.163:35192.service: Deactivated successfully.
Apr 30 00:22:35.138100 systemd-logind[1569]: Session 28 logged out. Waiting for processes to exit.
Apr 30 00:22:35.139950 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 00:22:35.141634 systemd-logind[1569]: Removed session 28.