Feb 13 16:13:50.992641 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025 Feb 13 16:13:50.992682 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 16:13:50.992721 kernel: BIOS-provided physical RAM map: Feb 13 16:13:50.992733 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 16:13:50.992743 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 16:13:50.992754 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 16:13:50.992768 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Feb 13 16:13:50.992779 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Feb 13 16:13:50.992790 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 16:13:50.992805 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 16:13:50.992816 kernel: NX (Execute Disable) protection: active Feb 13 16:13:50.992827 kernel: APIC: Static calls initialized Feb 13 16:13:50.992844 kernel: SMBIOS 2.8 present. Feb 13 16:13:50.992856 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Feb 13 16:13:50.992870 kernel: Hypervisor detected: KVM Feb 13 16:13:50.992886 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 16:13:50.992903 kernel: kvm-clock: using sched offset of 3200382038 cycles Feb 13 16:13:50.992916 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 16:13:50.992946 kernel: tsc: Detected 1999.997 MHz processor Feb 13 16:13:50.992958 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 16:13:50.992971 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 16:13:50.992984 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Feb 13 16:13:50.992996 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 16:13:50.993009 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 16:13:50.993026 kernel: ACPI: Early table checksum verification disabled Feb 13 16:13:50.993038 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Feb 13 16:13:50.993051 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 16:13:50.993063 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 16:13:50.993076 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 16:13:50.993088 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 13 16:13:50.993100 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 16:13:50.993112 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 16:13:50.993125 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 16:13:50.993140 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 16:13:50.993153 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Feb 13 16:13:50.993165 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Feb 13 16:13:50.993177 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 13 16:13:50.993189 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Feb 13 16:13:50.993201 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Feb 13 16:13:50.993214 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Feb 13 16:13:50.993232 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Feb 13 16:13:50.993248 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 16:13:50.993261 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 16:13:50.993274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 16:13:50.993287 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 13 16:13:50.993306 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Feb 13 16:13:50.993319 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Feb 13 16:13:50.993335 kernel: Zone ranges: Feb 13 16:13:50.993360 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 16:13:50.993373 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Feb 13 16:13:50.993386 kernel: Normal empty Feb 13 16:13:50.993399 kernel: Movable zone start for each node Feb 13 16:13:50.993412 kernel: Early memory node ranges Feb 13 16:13:50.993425 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 16:13:50.993438 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Feb 13 16:13:50.993451 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Feb 13 16:13:50.993468 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 16:13:50.993481 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 16:13:50.993498 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Feb 13 16:13:50.993510 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 16:13:50.993524 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 16:13:50.993537 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 16:13:50.993550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 16:13:50.993564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 16:13:50.993577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 16:13:50.993590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 16:13:50.993607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 16:13:50.993620 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 16:13:50.993633 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 16:13:50.993646 kernel: TSC deadline timer available Feb 13 16:13:50.993659 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 16:13:50.993672 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 16:13:50.993685 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 13 16:13:50.993716 kernel: Booting paravirtualized kernel on KVM Feb 13 16:13:50.993729 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 16:13:50.993752 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 16:13:50.993766 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Feb 13 16:13:50.993779 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 16:13:50.993791 kernel: pcpu-alloc: [0] 0 1 Feb 13 16:13:50.993805 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 13 16:13:50.993819 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 16:13:50.993833 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 16:13:50.993846 kernel: random: crng init done Feb 13 16:13:50.993863 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 16:13:50.993877 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 16:13:50.993890 kernel: Fallback order for Node 0: 0 Feb 13 16:13:50.993904 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Feb 13 16:13:50.993917 kernel: Policy zone: DMA32 Feb 13 16:13:50.993930 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 16:13:50.993945 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125148K reserved, 0K cma-reserved) Feb 13 16:13:50.993958 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 16:13:50.993971 kernel: Kernel/User page tables isolation: enabled Feb 13 16:13:50.993987 kernel: ftrace: allocating 37920 entries in 149 pages Feb 13 16:13:50.994000 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 16:13:50.994013 kernel: Dynamic Preempt: voluntary Feb 13 16:13:50.994026 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 16:13:50.994041 kernel: rcu: RCU event tracing is enabled. Feb 13 16:13:50.994054 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 16:13:50.994067 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 16:13:50.994081 kernel: Rude variant of Tasks RCU enabled. Feb 13 16:13:50.994094 kernel: Tracing variant of Tasks RCU enabled. Feb 13 16:13:50.994110 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 16:13:50.994124 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 16:13:50.994136 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 16:13:50.994149 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 16:13:50.994167 kernel: Console: colour VGA+ 80x25 Feb 13 16:13:50.994180 kernel: printk: console [tty0] enabled Feb 13 16:13:50.994193 kernel: printk: console [ttyS0] enabled Feb 13 16:13:50.994215 kernel: ACPI: Core revision 20230628 Feb 13 16:13:50.994228 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 16:13:50.994245 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 16:13:50.994258 kernel: x2apic enabled Feb 13 16:13:50.994271 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 16:13:50.994284 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 16:13:50.994297 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns Feb 13 16:13:50.994311 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997) Feb 13 16:13:50.994324 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 13 16:13:50.994338 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 13 16:13:50.994365 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 16:13:50.994379 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 16:13:50.994394 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 16:13:50.994408 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 16:13:50.994425 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Feb 13 16:13:50.994439 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 16:13:50.994453 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 16:13:50.994467 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 16:13:50.994479 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 16:13:50.994501 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 16:13:50.994515 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 16:13:50.994529 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 16:13:50.994543 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 16:13:50.994558 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 16:13:50.994571 kernel: Freeing SMP alternatives memory: 32K Feb 13 16:13:50.994584 kernel: pid_max: default: 32768 minimum: 301 Feb 13 16:13:50.994597 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 16:13:50.994614 kernel: landlock: Up and running. Feb 13 16:13:50.994629 kernel: SELinux: Initializing. Feb 13 16:13:50.994643 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 16:13:50.994657 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 16:13:50.994671 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Feb 13 16:13:50.994686 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:13:50.997874 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:13:50.997900 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:13:50.997916 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Feb 13 16:13:50.997939 kernel: signal: max sigframe size: 1776 Feb 13 16:13:50.997953 kernel: rcu: Hierarchical SRCU implementation. Feb 13 16:13:50.997969 kernel: rcu: Max phase no-delay instances is 400. Feb 13 16:13:50.997984 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 16:13:50.997998 kernel: smp: Bringing up secondary CPUs ... Feb 13 16:13:50.998013 kernel: smpboot: x86: Booting SMP configuration: Feb 13 16:13:50.998027 kernel: .... node #0, CPUs: #1 Feb 13 16:13:50.998041 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 16:13:50.998062 kernel: smpboot: Max logical packages: 1 Feb 13 16:13:50.998080 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS) Feb 13 16:13:50.998094 kernel: devtmpfs: initialized Feb 13 16:13:50.998108 kernel: x86/mm: Memory block size: 128MB Feb 13 16:13:50.998122 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 16:13:50.998137 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 16:13:50.998152 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 16:13:50.998166 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 16:13:50.998190 kernel: audit: initializing netlink subsys (disabled) Feb 13 16:13:50.998205 kernel: audit: type=2000 audit(1739463230.018:1): state=initialized audit_enabled=0 res=1 Feb 13 16:13:50.998229 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 16:13:50.998244 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 16:13:50.998258 kernel: cpuidle: using governor menu Feb 13 16:13:50.998272 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 16:13:50.998286 kernel: dca service started, version 1.12.1 Feb 13 16:13:50.998301 kernel: PCI: Using configuration type 1 for base access Feb 13 16:13:50.998315 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 16:13:50.998329 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 16:13:50.998344 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 16:13:50.998361 kernel: ACPI: Added _OSI(Module Device) Feb 13 16:13:50.998376 kernel: ACPI: Added _OSI(Processor Device) Feb 13 16:13:50.998391 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 16:13:50.998405 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 16:13:50.998420 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 16:13:50.998433 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 16:13:50.998447 kernel: ACPI: Interpreter enabled Feb 13 16:13:50.998462 kernel: ACPI: PM: (supports S0 S5) Feb 13 16:13:50.998476 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 16:13:50.998494 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 16:13:50.998508 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 16:13:50.998522 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 13 16:13:50.998537 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 16:13:50.998860 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 16:13:50.999016 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 16:13:50.999153 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 16:13:50.999177 kernel: acpiphp: Slot [3] registered Feb 13 16:13:50.999191 kernel: acpiphp: Slot [4] registered Feb 13 16:13:50.999205 kernel: acpiphp: Slot [5] registered Feb 13 16:13:50.999219 kernel: acpiphp: Slot [6] registered Feb 13 16:13:50.999234 kernel: acpiphp: Slot [7] registered Feb 13 16:13:50.999247 kernel: acpiphp: Slot [8] registered Feb 13 16:13:50.999261 kernel: acpiphp: Slot [9] registered Feb 13 16:13:50.999275 kernel: acpiphp: Slot [10] registered Feb 13 16:13:50.999289 kernel: acpiphp: Slot [11] registered Feb 13 16:13:50.999303 kernel: acpiphp: Slot [12] registered Feb 13 16:13:50.999321 kernel: acpiphp: Slot [13] registered Feb 13 16:13:50.999335 kernel: acpiphp: Slot [14] registered Feb 13 16:13:50.999350 kernel: acpiphp: Slot [15] registered Feb 13 16:13:50.999363 kernel: acpiphp: Slot [16] registered Feb 13 16:13:50.999377 kernel: acpiphp: Slot [17] registered Feb 13 16:13:50.999391 kernel: acpiphp: Slot [18] registered Feb 13 16:13:50.999405 kernel: acpiphp: Slot [19] registered Feb 13 16:13:50.999419 kernel: acpiphp: Slot [20] registered Feb 13 16:13:50.999433 kernel: acpiphp: Slot [21] registered Feb 13 16:13:50.999450 kernel: acpiphp: Slot [22] registered Feb 13 16:13:50.999464 kernel: acpiphp: Slot [23] registered Feb 13 16:13:50.999478 kernel: acpiphp: Slot [24] registered Feb 13 16:13:50.999492 kernel: acpiphp: Slot [25] registered Feb 13 16:13:50.999506 kernel: acpiphp: Slot [26] registered Feb 13 16:13:50.999519 kernel: acpiphp: Slot [27] registered Feb 13 16:13:50.999532 kernel: acpiphp: Slot [28] registered Feb 13 16:13:50.999545 kernel: acpiphp: Slot [29] registered Feb 13 16:13:50.999559 kernel: acpiphp: Slot [30] registered Feb 13 16:13:50.999572 kernel: acpiphp: Slot [31] registered Feb 13 16:13:50.999590 kernel: PCI host bridge to bus 0000:00 Feb 13 16:13:51.000859 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 16:13:51.001003 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Feb 13 16:13:51.001127 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 16:13:51.001253 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 13 16:13:51.001391 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 13 16:13:51.001519 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 16:13:51.001724 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 16:13:51.003364 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 13 16:13:51.003520 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 13 16:13:51.003656 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Feb 13 16:13:51.003838 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 13 16:13:51.003983 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 13 16:13:51.004137 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 13 16:13:51.004281 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 13 16:13:51.004432 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Feb 13 16:13:51.004574 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Feb 13 16:13:51.006318 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 16:13:51.006488 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 13 16:13:51.006642 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 13 16:13:51.006841 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 13 16:13:51.006982 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 13 16:13:51.007122 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 13 16:13:51.007258 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Feb 13 16:13:51.007407 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 13 16:13:51.007545 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 16:13:51.007808 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 13 16:13:51.007954 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Feb 13 16:13:51.008090 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Feb 13 16:13:51.008224 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 13 16:13:51.008399 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 16:13:51.008534 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Feb 13 16:13:51.008669 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Feb 13 16:13:51.008837 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 13 16:13:51.009032 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Feb 13 16:13:51.009172 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Feb 13 16:13:51.009311 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Feb 13 16:13:51.009508 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 13 16:13:51.009810 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Feb 13 16:13:51.009963 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 16:13:51.010134 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Feb 13 16:13:51.010273 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 13 16:13:51.010451 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Feb 13 16:13:51.010613 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Feb 13 16:13:51.010818 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Feb 13 16:13:51.010961 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Feb 13 16:13:51.011119 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Feb 13 16:13:51.011289 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Feb 13 16:13:51.011453 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Feb 13 16:13:51.011472 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 16:13:51.011487 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 16:13:51.011501 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 16:13:51.011515 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 16:13:51.011530 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 16:13:51.011549 kernel: iommu: Default domain type: Translated Feb 13 16:13:51.011563 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 16:13:51.011577 kernel: PCI: Using ACPI for IRQ routing Feb 13 16:13:51.011591 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 16:13:51.011605 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 16:13:51.011620 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Feb 13 16:13:51.011846 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 13 16:13:51.011987 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 13 16:13:51.012134 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 16:13:51.012151 kernel: vgaarb: loaded Feb 13 16:13:51.012167 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 16:13:51.012181 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 16:13:51.012195 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 16:13:51.012208 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 16:13:51.012222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 16:13:51.012236 kernel: pnp: PnP ACPI init Feb 13 16:13:51.012250 kernel: pnp: PnP ACPI: found 4 devices Feb 13 16:13:51.012269 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 16:13:51.012283 kernel: NET: Registered PF_INET protocol family Feb 13 16:13:51.012298 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 16:13:51.012312 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 16:13:51.012327 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 16:13:51.012341 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 16:13:51.012356 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 16:13:51.012370 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 16:13:51.012384 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 16:13:51.012402 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 16:13:51.012416 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 16:13:51.012431 kernel: NET: Registered PF_XDP protocol family Feb 13 16:13:51.012569 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 16:13:51.012709 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 
16:13:51.012844 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 16:13:51.012969 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 13 16:13:51.013094 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 13 16:13:51.013240 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 13 16:13:51.013406 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 16:13:51.013426 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 13 16:13:51.013568 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 35554 usecs Feb 13 16:13:51.013587 kernel: PCI: CLS 0 bytes, default 64 Feb 13 16:13:51.013602 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 16:13:51.013616 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns Feb 13 16:13:51.013630 kernel: Initialise system trusted keyrings Feb 13 16:13:51.013645 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 16:13:51.013663 kernel: Key type asymmetric registered Feb 13 16:13:51.013677 kernel: Asymmetric key parser 'x509' registered Feb 13 16:13:51.013692 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 16:13:51.015754 kernel: io scheduler mq-deadline registered Feb 13 16:13:51.015773 kernel: io scheduler kyber registered Feb 13 16:13:51.015790 kernel: io scheduler bfq registered Feb 13 16:13:51.015804 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 16:13:51.015821 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 13 16:13:51.015835 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 16:13:51.015855 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 16:13:51.015869 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 16:13:51.015884 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 16:13:51.015898 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 16:13:51.015912 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 16:13:51.015927 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 16:13:51.016149 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 13 16:13:51.016283 kernel: rtc_cmos 00:03: registered as rtc0 Feb 13 16:13:51.016416 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T16:13:50 UTC (1739463230) Feb 13 16:13:51.016540 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 13 16:13:51.016557 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Feb 13 16:13:51.016572 kernel: intel_pstate: CPU model not supported Feb 13 16:13:51.016586 kernel: NET: Registered PF_INET6 protocol family Feb 13 16:13:51.016600 kernel: Segment Routing with IPv6 Feb 13 16:13:51.016614 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 16:13:51.016629 kernel: NET: Registered PF_PACKET protocol family Feb 13 16:13:51.016643 kernel: Key type dns_resolver registered Feb 13 16:13:51.016661 kernel: IPI shorthand broadcast: enabled Feb 13 16:13:51.016676 kernel: sched_clock: Marking stable (960004704, 128623453)->(1201332034, -112703877) Feb 13 16:13:51.016690 kernel: registered taskstats version 1 Feb 13 16:13:51.018753 kernel: Loading compiled-in X.509 certificates Feb 13 16:13:51.018766 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0' Feb 13 16:13:51.018776 kernel: Key type .fscrypt registered 
Feb 13 16:13:51.018784 kernel: Key type fscrypt-provisioning registered Feb 13 16:13:51.018793 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 16:13:51.018807 kernel: ima: Allocated hash algorithm: sha1 Feb 13 16:13:51.018816 kernel: ima: No architecture policies found Feb 13 16:13:51.018824 kernel: clk: Disabling unused clocks Feb 13 16:13:51.018832 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 16:13:51.018841 kernel: Write protecting the kernel read-only data: 36864k Feb 13 16:13:51.018866 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 16:13:51.018877 kernel: Run /init as init process Feb 13 16:13:51.018885 kernel: with arguments: Feb 13 16:13:51.018894 kernel: /init Feb 13 16:13:51.018905 kernel: with environment: Feb 13 16:13:51.018913 kernel: HOME=/ Feb 13 16:13:51.018921 kernel: TERM=linux Feb 13 16:13:51.018930 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 16:13:51.018943 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:13:51.018955 systemd[1]: Detected virtualization kvm. Feb 13 16:13:51.018964 systemd[1]: Detected architecture x86-64. Feb 13 16:13:51.018972 systemd[1]: Running in initrd. Feb 13 16:13:51.018983 systemd[1]: No hostname configured, using default hostname. Feb 13 16:13:51.018992 systemd[1]: Hostname set to . Feb 13 16:13:51.019001 systemd[1]: Initializing machine ID from VM UUID. Feb 13 16:13:51.019010 systemd[1]: Queued start job for default target initrd.target. Feb 13 16:13:51.019019 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:13:51.019028 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:13:51.019038 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 16:13:51.019049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:13:51.019060 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 16:13:51.019070 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 16:13:51.019080 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 16:13:51.019089 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 16:13:51.019098 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:13:51.019107 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:13:51.019116 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:13:51.019127 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:13:51.019136 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:13:51.019147 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:13:51.019156 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:13:51.019165 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Feb 13 16:13:51.019176 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 16:13:51.019185 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 16:13:51.019194 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:13:51.019202 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:13:51.019211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:13:51.019220 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:13:51.019229 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 16:13:51.019237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:13:51.019246 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 16:13:51.019257 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 16:13:51.019266 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:13:51.019275 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:13:51.019327 systemd-journald[183]: Collecting audit messages is disabled. Feb 13 16:13:51.019352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:13:51.019361 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 16:13:51.019370 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:13:51.019379 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 16:13:51.019400 systemd-journald[183]: Journal started Feb 13 16:13:51.019423 systemd-journald[183]: Runtime Journal (/run/log/journal/e7efc229edc84256842dde87a27c54db) is 4.9M, max 39.3M, 34.4M free. Feb 13 16:13:51.021744 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:13:51.023738 systemd-modules-load[184]: Inserted module 'overlay' Feb 13 16:13:51.062558 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 16:13:51.062589 kernel: Bridge firewalling registered Feb 13 16:13:51.060298 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 13 16:13:51.069760 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:13:51.070502 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:13:51.076717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:13:51.077585 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:13:51.085882 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:13:51.087868 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:13:51.090923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:13:51.093774 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:13:51.111078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:13:51.111985 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:13:51.117754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 16:13:51.126977 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 16:13:51.129329 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:13:51.133910 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:13:51.140722 dracut-cmdline[216]: dracut-dracut-053 Feb 13 16:13:51.141520 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 16:13:51.178359 systemd-resolved[222]: Positive Trust Anchors: Feb 13 16:13:51.178379 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:13:51.178422 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:13:51.181252 systemd-resolved[222]: Defaulting to hostname 'linux'. Feb 13 16:13:51.182940 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:13:51.183983 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:13:51.236783 kernel: SCSI subsystem initialized Feb 13 16:13:51.248736 kernel: Loading iSCSI transport class v2.0-870. Feb 13 16:13:51.262737 kernel: iscsi: registered transport (tcp) Feb 13 16:13:51.286840 kernel: iscsi: registered transport (qla4xxx) Feb 13 16:13:51.286949 kernel: QLogic iSCSI HBA Driver Feb 13 16:13:51.336684 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 16:13:51.342965 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 16:13:51.384741 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 16:13:51.387452 kernel: device-mapper: uevent: version 1.0.3 Feb 13 16:13:51.387535 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 16:13:51.433819 kernel: raid6: avx2x4 gen() 28502 MB/s Feb 13 16:13:51.450799 kernel: raid6: avx2x2 gen() 27379 MB/s Feb 13 16:13:51.469124 kernel: raid6: avx2x1 gen() 22548 MB/s Feb 13 16:13:51.469237 kernel: raid6: using algorithm avx2x4 gen() 28502 MB/s Feb 13 16:13:51.486984 kernel: raid6: .... xor() 9747 MB/s, rmw enabled Feb 13 16:13:51.487075 kernel: raid6: using avx2x2 recovery algorithm Feb 13 16:13:51.512754 kernel: xor: automatically using best checksumming function avx Feb 13 16:13:51.684748 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 16:13:51.697914 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:13:51.703929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 16:13:51.720117 systemd-udevd[401]: Using default interface naming scheme 'v255'. Feb 13 16:13:51.724596 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:13:51.734049 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 16:13:51.749019 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Feb 13 16:13:51.784830 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:13:51.792004 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:13:51.849106 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:13:51.856904 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 16:13:51.878953 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 16:13:51.881240 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:13:51.882088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:13:51.884389 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:13:51.890871 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 16:13:51.916035 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:13:51.932744 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Feb 13 16:13:52.013109 kernel: scsi host0: Virtio SCSI HBA Feb 13 16:13:52.013393 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 13 16:13:52.013581 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 16:13:52.013602 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 16:13:52.013621 kernel: GPT:9289727 != 125829119 Feb 13 16:13:52.013640 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 16:13:52.013660 kernel: GPT:9289727 != 125829119 Feb 13 16:13:52.013684 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 16:13:52.013724 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 16:13:52.013744 kernel: libata version 3.00 loaded. Feb 13 16:13:52.013762 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 13 16:13:52.013963 kernel: scsi host1: ata_piix Feb 13 16:13:52.014142 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 16:13:52.014160 kernel: AES CTR mode by8 optimization enabled Feb 13 16:13:52.014176 kernel: scsi host2: ata_piix Feb 13 16:13:52.014353 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Feb 13 16:13:52.014373 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Feb 13 16:13:52.014391 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Feb 13 16:13:52.027429 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Feb 13 16:13:52.000849 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:13:52.000963 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:13:52.042587 kernel: ACPI: bus type USB registered Feb 13 16:13:52.042614 kernel: usbcore: registered new interface driver usbfs Feb 13 16:13:52.042626 kernel: usbcore: registered new interface driver hub Feb 13 16:13:52.042637 kernel: usbcore: registered new device driver usb Feb 13 16:13:52.002818 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 16:13:52.005245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:13:52.005487 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:13:52.006190 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:13:52.024083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:13:52.102122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:13:52.108937 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:13:52.139783 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:13:52.196737 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (456) Feb 13 16:13:52.207717 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (459) Feb 13 16:13:52.208796 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 16:13:52.215935 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Feb 13 16:13:52.219609 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Feb 13 16:13:52.219822 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Feb 13 16:13:52.219939 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Feb 13 16:13:52.220060 kernel: hub 1-0:1.0: USB hub found Feb 13 16:13:52.220258 kernel: hub 1-0:1.0: 2 ports detected Feb 13 16:13:52.217546 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 16:13:52.231661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 16:13:52.235767 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 16:13:52.236409 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 16:13:52.252089 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 16:13:52.259744 disk-uuid[549]: Primary Header is updated. Feb 13 16:13:52.259744 disk-uuid[549]: Secondary Entries is updated. Feb 13 16:13:52.259744 disk-uuid[549]: Secondary Header is updated. Feb 13 16:13:52.264727 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 16:13:52.268766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 16:13:53.270733 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 16:13:53.272354 disk-uuid[550]: The operation has completed successfully. Feb 13 16:13:53.306558 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 16:13:53.306675 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 16:13:53.325946 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 16:13:53.329164 sh[561]: Success Feb 13 16:13:53.344739 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 16:13:53.410414 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 16:13:53.411993 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 16:13:53.418870 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Feb 13 16:13:53.443772 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 16:13:53.443866 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:13:53.443887 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 16:13:53.444590 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 16:13:53.446783 kernel: BTRFS info (device dm-0): using free space tree Feb 13 16:13:53.453550 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 16:13:53.455101 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 16:13:53.462939 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 16:13:53.466937 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 16:13:53.477003 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 16:13:53.477072 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:13:53.477084 kernel: BTRFS info (device vda6): using free space tree Feb 13 16:13:53.481746 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 16:13:53.491822 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 16:13:53.494864 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 16:13:53.499900 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 16:13:53.508171 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 16:13:53.631823 ignition[646]: Ignition 2.20.0 Feb 13 16:13:53.631838 ignition[646]: Stage: fetch-offline Feb 13 16:13:53.631894 ignition[646]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:13:53.633748 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:13:53.631905 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:13:53.632011 ignition[646]: parsed url from cmdline: "" Feb 13 16:13:53.632015 ignition[646]: no config URL provided Feb 13 16:13:53.632021 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 16:13:53.632029 ignition[646]: no config at "/usr/lib/ignition/user.ign" Feb 13 16:13:53.632035 ignition[646]: failed to fetch config: resource requires networking Feb 13 16:13:53.632252 ignition[646]: Ignition finished successfully Feb 13 16:13:53.654597 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:13:53.662026 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:13:53.685246 systemd-networkd[750]: lo: Link UP Feb 13 16:13:53.685263 systemd-networkd[750]: lo: Gained carrier Feb 13 16:13:53.687891 systemd-networkd[750]: Enumeration completed Feb 13 16:13:53.688321 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:13:53.688398 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Feb 13 16:13:53.688404 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Feb 13 16:13:53.689376 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:13:53.689380 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:13:53.689415 systemd[1]: Reached target network.target - Network. Feb 13 16:13:53.690461 systemd-networkd[750]: eth0: Link UP Feb 13 16:13:53.690467 systemd-networkd[750]: eth0: Gained carrier Feb 13 16:13:53.690478 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Feb 13 16:13:53.695424 systemd-networkd[750]: eth1: Link UP Feb 13 16:13:53.695429 systemd-networkd[750]: eth1: Gained carrier Feb 13 16:13:53.695442 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:13:53.696069 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 16:13:53.709801 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.17/20 acquired from 169.254.169.253 Feb 13 16:13:53.714803 systemd-networkd[750]: eth0: DHCPv4 address 24.199.97.58/20, gateway 24.199.96.1 acquired from 169.254.169.253 Feb 13 16:13:53.727206 ignition[752]: Ignition 2.20.0 Feb 13 16:13:53.727219 ignition[752]: Stage: fetch Feb 13 16:13:53.727420 ignition[752]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:13:53.727432 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:13:53.727595 ignition[752]: parsed url from cmdline: "" Feb 13 16:13:53.727599 ignition[752]: no config URL provided Feb 13 16:13:53.727605 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 16:13:53.727616 ignition[752]: no config at "/usr/lib/ignition/user.ign" Feb 13 16:13:53.727639 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Feb 13 16:13:53.741889 ignition[752]: GET result: OK Feb 13 16:13:53.742022 ignition[752]: parsing config with SHA512: 093647f7c42b6cabe63069083c8a12d806d977c7a7c1fbff960e64596144aee44b1d5fc26e58dfffea4daf98a3110e36914b5727de1ee8455ee0fba394c1d4b1 Feb 13 16:13:53.747920 unknown[752]: fetched base config from "system" Feb 13 16:13:53.747938 unknown[752]: fetched base config from "system" Feb 13 16:13:53.747946 unknown[752]: fetched user config from "digitalocean" Feb 13 16:13:53.751866 ignition[752]: fetch: fetch complete Feb 13 16:13:53.751885 ignition[752]: fetch: fetch passed Feb 13 16:13:53.751994 ignition[752]: Ignition finished successfully Feb 13 16:13:53.755109 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 16:13:53.761914 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 16:13:53.777984 ignition[759]: Ignition 2.20.0 Feb 13 16:13:53.777996 ignition[759]: Stage: kargs Feb 13 16:13:53.778181 ignition[759]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:13:53.780106 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 16:13:53.778192 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:13:53.779081 ignition[759]: kargs: kargs passed Feb 13 16:13:53.779135 ignition[759]: Ignition finished successfully Feb 13 16:13:53.785882 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 16:13:53.801022 ignition[766]: Ignition 2.20.0 Feb 13 16:13:53.801035 ignition[766]: Stage: disks Feb 13 16:13:53.801225 ignition[766]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:13:53.801235 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:13:53.803123 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 16:13:53.802139 ignition[766]: disks: disks passed Feb 13 16:13:53.809219 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 16:13:53.802190 ignition[766]: Ignition finished successfully Feb 13 16:13:53.810414 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 16:13:53.811331 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:13:53.812481 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:13:53.813427 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:13:53.819910 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 16:13:53.836692 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 16:13:53.840095 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 16:13:53.846096 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 16:13:53.957710 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 16:13:53.958021 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 16:13:53.959163 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 16:13:53.971918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:13:53.974740 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 16:13:53.978043 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Feb 13 16:13:53.987725 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (783) Feb 13 16:13:53.987784 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 16:13:53.998278 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 16:13:53.998307 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:13:53.998319 kernel: BTRFS info (device vda6): using free space tree Feb 13 16:13:53.990171 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 16:13:53.990206 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:13:53.995386 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 16:13:54.007793 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 16:13:54.009922 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 16:13:54.012508 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 16:13:54.076014 coreos-metadata[785]: Feb 13 16:13:54.075 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 16:13:54.085290 coreos-metadata[786]: Feb 13 16:13:54.084 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 16:13:54.086631 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 16:13:54.089090 coreos-metadata[785]: Feb 13 16:13:54.088 INFO Fetch successful Feb 13 16:13:54.093618 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Feb 13 16:13:54.098013 coreos-metadata[786]: Feb 13 16:13:54.096 INFO Fetch successful Feb 13 16:13:54.100021 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Feb 13 16:13:54.100132 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Feb 13 16:13:54.105028 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 16:13:54.106233 coreos-metadata[786]: Feb 13 16:13:54.105 INFO wrote hostname ci-4152.2.1-f-cf79e5d115 to /sysroot/etc/hostname Feb 13 16:13:54.107528 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 16:13:54.114832 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 16:13:54.219495 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 16:13:54.225879 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 16:13:54.229881 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 16:13:54.242767 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 16:13:54.270117 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 16:13:54.273660 ignition[908]: INFO : Ignition 2.20.0 Feb 13 16:13:54.273660 ignition[908]: INFO : Stage: mount Feb 13 16:13:54.275412 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:13:54.275412 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:13:54.275412 ignition[908]: INFO : mount: mount passed Feb 13 16:13:54.275412 ignition[908]: INFO : Ignition finished successfully Feb 13 16:13:54.276597 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 16:13:54.289925 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 16:13:54.441051 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 16:13:54.446989 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:13:54.466732 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (919) Feb 13 16:13:54.470815 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 16:13:54.470876 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 16:13:54.470888 kernel: BTRFS info (device vda6): using free space tree Feb 13 16:13:54.474725 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 16:13:54.476346 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 16:13:54.504585 ignition[935]: INFO : Ignition 2.20.0 Feb 13 16:13:54.505467 ignition[935]: INFO : Stage: files Feb 13 16:13:54.506217 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:13:54.507823 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:13:54.508928 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Feb 13 16:13:54.509686 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 16:13:54.509686 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 16:13:54.513478 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 16:13:54.514560 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 16:13:54.514560 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 16:13:54.514546 unknown[935]: wrote ssh authorized keys file for user: core Feb 13 16:13:54.517661 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 16:13:54.517661 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 16:13:54.616167 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 16:13:54.859041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 16:13:54.859041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 16:13:54.859041 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 16:13:55.263006 systemd-networkd[750]: eth0: Gained IPv6LL Feb 13 16:13:55.357295 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 16:13:55.391480 systemd-networkd[750]: eth1: Gained IPv6LL Feb 13 16:13:55.458339 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 16:13:55.458339 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 16:13:55.460851 ignition[935]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:13:55.460851 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 16:13:55.885892 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 16:13:56.168335 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 16:13:56.168335 ignition[935]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 16:13:56.171360 ignition[935]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 16:13:56.171360 ignition[935]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 16:13:56.171360 ignition[935]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 16:13:56.171360 ignition[935]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 16:13:56.176252 ignition[935]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 16:13:56.176252 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:13:56.176252 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:13:56.176252 ignition[935]: INFO : files: files passed Feb 13 16:13:56.176252 ignition[935]: INFO : Ignition finished successfully Feb 13 16:13:56.173006 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 16:13:56.189048 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 16:13:56.192644 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 16:13:56.196664 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 16:13:56.197867 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 16:13:56.216413 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:13:56.216413 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:13:56.220451 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:13:56.223673 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:13:56.226239 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 16:13:56.232052 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 16:13:56.279960 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 16:13:56.280202 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 16:13:56.283340 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 16:13:56.284265 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 16:13:56.285963 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 16:13:56.297027 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 16:13:56.317361 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:13:56.323962 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 16:13:56.346493 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:13:56.348122 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:13:56.348783 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 16:13:56.349340 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 16:13:56.349524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:13:56.351035 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 16:13:56.351824 systemd[1]: Stopped target basic.target - Basic System. Feb 13 16:13:56.352880 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 16:13:56.354106 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:13:56.355467 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 16:13:56.356770 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 16:13:56.358093 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:13:56.359539 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 16:13:56.360852 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 16:13:56.362175 systemd[1]: Stopped target swap.target - Swaps. Feb 13 16:13:56.363437 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 16:13:56.363630 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:13:56.365418 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:13:56.366209 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:13:56.367616 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 16:13:56.367803 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 13 16:13:56.368875 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 16:13:56.369031 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 16:13:56.370802 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 16:13:56.371006 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:13:56.372650 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 16:13:56.372792 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 16:13:56.374275 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 16:13:56.374410 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 16:13:56.384442 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 16:13:56.385075 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 16:13:56.385268 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:13:56.387948 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 16:13:56.388762 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 16:13:56.389894 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:13:56.391670 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 16:13:56.391961 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:13:56.404099 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 16:13:56.404218 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 16:13:56.412568 ignition[988]: INFO : Ignition 2.20.0 Feb 13 16:13:56.412568 ignition[988]: INFO : Stage: umount Feb 13 16:13:56.412568 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:13:56.412568 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 16:13:56.424732 ignition[988]: INFO : umount: umount passed Feb 13 16:13:56.424732 ignition[988]: INFO : Ignition finished successfully Feb 13 16:13:56.420150 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 16:13:56.420305 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 16:13:56.424761 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 16:13:56.424887 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 16:13:56.426073 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 16:13:56.426134 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 16:13:56.427103 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 16:13:56.427150 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 16:13:56.429638 systemd[1]: Stopped target network.target - Network. Feb 13 16:13:56.459302 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 16:13:56.459422 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:13:56.460155 systemd[1]: Stopped target paths.target - Path Units. Feb 13 16:13:56.476103 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 16:13:56.479811 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:13:56.480658 systemd[1]: Stopped target slices.target - Slice Units. 
Feb 13 16:13:56.482022 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 16:13:56.483161 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 16:13:56.483238 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:13:56.484357 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 16:13:56.484404 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:13:56.502077 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 16:13:56.502200 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 16:13:56.503593 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 16:13:56.503684 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 16:13:56.505390 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 16:13:56.506869 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 16:13:56.529507 systemd-networkd[750]: eth0: DHCPv6 lease lost Feb 13 16:13:56.551342 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 16:13:56.551971 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 16:13:56.552165 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 16:13:56.554871 systemd-networkd[750]: eth1: DHCPv6 lease lost Feb 13 16:13:56.558047 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 16:13:56.558758 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 16:13:56.560216 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 16:13:56.560383 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 16:13:56.563167 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 16:13:56.563222 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:13:56.564396 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 16:13:56.564455 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 16:13:56.572946 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 16:13:56.573650 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 16:13:56.573757 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:13:56.574444 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:13:56.574499 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:13:56.575134 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 16:13:56.575177 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 16:13:56.575711 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 16:13:56.575759 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:13:56.576443 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:13:56.598227 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 16:13:56.598859 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:13:56.600122 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 16:13:56.600212 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 16:13:56.602382 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 13 16:13:56.602498 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 16:13:56.603946 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 16:13:56.603985 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:13:56.605401 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 16:13:56.605466 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:13:56.607491 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 16:13:56.607557 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 16:13:56.608831 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:13:56.608891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:13:56.621558 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 16:13:56.622343 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 16:13:56.622439 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:13:56.623212 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:13:56.623277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:13:56.629890 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 16:13:56.630008 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 16:13:56.632097 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 16:13:56.638076 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 16:13:56.651055 systemd[1]: Switching root. Feb 13 16:13:56.731032 systemd-journald[183]: Journal stopped Feb 13 16:13:57.870243 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 13 16:13:57.870321 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 16:13:57.870336 kernel: SELinux: policy capability open_perms=1 Feb 13 16:13:57.870354 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 16:13:57.870364 kernel: SELinux: policy capability always_check_network=0 Feb 13 16:13:57.870378 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 16:13:57.870389 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 16:13:57.870399 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 16:13:57.870410 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 16:13:57.870421 kernel: audit: type=1403 audit(1739463236.881:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 16:13:57.870434 systemd[1]: Successfully loaded SELinux policy in 41.614ms. Feb 13 16:13:57.870460 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.860ms. Feb 13 16:13:57.870477 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:13:57.870489 systemd[1]: Detected virtualization kvm. Feb 13 16:13:57.870503 systemd[1]: Detected architecture x86-64. Feb 13 16:13:57.870515 systemd[1]: Detected first boot. Feb 13 16:13:57.870527 systemd[1]: Hostname set to <ci-4152.2.1-f-cf79e5d115>. Feb 13 16:13:57.870538 systemd[1]: Initializing machine ID from VM UUID. 
Feb 13 16:13:57.870550 zram_generator::config[1031]: No configuration found. Feb 13 16:13:57.870568 systemd[1]: Populated /etc with preset unit settings. Feb 13 16:13:57.870579 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 16:13:57.870591 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 16:13:57.870605 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 16:13:57.870618 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 16:13:57.870629 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 16:13:57.870641 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 16:13:57.870657 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 16:13:57.870668 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 16:13:57.870679 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 16:13:57.870691 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 16:13:57.870728 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 16:13:57.870740 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:13:57.870751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:13:57.870764 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 16:13:57.870775 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 16:13:57.870787 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 16:13:57.870798 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:13:57.870810 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 16:13:57.870821 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:13:57.870835 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 16:13:57.870847 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 16:13:57.870859 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 16:13:57.870870 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 16:13:57.870882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:13:57.870898 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:13:57.870912 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:13:57.870923 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:13:57.870935 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 16:13:57.870947 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 16:13:57.870958 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:13:57.870970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:13:57.870981 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:13:57.870992 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Feb 13 16:13:57.871003 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 16:13:57.871014 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 16:13:57.871029 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 16:13:57.871040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:13:57.871052 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 16:13:57.871063 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 16:13:57.871075 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 16:13:57.871087 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 16:13:57.871099 systemd[1]: Reached target machines.target - Containers. Feb 13 16:13:57.871110 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 16:13:57.871125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:13:57.871136 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:13:57.871147 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 16:13:57.871160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:13:57.871172 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:13:57.871184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:13:57.871195 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 16:13:57.871207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:13:57.871219 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 16:13:57.871232 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 16:13:57.871243 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 16:13:57.871254 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 16:13:57.871265 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 16:13:57.871277 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:13:57.871288 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:13:57.871299 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 16:13:57.871310 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 16:13:57.871323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:13:57.871335 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 16:13:57.871346 systemd[1]: Stopped verity-setup.service. Feb 13 16:13:57.871357 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:13:57.871369 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 16:13:57.871379 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Feb 13 16:13:57.871391 kernel: loop: module loaded Feb 13 16:13:57.871402 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 16:13:57.871413 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 16:13:57.871426 kernel: fuse: init (API version 7.39) Feb 13 16:13:57.871437 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 16:13:57.871449 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 16:13:57.871459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:13:57.871471 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 16:13:57.871485 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 16:13:57.871497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:13:57.871510 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:13:57.871522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:13:57.871534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:13:57.871547 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 16:13:57.871558 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 16:13:57.871569 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:13:57.871580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:13:57.871593 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:13:57.871604 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 16:13:57.871615 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 16:13:57.871627 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 16:13:57.871665 systemd-journald[1107]: Collecting audit messages is disabled. Feb 13 16:13:57.873929 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 16:13:57.873993 systemd-journald[1107]: Journal started Feb 13 16:13:57.874036 systemd-journald[1107]: Runtime Journal (/run/log/journal/e7efc229edc84256842dde87a27c54db) is 4.9M, max 39.3M, 34.4M free. Feb 13 16:13:57.507926 systemd[1]: Queued start job for default target multi-user.target. Feb 13 16:13:57.531050 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 16:13:57.531655 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 16:13:57.890252 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 16:13:57.890335 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 16:13:57.895802 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:13:57.900435 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 16:13:57.907780 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 16:13:57.915724 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 16:13:57.919506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:13:57.925431 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Feb 13 16:13:57.925515 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:13:57.943589 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 16:13:57.943675 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:13:57.943713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:13:57.986736 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 16:13:57.986819 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:13:57.971791 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 16:13:57.972530 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 16:13:57.974118 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 16:13:57.976220 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 16:13:58.003239 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:13:58.017762 kernel: ACPI: bus type drm_connector registered Feb 13 16:13:58.019443 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:13:58.019606 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:13:58.022858 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 16:13:58.026742 kernel: loop0: detected capacity change from 0 to 140992 Feb 13 16:13:58.029599 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 16:13:58.042011 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 16:13:58.050823 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 16:13:58.056470 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 16:13:58.056719 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 16:13:58.060922 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 16:13:58.062138 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:13:58.103692 kernel: loop1: detected capacity change from 0 to 211296 Feb 13 16:13:58.103848 systemd-journald[1107]: Time spent on flushing to /var/log/journal/e7efc229edc84256842dde87a27c54db is 46.810ms for 999 entries. Feb 13 16:13:58.103848 systemd-journald[1107]: System Journal (/var/log/journal/e7efc229edc84256842dde87a27c54db) is 8.0M, max 195.6M, 187.6M free. Feb 13 16:13:58.168813 systemd-journald[1107]: Received client request to flush runtime journal. Feb 13 16:13:58.116949 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 16:13:58.118894 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 16:13:58.122995 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 16:13:58.143466 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 16:13:58.156248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 16:13:58.176755 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 16:13:58.177753 kernel: loop2: detected capacity change from 0 to 138184 Feb 13 16:13:58.210623 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Feb 13 16:13:58.210660 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Feb 13 16:13:58.227173 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:13:58.277788 kernel: loop3: detected capacity change from 0 to 8 Feb 13 16:13:58.297747 kernel: loop4: detected capacity change from 0 to 140992 Feb 13 16:13:58.321724 kernel: loop5: detected capacity change from 0 to 211296 Feb 13 16:13:58.342727 kernel: loop6: detected capacity change from 0 to 138184 Feb 13 16:13:58.359591 kernel: loop7: detected capacity change from 0 to 8 Feb 13 16:13:58.361822 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Feb 13 16:13:58.365762 (sd-merge)[1176]: Merged extensions into '/usr'. Feb 13 16:13:58.379126 systemd[1]: Reloading requested from client PID 1133 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 16:13:58.379150 systemd[1]: Reloading... Feb 13 16:13:58.529808 zram_generator::config[1202]: No configuration found. Feb 13 16:13:58.637487 ldconfig[1129]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 16:13:58.722913 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:13:58.772375 systemd[1]: Reloading finished in 392 ms. Feb 13 16:13:58.799580 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 16:13:58.801015 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 16:13:58.813044 systemd[1]: Starting ensure-sysext.service... Feb 13 16:13:58.817927 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:13:58.826557 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Feb 13 16:13:58.826680 systemd[1]: Reloading... Feb 13 16:13:58.891661 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 16:13:58.893073 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 16:13:58.896505 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 16:13:58.896825 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Feb 13 16:13:58.896885 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Feb 13 16:13:58.910437 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:13:58.910456 systemd-tmpfiles[1246]: Skipping /boot Feb 13 16:13:58.919732 zram_generator::config[1272]: No configuration found. Feb 13 16:13:58.937452 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 16:13:58.937469 systemd-tmpfiles[1246]: Skipping /boot Feb 13 16:13:59.076171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:13:59.122970 systemd[1]: Reloading finished in 295 ms. Feb 13 16:13:59.140058 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 16:13:59.148326 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:13:59.157899 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 16:13:59.163847 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 16:13:59.170838 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 16:13:59.177932 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:13:59.185931 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:13:59.194900 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 16:13:59.210004 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 16:13:59.211955 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:13:59.212110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:13:59.223567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:13:59.235900 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:13:59.246646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:13:59.248060 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:13:59.248199 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:13:59.249390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:13:59.251798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:13:59.258207 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:13:59.258382 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:13:59.267906 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 16:13:59.272336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:13:59.273122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:13:59.281008 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:13:59.290003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:13:59.290634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 16:13:59.290767 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:13:59.290837 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:13:59.291452 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 16:13:59.298869 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Feb 13 16:13:59.303081 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 16:13:59.304245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:13:59.304781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:13:59.322780 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 16:13:59.325256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:13:59.325579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:13:59.327266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:13:59.329177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:13:59.335435 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:13:59.339999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:13:59.340810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:13:59.354276 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 16:13:59.354946 augenrules[1367]: No rules Feb 13 16:13:59.355137 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:13:59.355257 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:13:59.356040 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:13:59.359782 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:13:59.360032 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 16:13:59.361062 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:13:59.361782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:13:59.374865 systemd[1]: Finished ensure-sysext.service. Feb 13 16:13:59.395013 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:13:59.395592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:13:59.401539 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 16:13:59.402441 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:13:59.403862 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:13:59.420260 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 13 16:13:59.421836 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:13:59.424961 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:13:59.431351 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 16:13:59.461541 systemd-resolved[1323]: Positive Trust Anchors: Feb 13 16:13:59.461558 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:13:59.461594 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:13:59.469868 systemd-resolved[1323]: Using system hostname 'ci-4152.2.1-f-cf79e5d115'. Feb 13 16:13:59.472593 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:13:59.473852 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:13:59.498826 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383) Feb 13 16:13:59.547893 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Feb 13 16:13:59.549862 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:13:59.550085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:13:59.562633 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:13:59.567261 systemd-networkd[1387]: lo: Link UP Feb 13 16:13:59.567722 systemd-networkd[1387]: lo: Gained carrier Feb 13 16:13:59.569026 systemd-networkd[1387]: Enumeration completed Feb 13 16:13:59.573988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:13:59.584019 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:13:59.586271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:13:59.586358 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:13:59.586383 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:13:59.588944 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:13:59.595688 systemd[1]: Reached target network.target - Network. Feb 13 16:13:59.607992 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 16:13:59.609630 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:13:59.612052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 16:13:59.627741 kernel: ISO 9660 Extensions: RRIP_1991A Feb 13 16:13:59.633217 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Feb 13 16:13:59.653035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:13:59.653298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:13:59.655686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:13:59.656812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:13:59.664509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:13:59.664593 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:13:59.682569 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 16:13:59.683688 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 16:13:59.683805 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 16:13:59.714746 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 16:13:59.721355 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 16:13:59.730461 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 16:13:59.742847 kernel: ACPI: button: Power Button [PWRF] Feb 13 16:13:59.750727 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 16:13:59.808159 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 16:13:59.765854 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 16:13:59.808676 systemd-networkd[1387]: eth1: Configuring with /run/systemd/network/10-ae:e2:b7:13:c6:64.network. Feb 13 16:13:59.811296 systemd-networkd[1387]: eth1: Link UP Feb 13 16:13:59.811663 systemd-networkd[1387]: eth1: Gained carrier Feb 13 16:13:59.817011 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Feb 13 16:13:59.844674 systemd-networkd[1387]: eth0: Configuring with /run/systemd/network/10-c2:3a:94:16:59:3e.network. Feb 13 16:13:59.848453 systemd-networkd[1387]: eth0: Link UP Feb 13 16:13:59.848470 systemd-networkd[1387]: eth0: Gained carrier Feb 13 16:13:59.850212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 16:13:59.864734 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 16:13:59.924382 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 16:13:59.924474 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 16:13:59.931731 kernel: Console: switching to colour dummy device 80x25 Feb 13 16:13:59.931835 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 16:13:59.931859 kernel: [drm] features: -context_init Feb 13 16:13:59.933810 kernel: [drm] number of scanouts: 1 Feb 13 16:13:59.933872 kernel: [drm] number of cap sets: 0 Feb 13 16:13:59.937763 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 16:13:59.953560 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 16:13:59.953675 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 16:13:59.993855 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 16:14:00.002165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:14:00.002400 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:14:00.012080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:14:00.054917 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:14:00.072798 kernel: EDAC MC: Ver: 3.0.0 Feb 13 16:14:00.099490 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 16:14:00.107008 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 16:14:00.137013 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:14:00.173091 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 16:14:00.175168 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:14:00.175327 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:14:00.175544 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 16:14:00.175661 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 16:14:00.177242 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 16:14:00.177558 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 16:14:00.177638 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 16:14:00.177717 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 16:14:00.177744 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:14:00.177790 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:14:00.179630 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 16:14:00.182283 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 16:14:00.189211 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 16:14:00.193426 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 16:14:00.195666 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 16:14:00.198719 systemd[1]: Reached target sockets.target - Socket Units. 
Feb 13 16:14:00.200540 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:14:00.202793 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:14:00.202842 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:14:00.215904 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 16:14:00.219508 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:14:00.229025 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 16:14:00.233972 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 16:14:00.238441 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 16:14:00.241455 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 16:14:00.243426 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 16:14:00.252056 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 16:14:00.263916 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 16:14:00.266998 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 16:14:00.280210 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 16:14:00.292715 coreos-metadata[1442]: Feb 13 16:14:00.292 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 16:14:00.295997 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 16:14:00.300157 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 16:14:00.302061 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 16:14:00.310327 coreos-metadata[1442]: Feb 13 16:14:00.308 INFO Fetch successful Feb 13 16:14:00.315019 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 16:14:00.321942 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 16:14:00.324326 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 16:14:00.338241 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 16:14:00.339201 jq[1444]: false Feb 13 16:14:00.339429 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 16:14:00.348304 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 16:14:00.348528 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 16:14:00.378289 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 16:14:00.378592 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 16:14:00.397149 dbus-daemon[1443]: [system] SELinux support is enabled Feb 13 16:14:00.398845 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 16:14:00.412838 update_engine[1455]: I20250213 16:14:00.412225 1455 main.cc:92] Flatcar Update Engine starting Feb 13 16:14:00.417947 update_engine[1455]: I20250213 16:14:00.416163 1455 update_check_scheduler.cc:74] Next update check in 5m9s Feb 13 16:14:00.418886 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 16:14:00.418933 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 16:14:00.419526 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 16:14:00.419605 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 16:14:00.419624 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 16:14:00.420259 tar[1462]: linux-amd64/helm Feb 13 16:14:00.421446 systemd[1]: Started update-engine.service - Update Engine. Feb 13 16:14:00.427207 jq[1457]: true Feb 13 16:14:00.431059 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 16:14:00.434751 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 16:14:00.455835 extend-filesystems[1445]: Found loop4 Feb 13 16:14:00.461132 extend-filesystems[1445]: Found loop5 Feb 13 16:14:00.461132 extend-filesystems[1445]: Found loop6 Feb 13 16:14:00.461132 extend-filesystems[1445]: Found loop7 Feb 13 16:14:00.461132 extend-filesystems[1445]: Found vda Feb 13 16:14:00.461132 extend-filesystems[1445]: Found vda1 Feb 13 16:14:00.461132 extend-filesystems[1445]: Found vda2 Feb 13 16:14:00.461132 extend-filesystems[1445]: Found vda3 Feb 13 16:14:00.461132 extend-filesystems[1445]: Found usr Feb 13 16:14:00.461132 extend-filesystems[1445]: Found vda4 Feb 13 16:14:00.461132 extend-filesystems[1445]: Found vda6 Feb 13 16:14:00.461132 extend-filesystems[1445]: Found vda7 Feb 13 16:14:00.461132 extend-filesystems[1445]: Found vda9 Feb 13 16:14:00.461132 extend-filesystems[1445]: Checking size of /dev/vda9 Feb 13 16:14:00.483053 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 16:14:00.493344 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 16:14:00.535933 extend-filesystems[1445]: Resized partition /dev/vda9 Feb 13 16:14:00.558464 jq[1479]: true Feb 13 16:14:00.568765 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024) Feb 13 16:14:00.582751 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 16:14:00.612679 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1384) Feb 13 16:14:00.714385 systemd-logind[1452]: New seat seat0. 
Feb 13 16:14:00.721349 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 16:14:00.729240 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 16:14:00.729300 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 16:14:00.730944 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 16:14:00.776340 bash[1505]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:14:00.778322 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 16:14:00.788218 systemd[1]: Starting sshkeys.service... Feb 13 16:14:00.820664 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 16:14:00.855670 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 16:14:00.878059 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 16:14:00.887072 extend-filesystems[1490]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 16:14:00.887072 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 16:14:00.887072 extend-filesystems[1490]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 16:14:00.902957 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Feb 13 16:14:00.902957 extend-filesystems[1445]: Found vdb Feb 13 16:14:00.893241 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 16:14:00.893838 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 16:14:00.968341 coreos-metadata[1514]: Feb 13 16:14:00.967 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 16:14:00.984734 coreos-metadata[1514]: Feb 13 16:14:00.983 INFO Fetch successful Feb 13 16:14:01.009966 unknown[1514]: wrote ssh authorized keys file for user: core Feb 13 16:14:01.078528 update-ssh-keys[1523]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:14:01.082196 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 16:14:01.087836 systemd[1]: Finished sshkeys.service. Feb 13 16:14:01.165985 containerd[1476]: time="2025-02-13T16:14:01.165842672Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 16:14:01.268396 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 16:14:01.270164 containerd[1476]: time="2025-02-13T16:14:01.270106029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:14:01.276146 containerd[1476]: time="2025-02-13T16:14:01.276076915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:14:01.276146 containerd[1476]: time="2025-02-13T16:14:01.276129579Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 16:14:01.276146 containerd[1476]: time="2025-02-13T16:14:01.276154108Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 16:14:01.276406 containerd[1476]: time="2025-02-13T16:14:01.276321279Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 16:14:01.276406 containerd[1476]: time="2025-02-13T16:14:01.276338443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 16:14:01.276486 containerd[1476]: time="2025-02-13T16:14:01.276406015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:14:01.276486 containerd[1476]: time="2025-02-13T16:14:01.276421962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:14:01.276638 containerd[1476]: time="2025-02-13T16:14:01.276605986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:14:01.276638 containerd[1476]: time="2025-02-13T16:14:01.276626477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 16:14:01.276745 containerd[1476]: time="2025-02-13T16:14:01.276640047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:14:01.276745 containerd[1476]: time="2025-02-13T16:14:01.276649331Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 16:14:01.276745 containerd[1476]: time="2025-02-13T16:14:01.276733493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:14:01.277165 containerd[1476]: time="2025-02-13T16:14:01.276929307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:14:01.277165 containerd[1476]: time="2025-02-13T16:14:01.277060447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:14:01.277165 containerd[1476]: time="2025-02-13T16:14:01.277082220Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 16:14:01.277306 containerd[1476]: time="2025-02-13T16:14:01.277197456Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 16:14:01.277306 containerd[1476]: time="2025-02-13T16:14:01.277282295Z" level=info msg="metadata content store policy set" policy=shared Feb 13 16:14:01.288429 containerd[1476]: time="2025-02-13T16:14:01.288116306Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 16:14:01.288429 containerd[1476]: time="2025-02-13T16:14:01.288229097Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 16:14:01.288429 containerd[1476]: time="2025-02-13T16:14:01.288255817Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 16:14:01.288429 containerd[1476]: time="2025-02-13T16:14:01.288324342Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 16:14:01.288429 containerd[1476]: time="2025-02-13T16:14:01.288364508Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 16:14:01.288641 containerd[1476]: time="2025-02-13T16:14:01.288543397Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 16:14:01.290499 containerd[1476]: time="2025-02-13T16:14:01.290100513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 16:14:01.290499 containerd[1476]: time="2025-02-13T16:14:01.290354164Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 16:14:01.290499 containerd[1476]: time="2025-02-13T16:14:01.290383082Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 16:14:01.290499 containerd[1476]: time="2025-02-13T16:14:01.290405955Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 16:14:01.290499 containerd[1476]: time="2025-02-13T16:14:01.290427674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 16:14:01.290499 containerd[1476]: time="2025-02-13T16:14:01.290450408Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 16:14:01.290499 containerd[1476]: time="2025-02-13T16:14:01.290465548Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 16:14:01.290499 containerd[1476]: time="2025-02-13T16:14:01.290485611Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 16:14:01.290499 containerd[1476]: time="2025-02-13T16:14:01.290511237Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290535138Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290556652Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290571477Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290611488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290633949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290654136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290673191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290692225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290759796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290823876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290852447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290873569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.290913 containerd[1476]: time="2025-02-13T16:14:01.290898557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.291229 containerd[1476]: time="2025-02-13T16:14:01.290931373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.291229 containerd[1476]: time="2025-02-13T16:14:01.290952701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.291229 containerd[1476]: time="2025-02-13T16:14:01.290972305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.291229 containerd[1476]: time="2025-02-13T16:14:01.290995627Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 16:14:01.291229 containerd[1476]: time="2025-02-13T16:14:01.291029119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.291229 containerd[1476]: time="2025-02-13T16:14:01.291089450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.291229 containerd[1476]: time="2025-02-13T16:14:01.291106235Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 16:14:01.293628 containerd[1476]: time="2025-02-13T16:14:01.292809860Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 16:14:01.293628 containerd[1476]: time="2025-02-13T16:14:01.292857711Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 16:14:01.293628 containerd[1476]: time="2025-02-13T16:14:01.292870778Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 16:14:01.293628 containerd[1476]: time="2025-02-13T16:14:01.292883212Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 16:14:01.293628 containerd[1476]: time="2025-02-13T16:14:01.292892691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.293628 containerd[1476]: time="2025-02-13T16:14:01.292907701Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 16:14:01.293628 containerd[1476]: time="2025-02-13T16:14:01.292918786Z" level=info msg="NRI interface is disabled by configuration." Feb 13 16:14:01.293628 containerd[1476]: time="2025-02-13T16:14:01.292930123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 16:14:01.294061 containerd[1476]: time="2025-02-13T16:14:01.293280225Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 16:14:01.294061 containerd[1476]: time="2025-02-13T16:14:01.293340444Z" level=info msg="Connect containerd service" Feb 13 16:14:01.294061 containerd[1476]: time="2025-02-13T16:14:01.293397638Z" level=info msg="using legacy CRI server" Feb 13 16:14:01.294061 containerd[1476]: time="2025-02-13T16:14:01.293408076Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 16:14:01.294061 containerd[1476]: time="2025-02-13T16:14:01.293567664Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 16:14:01.294374 
containerd[1476]: time="2025-02-13T16:14:01.294343522Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:14:01.296733 containerd[1476]: time="2025-02-13T16:14:01.295774273Z" level=info msg="Start subscribing containerd event" Feb 13 16:14:01.296733 containerd[1476]: time="2025-02-13T16:14:01.295859593Z" level=info msg="Start recovering state" Feb 13 16:14:01.296733 containerd[1476]: time="2025-02-13T16:14:01.295959014Z" level=info msg="Start event monitor" Feb 13 16:14:01.296733 containerd[1476]: time="2025-02-13T16:14:01.295972328Z" level=info msg="Start snapshots syncer" Feb 13 16:14:01.296733 containerd[1476]: time="2025-02-13T16:14:01.295983446Z" level=info msg="Start cni network conf syncer for default" Feb 13 16:14:01.296733 containerd[1476]: time="2025-02-13T16:14:01.295990606Z" level=info msg="Start streaming server" Feb 13 16:14:01.296733 containerd[1476]: time="2025-02-13T16:14:01.296286845Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 16:14:01.296733 containerd[1476]: time="2025-02-13T16:14:01.296337264Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 16:14:01.296564 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 16:14:01.301955 containerd[1476]: time="2025-02-13T16:14:01.300989887Z" level=info msg="containerd successfully booted in 0.138596s" Feb 13 16:14:01.335556 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 16:14:01.343033 systemd-networkd[1387]: eth0: Gained IPv6LL Feb 13 16:14:01.353087 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 16:14:01.357824 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 16:14:01.362528 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 16:14:01.380126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:14:01.392167 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 16:14:01.405623 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 16:14:01.406022 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 16:14:01.411684 systemd-networkd[1387]: eth1: Gained IPv6LL Feb 13 16:14:01.431224 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 16:14:01.457514 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 16:14:01.477403 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 16:14:01.490137 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 16:14:01.502727 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 16:14:01.504632 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 16:14:01.777409 tar[1462]: linux-amd64/LICENSE Feb 13 16:14:01.777925 tar[1462]: linux-amd64/README.md Feb 13 16:14:01.796226 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 16:14:02.287557 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 16:14:02.296882 systemd[1]: Started sshd@0-24.199.97.58:22-139.178.89.65:46990.service - OpenSSH per-connection server daemon (139.178.89.65:46990). 
Feb 13 16:14:02.396558 sshd[1561]: Accepted publickey for core from 139.178.89.65 port 46990 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:14:02.399208 sshd-session[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:14:02.413297 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 16:14:02.421146 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 16:14:02.432352 systemd-logind[1452]: New session 1 of user core. Feb 13 16:14:02.460727 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 16:14:02.475252 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 16:14:02.494009 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 16:14:02.591121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:14:02.592475 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 16:14:02.600258 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:14:02.655465 systemd[1565]: Queued start job for default target default.target. Feb 13 16:14:02.664715 systemd[1565]: Created slice app.slice - User Application Slice. Feb 13 16:14:02.664956 systemd[1565]: Reached target paths.target - Paths. Feb 13 16:14:02.665108 systemd[1565]: Reached target timers.target - Timers. Feb 13 16:14:02.669456 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 16:14:02.695877 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 16:14:02.696033 systemd[1565]: Reached target sockets.target - Sockets. Feb 13 16:14:02.696051 systemd[1565]: Reached target basic.target - Basic System. Feb 13 16:14:02.696110 systemd[1565]: Reached target default.target - Main User Target. Feb 13 16:14:02.696144 systemd[1565]: Startup finished in 188ms. Feb 13 16:14:02.696589 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 16:14:02.702055 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 16:14:02.705008 systemd[1]: Startup finished in 1.144s (kernel) + 6.124s (initrd) + 5.864s (userspace) = 13.133s. Feb 13 16:14:02.785218 systemd[1]: Started sshd@1-24.199.97.58:22-139.178.89.65:47006.service - OpenSSH per-connection server daemon (139.178.89.65:47006). Feb 13 16:14:02.860114 sshd[1586]: Accepted publickey for core from 139.178.89.65 port 47006 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:14:02.862545 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:14:02.872564 systemd-logind[1452]: New session 2 of user core. Feb 13 16:14:02.876071 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 16:14:02.945787 sshd[1593]: Connection closed by 139.178.89.65 port 47006 Feb 13 16:14:02.946937 sshd-session[1586]: pam_unix(sshd:session): session closed for user core Feb 13 16:14:02.957089 systemd[1]: sshd@1-24.199.97.58:22-139.178.89.65:47006.service: Deactivated successfully. Feb 13 16:14:02.960242 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 16:14:02.963792 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 16:14:02.971113 systemd[1]: Started sshd@2-24.199.97.58:22-139.178.89.65:47012.service - OpenSSH per-connection server daemon (139.178.89.65:47012). Feb 13 16:14:02.973616 systemd-logind[1452]: Removed session 2. Feb 13 16:14:03.025800 sshd[1598]: Accepted publickey for core from 139.178.89.65 port 47012 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:14:03.026854 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:14:03.035099 systemd-logind[1452]: New session 3 of user core. Feb 13 16:14:03.037989 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:14:03.099771 sshd[1600]: Connection closed by 139.178.89.65 port 47012 Feb 13 16:14:03.100348 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Feb 13 16:14:03.110781 systemd[1]: sshd@2-24.199.97.58:22-139.178.89.65:47012.service: Deactivated successfully. Feb 13 16:14:03.115192 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 16:14:03.117176 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Feb 13 16:14:03.126051 systemd[1]: Started sshd@3-24.199.97.58:22-139.178.89.65:47028.service - OpenSSH per-connection server daemon (139.178.89.65:47028). Feb 13 16:14:03.127966 systemd-logind[1452]: Removed session 3. Feb 13 16:14:03.184692 sshd[1605]: Accepted publickey for core from 139.178.89.65 port 47028 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:14:03.187306 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:14:03.194494 systemd-logind[1452]: New session 4 of user core. Feb 13 16:14:03.199923 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 16:14:03.271024 sshd[1607]: Connection closed by 139.178.89.65 port 47028 Feb 13 16:14:03.274233 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Feb 13 16:14:03.283788 systemd[1]: sshd@3-24.199.97.58:22-139.178.89.65:47028.service: Deactivated successfully. Feb 13 16:14:03.287558 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:14:03.289096 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:14:03.300192 systemd[1]: Started sshd@4-24.199.97.58:22-139.178.89.65:47042.service - OpenSSH per-connection server daemon (139.178.89.65:47042). Feb 13 16:14:03.305227 systemd-logind[1452]: Removed session 4. Feb 13 16:14:03.353095 sshd[1613]: Accepted publickey for core from 139.178.89.65 port 47042 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:14:03.354465 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:14:03.362106 systemd-logind[1452]: New session 5 of user core. Feb 13 16:14:03.366999 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:14:03.447140 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 16:14:03.448188 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:14:03.460874 sudo[1616]: pam_unix(sudo:session): session closed for user root Feb 13 16:14:03.465747 sshd[1615]: Connection closed by 139.178.89.65 port 47042 Feb 13 16:14:03.467962 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Feb 13 16:14:03.477508 systemd[1]: sshd@4-24.199.97.58:22-139.178.89.65:47042.service: Deactivated successfully. 
Feb 13 16:14:03.480997 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:14:03.484037 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Feb 13 16:14:03.495526 systemd[1]: Started sshd@5-24.199.97.58:22-139.178.89.65:47058.service - OpenSSH per-connection server daemon (139.178.89.65:47058). Feb 13 16:14:03.499822 systemd-logind[1452]: Removed session 5. Feb 13 16:14:03.539568 kubelet[1576]: E0213 16:14:03.539438 1576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:14:03.544258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:14:03.544546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:14:03.546065 systemd[1]: kubelet.service: Consumed 1.476s CPU time. Feb 13 16:14:03.562379 sshd[1622]: Accepted publickey for core from 139.178.89.65 port 47058 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:14:03.565146 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:14:03.572096 systemd-logind[1452]: New session 6 of user core. Feb 13 16:14:03.579057 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 16:14:03.638585 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 16:14:03.638929 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:14:03.643850 sudo[1627]: pam_unix(sudo:session): session closed for user root Feb 13 16:14:03.651100 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 16:14:03.651395 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:14:03.675353 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 16:14:03.713983 augenrules[1649]: No rules Feb 13 16:14:03.715395 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:14:03.715602 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 16:14:03.716892 sudo[1626]: pam_unix(sudo:session): session closed for user root Feb 13 16:14:03.722108 sshd[1625]: Connection closed by 139.178.89.65 port 47058 Feb 13 16:14:03.722606 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Feb 13 16:14:03.729470 systemd[1]: sshd@5-24.199.97.58:22-139.178.89.65:47058.service: Deactivated successfully. Feb 13 16:14:03.732327 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:14:03.733355 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:14:03.740143 systemd[1]: Started sshd@6-24.199.97.58:22-139.178.89.65:47074.service - OpenSSH per-connection server daemon (139.178.89.65:47074). Feb 13 16:14:03.742054 systemd-logind[1452]: Removed session 6. Feb 13 16:14:03.791081 sshd[1657]: Accepted publickey for core from 139.178.89.65 port 47074 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:14:03.792680 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:14:03.797607 systemd-logind[1452]: New session 7 of user core. 
Feb 13 16:14:03.806971 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 16:14:03.865770 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:14:03.866099 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:14:04.354050 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 16:14:04.358405 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 16:14:04.794988 dockerd[1677]: time="2025-02-13T16:14:04.794481915Z" level=info msg="Starting up" Feb 13 16:14:04.911064 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1869414409-merged.mount: Deactivated successfully. Feb 13 16:14:05.097115 dockerd[1677]: time="2025-02-13T16:14:05.096935578Z" level=info msg="Loading containers: start." Feb 13 16:14:05.180937 systemd[1]: Started sshd@7-24.199.97.58:22-218.92.0.188:27826.service - OpenSSH per-connection server daemon (218.92.0.188:27826). Feb 13 16:14:05.304742 kernel: Initializing XFRM netlink socket Feb 13 16:14:05.422059 systemd-networkd[1387]: docker0: Link UP Feb 13 16:14:05.458755 dockerd[1677]: time="2025-02-13T16:14:05.458694414Z" level=info msg="Loading containers: done." Feb 13 16:14:05.478373 dockerd[1677]: time="2025-02-13T16:14:05.477897429Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 16:14:05.478373 dockerd[1677]: time="2025-02-13T16:14:05.478017982Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 16:14:05.478373 dockerd[1677]: time="2025-02-13T16:14:05.478136033Z" level=info msg="Daemon has completed initialization" Feb 13 16:14:05.521827 dockerd[1677]: time="2025-02-13T16:14:05.521690822Z" level=info msg="API listen on /run/docker.sock" Feb 13 16:14:05.522173 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 16:14:06.613899 systemd-resolved[1323]: Clock change detected. Flushing caches. Feb 13 16:14:06.614238 systemd-timesyncd[1388]: Contacted time server 149.28.61.105:123 (1.flatcar.pool.ntp.org). Feb 13 16:14:06.614316 systemd-timesyncd[1388]: Initial clock synchronization to Thu 2025-02-13 16:14:06.613689 UTC. Feb 13 16:14:06.941419 sshd-session[1875]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Feb 13 16:14:07.073556 containerd[1476]: time="2025-02-13T16:14:07.073512544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 16:14:07.706188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount179289546.mount: Deactivated successfully. 
Feb 13 16:14:08.957332 sshd[1722]: PAM: Permission denied for root from 218.92.0.188 Feb 13 16:14:09.258844 sshd-session[1936]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Feb 13 16:14:09.325978 containerd[1476]: time="2025-02-13T16:14:09.325903418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:09.327139 containerd[1476]: time="2025-02-13T16:14:09.326678379Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142283" Feb 13 16:14:09.328168 containerd[1476]: time="2025-02-13T16:14:09.328022706Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:09.332545 containerd[1476]: time="2025-02-13T16:14:09.332488518Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 2.258931761s" Feb 13 16:14:09.332545 containerd[1476]: time="2025-02-13T16:14:09.332537021Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"" Feb 13 16:14:09.333102 containerd[1476]: time="2025-02-13T16:14:09.331394608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:09.370720 containerd[1476]: time="2025-02-13T16:14:09.370393950Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 16:14:11.019802 sshd[1722]: PAM: Permission denied for root from 218.92.0.188 Feb 13 16:14:11.330726 sshd-session[1945]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root Feb 13 16:14:11.346722 containerd[1476]: time="2025-02-13T16:14:11.345460200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:11.346722 containerd[1476]: time="2025-02-13T16:14:11.346663035Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213164" Feb 13 16:14:11.347280 containerd[1476]: time="2025-02-13T16:14:11.347253715Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:11.349729 containerd[1476]: time="2025-02-13T16:14:11.349696055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:11.351014 containerd[1476]: time="2025-02-13T16:14:11.350976444Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 1.980520752s" Feb 13 16:14:11.351094 containerd[1476]: time="2025-02-13T16:14:11.351017222Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\"" Feb 13 16:14:11.380996 containerd[1476]: time="2025-02-13T16:14:11.380939309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 16:14:12.576878 containerd[1476]: time="2025-02-13T16:14:12.575230973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:12.578689 containerd[1476]: time="2025-02-13T16:14:12.578613366Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334056" Feb 13 16:14:12.579828 containerd[1476]: time="2025-02-13T16:14:12.579770091Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:12.584211 containerd[1476]: time="2025-02-13T16:14:12.584133550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:12.585596 containerd[1476]: time="2025-02-13T16:14:12.585427312Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 1.204269162s" Feb 13 16:14:12.585596 containerd[1476]: time="2025-02-13T16:14:12.585469141Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\"" Feb 13 16:14:12.625597 containerd[1476]: time="2025-02-13T16:14:12.625270453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 16:14:12.769244 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Feb 13 16:14:13.367434 sshd[1722]: PAM: Permission denied for root from 218.92.0.188 Feb 13 16:14:13.523995 sshd[1722]: Received disconnect from 218.92.0.188 port 27826:11: [preauth] Feb 13 16:14:13.523995 sshd[1722]: Disconnected from authenticating user root 218.92.0.188 port 27826 [preauth] Feb 13 16:14:13.526870 systemd[1]: sshd@7-24.199.97.58:22-218.92.0.188:27826.service: Deactivated successfully. Feb 13 16:14:13.707209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283229595.mount: Deactivated successfully. 
Feb 13 16:14:14.216645 containerd[1476]: time="2025-02-13T16:14:14.214560860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:14.218130 containerd[1476]: time="2025-02-13T16:14:14.218062181Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592" Feb 13 16:14:14.218617 containerd[1476]: time="2025-02-13T16:14:14.218564248Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:14.221650 containerd[1476]: time="2025-02-13T16:14:14.221553002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:14.227305 containerd[1476]: time="2025-02-13T16:14:14.222696690Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.597377887s" Feb 13 16:14:14.227305 containerd[1476]: time="2025-02-13T16:14:14.225979700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 16:14:14.269434 containerd[1476]: time="2025-02-13T16:14:14.269349000Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 16:14:14.385060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 16:14:14.392322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:14:14.515312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:14:14.526476 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:14:14.595896 kubelet[1981]: E0213 16:14:14.595840 1981 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:14:14.599800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:14:14.599977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:14:14.805521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780303725.mount: Deactivated successfully. 
Feb 13 16:14:15.752474 containerd[1476]: time="2025-02-13T16:14:15.752365155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:15.753795 containerd[1476]: time="2025-02-13T16:14:15.753621643Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 16:14:15.756028 containerd[1476]: time="2025-02-13T16:14:15.754489517Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:15.757761 containerd[1476]: time="2025-02-13T16:14:15.757714443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:15.758898 containerd[1476]: time="2025-02-13T16:14:15.758867809Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.489074895s" Feb 13 16:14:15.759026 containerd[1476]: time="2025-02-13T16:14:15.759011845Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 16:14:15.787370 containerd[1476]: time="2025-02-13T16:14:15.787316534Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 16:14:15.821250 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Feb 13 16:14:16.247353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26337465.mount: Deactivated successfully. 
Feb 13 16:14:16.291984 containerd[1476]: time="2025-02-13T16:14:16.291895569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:16.292806 containerd[1476]: time="2025-02-13T16:14:16.292765675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 16:14:16.294939 containerd[1476]: time="2025-02-13T16:14:16.293602487Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:16.295706 containerd[1476]: time="2025-02-13T16:14:16.295675873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:16.296458 containerd[1476]: time="2025-02-13T16:14:16.296427736Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 509.07171ms" Feb 13 16:14:16.296532 containerd[1476]: time="2025-02-13T16:14:16.296463147Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 16:14:16.322278 containerd[1476]: time="2025-02-13T16:14:16.322234689Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 16:14:16.946699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189880433.mount: Deactivated successfully. Feb 13 16:14:19.021102 containerd[1476]: time="2025-02-13T16:14:19.021006581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:19.022260 containerd[1476]: time="2025-02-13T16:14:19.022208762Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Feb 13 16:14:19.022994 containerd[1476]: time="2025-02-13T16:14:19.022925319Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:19.026048 containerd[1476]: time="2025-02-13T16:14:19.025995022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:19.027619 containerd[1476]: time="2025-02-13T16:14:19.027441002Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.704979689s" Feb 13 16:14:19.027619 containerd[1476]: time="2025-02-13T16:14:19.027486734Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Feb 13 16:14:22.502523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 16:14:22.516188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:14:22.531755 systemd[1]: Reloading requested from client PID 2155 ('systemctl') (unit session-7.scope)... Feb 13 16:14:22.531774 systemd[1]: Reloading... Feb 13 16:14:22.656525 zram_generator::config[2197]: No configuration found. Feb 13 16:14:22.790449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:14:22.864966 systemd[1]: Reloading finished in 332 ms. Feb 13 16:14:22.910616 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:14:22.910712 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 16:14:22.910998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:14:22.914354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:14:23.049598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:14:23.062427 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:14:23.129544 kubelet[2248]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:14:23.130456 kubelet[2248]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:14:23.130456 kubelet[2248]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:14:23.133011 kubelet[2248]: I0213 16:14:23.132730 2248 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:14:23.726818 kubelet[2248]: I0213 16:14:23.726766 2248 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 16:14:23.726818 kubelet[2248]: I0213 16:14:23.726800 2248 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:14:23.731700 kubelet[2248]: I0213 16:14:23.730050 2248 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 16:14:23.751200 kubelet[2248]: E0213 16:14:23.751165 2248 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://24.199.97.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:23.752155 kubelet[2248]: I0213 16:14:23.752134 2248 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:14:23.765997 kubelet[2248]: I0213 16:14:23.765942 2248 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:14:23.766431 kubelet[2248]: I0213 16:14:23.766410 2248 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:14:23.767594 kubelet[2248]: I0213 16:14:23.767566 2248 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:14:23.768217 kubelet[2248]: I0213 16:14:23.768200 2248 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:14:23.768302 kubelet[2248]: I0213 16:14:23.768294 2248 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:14:23.768466 kubelet[2248]: I0213 16:14:23.768455 2248 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:14:23.768640 kubelet[2248]: I0213 16:14:23.768623 2248 kubelet.go:396] "Attempting to sync node with API server" Feb 13 16:14:23.768730 kubelet[2248]: I0213 16:14:23.768722 2248 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:14:23.768796 kubelet[2248]: I0213 16:14:23.768790 2248 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:14:23.768855 kubelet[2248]: I0213 16:14:23.768848 2248 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:14:23.769666 kubelet[2248]: W0213 16:14:23.769252 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://24.199.97.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-f-cf79e5d115&limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:23.769666 kubelet[2248]: E0213 16:14:23.769341 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://24.199.97.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-f-cf79e5d115&limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:23.770348 kubelet[2248]: W0213 16:14:23.770315 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://24.199.97.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
24.199.97.58:6443: connect: connection refused Feb 13 16:14:23.770457 kubelet[2248]: E0213 16:14:23.770445 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://24.199.97.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:23.771322 kubelet[2248]: I0213 16:14:23.771305 2248 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 16:14:23.775596 kubelet[2248]: I0213 16:14:23.775565 2248 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:14:23.775790 kubelet[2248]: W0213 16:14:23.775778 2248 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 16:14:23.776439 kubelet[2248]: I0213 16:14:23.776420 2248 server.go:1256] "Started kubelet" Feb 13 16:14:23.778058 kubelet[2248]: I0213 16:14:23.777648 2248 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:14:23.779017 kubelet[2248]: I0213 16:14:23.778599 2248 server.go:461] "Adding debug handlers to kubelet server" Feb 13 16:14:23.781262 kubelet[2248]: I0213 16:14:23.781234 2248 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:14:23.781692 kubelet[2248]: I0213 16:14:23.781678 2248 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:14:23.784606 kubelet[2248]: I0213 16:14:23.784431 2248 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:14:23.792041 kubelet[2248]: I0213 16:14:23.791999 2248 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:14:23.793556 kubelet[2248]: I0213 16:14:23.792641 2248 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 16:14:23.793556 kubelet[2248]: I0213 16:14:23.792726 2248 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 16:14:23.793556 kubelet[2248]: E0213 16:14:23.793148 2248 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.97.58:6443/api/v1/namespaces/default/events\": dial tcp 24.199.97.58:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.1-f-cf79e5d115.1823d09f513bd3e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.1-f-cf79e5d115,UID:ci-4152.2.1-f-cf79e5d115,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.1-f-cf79e5d115,},FirstTimestamp:2025-02-13 16:14:23.776396265 +0000 UTC m=+0.709857689,LastTimestamp:2025-02-13 16:14:23.776396265 +0000 UTC m=+0.709857689,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.1-f-cf79e5d115,}" Feb 13 16:14:23.794751 kubelet[2248]: I0213 16:14:23.794435 2248 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:14:23.794751 kubelet[2248]: I0213 16:14:23.794542 2248 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:14:23.796506 kubelet[2248]: W0213 16:14:23.796452 2248 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://24.199.97.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:23.796633 kubelet[2248]: E0213 16:14:23.796622 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://24.199.97.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:23.796812 kubelet[2248]: E0213 16:14:23.796795 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.97.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-f-cf79e5d115?timeout=10s\": dial tcp 24.199.97.58:6443: connect: connection refused" interval="200ms" Feb 13 16:14:23.805310 kubelet[2248]: E0213 16:14:23.805266 2248 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:14:23.805637 kubelet[2248]: I0213 16:14:23.805625 2248 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:14:23.814383 kubelet[2248]: I0213 16:14:23.814332 2248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:14:23.815944 kubelet[2248]: I0213 16:14:23.815913 2248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:14:23.816039 kubelet[2248]: I0213 16:14:23.815974 2248 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:14:23.816039 kubelet[2248]: I0213 16:14:23.815997 2248 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 16:14:23.816088 kubelet[2248]: E0213 16:14:23.816052 2248 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:14:23.824651 kubelet[2248]: W0213 16:14:23.824576 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://24.199.97.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:23.824651 kubelet[2248]: E0213 16:14:23.824646 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://24.199.97.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:23.832642 kubelet[2248]: I0213 16:14:23.832585 2248 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:14:23.832642 kubelet[2248]: I0213 16:14:23.832607 2248 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:14:23.832977 kubelet[2248]: I0213 16:14:23.832814 2248 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:14:23.835656 kubelet[2248]: I0213 16:14:23.835623 2248 policy_none.go:49] "None policy: Start" Feb 13 16:14:23.836580 kubelet[2248]: I0213 16:14:23.836551 2248 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:14:23.837115 kubelet[2248]: I0213 16:14:23.836730 2248 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:14:23.844882 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Feb 13 16:14:23.855046 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 16:14:23.865354 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 16:14:23.867806 kubelet[2248]: I0213 16:14:23.867771 2248 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:14:23.868280 kubelet[2248]: I0213 16:14:23.868047 2248 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:14:23.870067 kubelet[2248]: E0213 16:14:23.870044 2248 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.1-f-cf79e5d115\" not found" Feb 13 16:14:23.893827 kubelet[2248]: I0213 16:14:23.893780 2248 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.894190 kubelet[2248]: E0213 16:14:23.894162 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.199.97.58:6443/api/v1/nodes\": dial tcp 24.199.97.58:6443: connect: connection refused" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.916668 kubelet[2248]: I0213 16:14:23.916582 2248 topology_manager.go:215] "Topology Admit Handler" podUID="8b461baa1245b30f9081d4503f2bb375" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.919756 kubelet[2248]: I0213 16:14:23.919718 2248 topology_manager.go:215] "Topology Admit Handler" podUID="dc231fc86dfe5c609d4e76d438061361" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.920763 kubelet[2248]: I0213 16:14:23.920741 2248 topology_manager.go:215] "Topology Admit Handler" podUID="37cc79a3ab82177083c7e58b99042ccb" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.928425 systemd[1]: Created slice kubepods-burstable-pod8b461baa1245b30f9081d4503f2bb375.slice - libcontainer container kubepods-burstable-pod8b461baa1245b30f9081d4503f2bb375.slice. Feb 13 16:14:23.940437 systemd[1]: Created slice kubepods-burstable-pod37cc79a3ab82177083c7e58b99042ccb.slice - libcontainer container kubepods-burstable-pod37cc79a3ab82177083c7e58b99042ccb.slice. Feb 13 16:14:23.955493 systemd[1]: Created slice kubepods-burstable-poddc231fc86dfe5c609d4e76d438061361.slice - libcontainer container kubepods-burstable-poddc231fc86dfe5c609d4e76d438061361.slice. 
Feb 13 16:14:23.993524 kubelet[2248]: I0213 16:14:23.993266 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b461baa1245b30f9081d4503f2bb375-ca-certs\") pod \"kube-apiserver-ci-4152.2.1-f-cf79e5d115\" (UID: \"8b461baa1245b30f9081d4503f2bb375\") " pod="kube-system/kube-apiserver-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.993524 kubelet[2248]: I0213 16:14:23.993313 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc231fc86dfe5c609d4e76d438061361-ca-certs\") pod \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" (UID: \"dc231fc86dfe5c609d4e76d438061361\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.993524 kubelet[2248]: I0213 16:14:23.993336 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc231fc86dfe5c609d4e76d438061361-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" (UID: \"dc231fc86dfe5c609d4e76d438061361\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.993524 kubelet[2248]: I0213 16:14:23.993358 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc231fc86dfe5c609d4e76d438061361-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" (UID: \"dc231fc86dfe5c609d4e76d438061361\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.993524 kubelet[2248]: I0213 16:14:23.993377 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37cc79a3ab82177083c7e58b99042ccb-kubeconfig\") pod \"kube-scheduler-ci-4152.2.1-f-cf79e5d115\" (UID: \"37cc79a3ab82177083c7e58b99042ccb\") " pod="kube-system/kube-scheduler-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.993758 kubelet[2248]: I0213 16:14:23.993446 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b461baa1245b30f9081d4503f2bb375-k8s-certs\") pod \"kube-apiserver-ci-4152.2.1-f-cf79e5d115\" (UID: \"8b461baa1245b30f9081d4503f2bb375\") " pod="kube-system/kube-apiserver-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.993758 kubelet[2248]: I0213 16:14:23.993507 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b461baa1245b30f9081d4503f2bb375-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.1-f-cf79e5d115\" (UID: \"8b461baa1245b30f9081d4503f2bb375\") " pod="kube-system/kube-apiserver-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.993758 kubelet[2248]: I0213 16:14:23.993533 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc231fc86dfe5c609d4e76d438061361-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" (UID: \"dc231fc86dfe5c609d4e76d438061361\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.993758 kubelet[2248]: I0213 16:14:23.993556 2248 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc231fc86dfe5c609d4e76d438061361-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" (UID: \"dc231fc86dfe5c609d4e76d438061361\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:23.997754 kubelet[2248]: E0213 16:14:23.997715 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.97.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-f-cf79e5d115?timeout=10s\": dial tcp 24.199.97.58:6443: connect: connection refused" interval="400ms" Feb 13 16:14:24.095400 kubelet[2248]: I0213 16:14:24.095314 2248 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:24.095783 kubelet[2248]: E0213 16:14:24.095757 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.199.97.58:6443/api/v1/nodes\": dial tcp 24.199.97.58:6443: connect: connection refused" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:24.236707 kubelet[2248]: E0213 16:14:24.236665 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:24.237726 containerd[1476]: time="2025-02-13T16:14:24.237452260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.1-f-cf79e5d115,Uid:8b461baa1245b30f9081d4503f2bb375,Namespace:kube-system,Attempt:0,}" Feb 13 16:14:24.239641 systemd-resolved[1323]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Feb 13 16:14:24.254152 kubelet[2248]: E0213 16:14:24.253649 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:24.256109 containerd[1476]: time="2025-02-13T16:14:24.256049538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.1-f-cf79e5d115,Uid:37cc79a3ab82177083c7e58b99042ccb,Namespace:kube-system,Attempt:0,}" Feb 13 16:14:24.258520 kubelet[2248]: E0213 16:14:24.258311 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:24.259739 containerd[1476]: time="2025-02-13T16:14:24.259130681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.1-f-cf79e5d115,Uid:dc231fc86dfe5c609d4e76d438061361,Namespace:kube-system,Attempt:0,}" Feb 13 16:14:24.398470 kubelet[2248]: E0213 16:14:24.398401 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.97.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-f-cf79e5d115?timeout=10s\": dial tcp 24.199.97.58:6443: connect: connection refused" interval="800ms" Feb 13 16:14:24.497090 kubelet[2248]: I0213 16:14:24.497038 2248 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:24.497728 kubelet[2248]: E0213 16:14:24.497708 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.199.97.58:6443/api/v1/nodes\": dial tcp 24.199.97.58:6443: connect: connection refused" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:24.619187 kubelet[2248]: W0213 
16:14:24.618984 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://24.199.97.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:24.619187 kubelet[2248]: E0213 16:14:24.619070 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://24.199.97.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:24.682942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87846811.mount: Deactivated successfully. Feb 13 16:14:24.689591 containerd[1476]: time="2025-02-13T16:14:24.689517712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:14:24.690816 containerd[1476]: time="2025-02-13T16:14:24.690761548Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:14:24.692077 containerd[1476]: time="2025-02-13T16:14:24.692028525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:14:24.692547 containerd[1476]: time="2025-02-13T16:14:24.692502524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 16:14:24.693475 containerd[1476]: time="2025-02-13T16:14:24.693425592Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:14:24.694600 containerd[1476]: time="2025-02-13T16:14:24.694502733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:14:24.695992 containerd[1476]: time="2025-02-13T16:14:24.694735664Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:14:24.697989 containerd[1476]: time="2025-02-13T16:14:24.697896127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:14:24.699147 containerd[1476]: time="2025-02-13T16:14:24.698760831Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.588233ms" Feb 13 16:14:24.702005 containerd[1476]: time="2025-02-13T16:14:24.701015664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 441.722106ms" Feb 13 16:14:24.702930 containerd[1476]: 
time="2025-02-13T16:14:24.702895359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 465.340564ms" Feb 13 16:14:24.863445 containerd[1476]: time="2025-02-13T16:14:24.863043686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:14:24.865101 containerd[1476]: time="2025-02-13T16:14:24.864996010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:14:24.865101 containerd[1476]: time="2025-02-13T16:14:24.865035758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:24.865784 containerd[1476]: time="2025-02-13T16:14:24.865718987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:24.871008 containerd[1476]: time="2025-02-13T16:14:24.870789885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:14:24.871008 containerd[1476]: time="2025-02-13T16:14:24.870850054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:14:24.871855 containerd[1476]: time="2025-02-13T16:14:24.871770812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:24.872522 containerd[1476]: time="2025-02-13T16:14:24.872432240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:24.875509 containerd[1476]: time="2025-02-13T16:14:24.875411479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:14:24.876083 containerd[1476]: time="2025-02-13T16:14:24.875468868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:14:24.876083 containerd[1476]: time="2025-02-13T16:14:24.876048551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:24.876365 containerd[1476]: time="2025-02-13T16:14:24.876292702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:24.902207 systemd[1]: Started cri-containerd-383779e694b0346a9d38aca6a5df16fb00945ebbd23a8901253fcd1eb0678b75.scope - libcontainer container 383779e694b0346a9d38aca6a5df16fb00945ebbd23a8901253fcd1eb0678b75. Feb 13 16:14:24.918115 systemd[1]: Started cri-containerd-28597291151f437f49771d5e172e09d102634a9ea2f3ce350f154e15bc6f1ff4.scope - libcontainer container 28597291151f437f49771d5e172e09d102634a9ea2f3ce350f154e15bc6f1ff4. 
Feb 13 16:14:24.921077 kubelet[2248]: W0213 16:14:24.921007 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://24.199.97.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-f-cf79e5d115&limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:24.921221 kubelet[2248]: E0213 16:14:24.921090 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://24.199.97.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-f-cf79e5d115&limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:24.931178 systemd[1]: Started cri-containerd-bbec3e14d19e492819926bf79fc3252131365ba61f2657cfa0b4bea32f63d739.scope - libcontainer container bbec3e14d19e492819926bf79fc3252131365ba61f2657cfa0b4bea32f63d739. Feb 13 16:14:24.990754 containerd[1476]: time="2025-02-13T16:14:24.990342270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.1-f-cf79e5d115,Uid:dc231fc86dfe5c609d4e76d438061361,Namespace:kube-system,Attempt:0,} returns sandbox id \"383779e694b0346a9d38aca6a5df16fb00945ebbd23a8901253fcd1eb0678b75\"" Feb 13 16:14:24.994414 kubelet[2248]: E0213 16:14:24.993811 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:25.004232 containerd[1476]: time="2025-02-13T16:14:25.004148178Z" level=info msg="CreateContainer within sandbox \"383779e694b0346a9d38aca6a5df16fb00945ebbd23a8901253fcd1eb0678b75\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 16:14:25.024593 containerd[1476]: time="2025-02-13T16:14:25.024518586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.1-f-cf79e5d115,Uid:8b461baa1245b30f9081d4503f2bb375,Namespace:kube-system,Attempt:0,} returns sandbox id \"28597291151f437f49771d5e172e09d102634a9ea2f3ce350f154e15bc6f1ff4\"" Feb 13 16:14:25.026755 kubelet[2248]: E0213 16:14:25.026566 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:25.032010 containerd[1476]: time="2025-02-13T16:14:25.031601343Z" level=info msg="CreateContainer within sandbox \"28597291151f437f49771d5e172e09d102634a9ea2f3ce350f154e15bc6f1ff4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 16:14:25.035830 containerd[1476]: time="2025-02-13T16:14:25.035636718Z" level=info msg="CreateContainer within sandbox \"383779e694b0346a9d38aca6a5df16fb00945ebbd23a8901253fcd1eb0678b75\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ccd37ae2590f2626cc8883f96b26152598d605496975ad292fd3f4b3b5eabf8\"" Feb 13 16:14:25.037988 containerd[1476]: time="2025-02-13T16:14:25.036563711Z" level=info msg="StartContainer for \"3ccd37ae2590f2626cc8883f96b26152598d605496975ad292fd3f4b3b5eabf8\"" Feb 13 16:14:25.045162 containerd[1476]: time="2025-02-13T16:14:25.044797220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.1-f-cf79e5d115,Uid:37cc79a3ab82177083c7e58b99042ccb,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbec3e14d19e492819926bf79fc3252131365ba61f2657cfa0b4bea32f63d739\"" Feb 13 16:14:25.045344 
kubelet[2248]: W0213 16:14:25.045062 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://24.199.97.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:25.045344 kubelet[2248]: E0213 16:14:25.045112 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://24.199.97.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:25.045878 kubelet[2248]: E0213 16:14:25.045676 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:25.048374 containerd[1476]: time="2025-02-13T16:14:25.048313079Z" level=info msg="CreateContainer within sandbox \"bbec3e14d19e492819926bf79fc3252131365ba61f2657cfa0b4bea32f63d739\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 16:14:25.052318 containerd[1476]: time="2025-02-13T16:14:25.052272189Z" level=info msg="CreateContainer within sandbox \"28597291151f437f49771d5e172e09d102634a9ea2f3ce350f154e15bc6f1ff4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dfed3f0918ef1f9a2340b9224f1a688b04740601a2ffb7ff15d96ef13dc49f0e\"" Feb 13 16:14:25.054061 containerd[1476]: time="2025-02-13T16:14:25.054019840Z" level=info msg="StartContainer for \"dfed3f0918ef1f9a2340b9224f1a688b04740601a2ffb7ff15d96ef13dc49f0e\"" Feb 13 16:14:25.068846 containerd[1476]: time="2025-02-13T16:14:25.068800204Z" level=info msg="CreateContainer within sandbox \"bbec3e14d19e492819926bf79fc3252131365ba61f2657cfa0b4bea32f63d739\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8b39141e11f0141ebeb234f5b67411d580e3c9c4b723b6a3daaee72505854c1a\"" Feb 13 16:14:25.070293 containerd[1476]: time="2025-02-13T16:14:25.070252861Z" level=info msg="StartContainer for \"8b39141e11f0141ebeb234f5b67411d580e3c9c4b723b6a3daaee72505854c1a\"" Feb 13 16:14:25.091121 systemd[1]: Started cri-containerd-3ccd37ae2590f2626cc8883f96b26152598d605496975ad292fd3f4b3b5eabf8.scope - libcontainer container 3ccd37ae2590f2626cc8883f96b26152598d605496975ad292fd3f4b3b5eabf8. Feb 13 16:14:25.120170 systemd[1]: Started cri-containerd-dfed3f0918ef1f9a2340b9224f1a688b04740601a2ffb7ff15d96ef13dc49f0e.scope - libcontainer container dfed3f0918ef1f9a2340b9224f1a688b04740601a2ffb7ff15d96ef13dc49f0e. Feb 13 16:14:25.135744 kubelet[2248]: W0213 16:14:25.135507 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://24.199.97.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:25.135744 kubelet[2248]: E0213 16:14:25.135586 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://24.199.97.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.97.58:6443: connect: connection refused Feb 13 16:14:25.137190 systemd[1]: Started cri-containerd-8b39141e11f0141ebeb234f5b67411d580e3c9c4b723b6a3daaee72505854c1a.scope - libcontainer container 8b39141e11f0141ebeb234f5b67411d580e3c9c4b723b6a3daaee72505854c1a. 
Feb 13 16:14:25.199522 kubelet[2248]: E0213 16:14:25.199481 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.97.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-f-cf79e5d115?timeout=10s\": dial tcp 24.199.97.58:6443: connect: connection refused" interval="1.6s" Feb 13 16:14:25.215518 containerd[1476]: time="2025-02-13T16:14:25.215383475Z" level=info msg="StartContainer for \"3ccd37ae2590f2626cc8883f96b26152598d605496975ad292fd3f4b3b5eabf8\" returns successfully" Feb 13 16:14:25.224297 containerd[1476]: time="2025-02-13T16:14:25.224252689Z" level=info msg="StartContainer for \"dfed3f0918ef1f9a2340b9224f1a688b04740601a2ffb7ff15d96ef13dc49f0e\" returns successfully" Feb 13 16:14:25.240020 containerd[1476]: time="2025-02-13T16:14:25.239974929Z" level=info msg="StartContainer for \"8b39141e11f0141ebeb234f5b67411d580e3c9c4b723b6a3daaee72505854c1a\" returns successfully" Feb 13 16:14:25.300254 kubelet[2248]: I0213 16:14:25.299843 2248 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:25.301628 kubelet[2248]: E0213 16:14:25.301461 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.199.97.58:6443/api/v1/nodes\": dial tcp 24.199.97.58:6443: connect: connection refused" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:25.843980 kubelet[2248]: E0213 16:14:25.841464 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:25.845072 kubelet[2248]: E0213 16:14:25.844858 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:25.846678 kubelet[2248]: E0213 16:14:25.846610 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:26.851008 kubelet[2248]: E0213 16:14:26.850359 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:26.851651 kubelet[2248]: E0213 16:14:26.851389 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:26.903701 kubelet[2248]: I0213 16:14:26.903654 2248 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:27.325387 kubelet[2248]: E0213 16:14:27.325248 2248 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.1-f-cf79e5d115\" not found" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:27.383925 kubelet[2248]: I0213 16:14:27.383880 2248 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:27.773200 kubelet[2248]: I0213 16:14:27.773030 2248 apiserver.go:52] "Watching apiserver" Feb 13 16:14:27.793883 kubelet[2248]: I0213 16:14:27.793811 2248 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 16:14:27.909340 kubelet[2248]: E0213 16:14:27.909231 2248 kubelet.go:1921] "Failed creating a mirror pod for" 
err="pods \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:27.909940 kubelet[2248]: E0213 16:14:27.909848 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:30.744607 systemd[1]: Reloading requested from client PID 2519 ('systemctl') (unit session-7.scope)... Feb 13 16:14:30.745108 systemd[1]: Reloading... Feb 13 16:14:30.840988 zram_generator::config[2555]: No configuration found. Feb 13 16:14:30.991562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:14:31.094908 systemd[1]: Reloading finished in 349 ms. Feb 13 16:14:31.153753 kubelet[2248]: I0213 16:14:31.153651 2248 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:14:31.154535 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:14:31.172575 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 16:14:31.172995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:14:31.173108 systemd[1]: kubelet.service: Consumed 1.169s CPU time, 109.6M memory peak, 0B memory swap peak. Feb 13 16:14:31.177416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:14:31.329038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:14:31.344797 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:14:31.449386 kubelet[2609]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:14:31.449386 kubelet[2609]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:14:31.449386 kubelet[2609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:14:31.449386 kubelet[2609]: I0213 16:14:31.446447 2609 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:14:31.458984 kubelet[2609]: I0213 16:14:31.455349 2609 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 16:14:31.458984 kubelet[2609]: I0213 16:14:31.455384 2609 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:14:31.458984 kubelet[2609]: I0213 16:14:31.455608 2609 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 16:14:31.460275 kubelet[2609]: I0213 16:14:31.460234 2609 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 16:14:31.471571 kubelet[2609]: I0213 16:14:31.471518 2609 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:14:31.482247 kubelet[2609]: I0213 16:14:31.480944 2609 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 16:14:31.482247 kubelet[2609]: I0213 16:14:31.481219 2609 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:14:31.482247 kubelet[2609]: I0213 16:14:31.481455 2609 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:14:31.482247 kubelet[2609]: I0213 16:14:31.481485 2609 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:14:31.482247 kubelet[2609]: I0213 16:14:31.481496 2609 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:14:31.482247 kubelet[2609]: I0213 16:14:31.481536 2609 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:14:31.482629 kubelet[2609]: I0213 16:14:31.481637 2609 kubelet.go:396] "Attempting to sync node with API server" Feb 13 16:14:31.482629 kubelet[2609]: I0213 16:14:31.481651 2609 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:14:31.482629 kubelet[2609]: I0213 16:14:31.481683 2609 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:14:31.482629 kubelet[2609]: I0213 16:14:31.481704 2609 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:14:31.488005 kubelet[2609]: I0213 16:14:31.487195 2609 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 16:14:31.488005 kubelet[2609]: I0213 16:14:31.487452 2609 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:14:31.489167 kubelet[2609]: I0213 16:14:31.489122 2609 server.go:1256] "Started kubelet" Feb 13 16:14:31.497990 kubelet[2609]: I0213 16:14:31.497244 2609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:14:31.514029 kubelet[2609]: I0213 16:14:31.513927 2609 
server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:14:31.518529 kubelet[2609]: I0213 16:14:31.516339 2609 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:14:31.518529 kubelet[2609]: I0213 16:14:31.516831 2609 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:14:31.526156 kubelet[2609]: I0213 16:14:31.526110 2609 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:14:31.528988 kubelet[2609]: I0213 16:14:31.527221 2609 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 16:14:31.528988 kubelet[2609]: I0213 16:14:31.527464 2609 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 16:14:31.532975 kubelet[2609]: E0213 16:14:31.532364 2609 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:14:31.541007 kubelet[2609]: I0213 16:14:31.533431 2609 server.go:461] "Adding debug handlers to kubelet server" Feb 13 16:14:31.542524 kubelet[2609]: I0213 16:14:31.539128 2609 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:14:31.543687 kubelet[2609]: I0213 16:14:31.542671 2609 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:14:31.553151 kubelet[2609]: I0213 16:14:31.551533 2609 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:14:31.572549 sudo[2629]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 16:14:31.573316 sudo[2629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 16:14:31.588647 kubelet[2609]: I0213 16:14:31.588590 2609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:14:31.593705 kubelet[2609]: I0213 16:14:31.593657 2609 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 16:14:31.593705 kubelet[2609]: I0213 16:14:31.593710 2609 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:14:31.594254 kubelet[2609]: I0213 16:14:31.593739 2609 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 16:14:31.594254 kubelet[2609]: E0213 16:14:31.593824 2609 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:14:31.632308 kubelet[2609]: I0213 16:14:31.631900 2609 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.651416 kubelet[2609]: I0213 16:14:31.651378 2609 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.651546 kubelet[2609]: I0213 16:14:31.651461 2609 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.689924 kubelet[2609]: I0213 16:14:31.689883 2609 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:14:31.689924 kubelet[2609]: I0213 16:14:31.689920 2609 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:14:31.690127 kubelet[2609]: I0213 16:14:31.689972 2609 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:14:31.690218 kubelet[2609]: I0213 16:14:31.690198 2609 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 16:14:31.690255 kubelet[2609]: I0213 16:14:31.690243 2609 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 16:14:31.690282 kubelet[2609]: I0213 16:14:31.690259 2609 policy_none.go:49] "None policy: Start" Feb 13 16:14:31.695135 kubelet[2609]: E0213 16:14:31.695086 2609 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 16:14:31.695822 kubelet[2609]: I0213 16:14:31.695793 2609 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:14:31.695931 kubelet[2609]: I0213 16:14:31.695843 2609 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:14:31.696309 kubelet[2609]: I0213 16:14:31.696286 2609 state_mem.go:75] "Updated machine memory state" Feb 13 16:14:31.709146 kubelet[2609]: I0213 16:14:31.707737 2609 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:14:31.709146 kubelet[2609]: I0213 16:14:31.708372 2609 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:14:31.897463 kubelet[2609]: I0213 16:14:31.895991 2609 topology_manager.go:215] "Topology Admit Handler" podUID="8b461baa1245b30f9081d4503f2bb375" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.897463 kubelet[2609]: I0213 16:14:31.896114 2609 topology_manager.go:215] "Topology Admit Handler" podUID="dc231fc86dfe5c609d4e76d438061361" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.897463 kubelet[2609]: I0213 16:14:31.896163 2609 topology_manager.go:215] "Topology Admit Handler" podUID="37cc79a3ab82177083c7e58b99042ccb" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.910884 kubelet[2609]: W0213 16:14:31.910825 2609 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 16:14:31.912763 kubelet[2609]: W0213 16:14:31.912718 2609 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 16:14:31.917379 kubelet[2609]: W0213 16:14:31.917340 2609 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 16:14:31.929992 kubelet[2609]: I0213 16:14:31.929920 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b461baa1245b30f9081d4503f2bb375-ca-certs\") pod \"kube-apiserver-ci-4152.2.1-f-cf79e5d115\" (UID: \"8b461baa1245b30f9081d4503f2bb375\") " pod="kube-system/kube-apiserver-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.929992 kubelet[2609]: I0213 16:14:31.930009 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b461baa1245b30f9081d4503f2bb375-k8s-certs\") pod \"kube-apiserver-ci-4152.2.1-f-cf79e5d115\" (UID: \"8b461baa1245b30f9081d4503f2bb375\") " pod="kube-system/kube-apiserver-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.931977 kubelet[2609]: I0213 16:14:31.930427 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b461baa1245b30f9081d4503f2bb375-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.1-f-cf79e5d115\" (UID: \"8b461baa1245b30f9081d4503f2bb375\") " pod="kube-system/kube-apiserver-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.931977 kubelet[2609]: I0213 16:14:31.930488 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc231fc86dfe5c609d4e76d438061361-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" (UID: \"dc231fc86dfe5c609d4e76d438061361\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.931977 kubelet[2609]: I0213 16:14:31.930517 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc231fc86dfe5c609d4e76d438061361-ca-certs\") pod \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" (UID: \"dc231fc86dfe5c609d4e76d438061361\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.931977 kubelet[2609]: I0213 16:14:31.930555 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc231fc86dfe5c609d4e76d438061361-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" (UID: \"dc231fc86dfe5c609d4e76d438061361\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.931977 kubelet[2609]: I0213 16:14:31.930580 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc231fc86dfe5c609d4e76d438061361-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" (UID: \"dc231fc86dfe5c609d4e76d438061361\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.932218 kubelet[2609]: I0213 16:14:31.930605 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/dc231fc86dfe5c609d4e76d438061361-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" (UID: \"dc231fc86dfe5c609d4e76d438061361\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:31.932218 kubelet[2609]: I0213 16:14:31.930627 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37cc79a3ab82177083c7e58b99042ccb-kubeconfig\") pod \"kube-scheduler-ci-4152.2.1-f-cf79e5d115\" (UID: \"37cc79a3ab82177083c7e58b99042ccb\") " pod="kube-system/kube-scheduler-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:32.214006 kubelet[2609]: E0213 16:14:32.213565 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:32.214724 kubelet[2609]: E0213 16:14:32.214696 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:32.218930 kubelet[2609]: E0213 16:14:32.218887 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:32.366724 sudo[2629]: pam_unix(sudo:session): session closed for user root Feb 13 16:14:32.494158 kubelet[2609]: I0213 16:14:32.493760 2609 apiserver.go:52] "Watching apiserver" Feb 13 16:14:32.527679 kubelet[2609]: I0213 16:14:32.527591 2609 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 16:14:32.656144 kubelet[2609]: E0213 16:14:32.655914 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:32.680238 kubelet[2609]: W0213 16:14:32.679668 2609 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 16:14:32.680238 kubelet[2609]: E0213 16:14:32.679756 2609 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152.2.1-f-cf79e5d115\" already exists" pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:32.680238 kubelet[2609]: E0213 16:14:32.680164 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:32.682881 kubelet[2609]: W0213 16:14:32.682850 2609 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 16:14:32.683197 kubelet[2609]: E0213 16:14:32.683098 2609 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.1-f-cf79e5d115\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.1-f-cf79e5d115" Feb 13 16:14:32.684125 kubelet[2609]: E0213 16:14:32.684012 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:32.700092 kubelet[2609]: I0213 16:14:32.699845 2609 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.1-f-cf79e5d115" podStartSLOduration=1.699786317 podStartE2EDuration="1.699786317s" podCreationTimestamp="2025-02-13 16:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:14:32.699499449 +0000 UTC m=+1.342754893" watchObservedRunningTime="2025-02-13 16:14:32.699786317 +0000 UTC m=+1.343041754" Feb 13 16:14:32.733299 kubelet[2609]: I0213 16:14:32.733098 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.1-f-cf79e5d115" podStartSLOduration=1.733050403 podStartE2EDuration="1.733050403s" podCreationTimestamp="2025-02-13 16:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:14:32.720559984 +0000 UTC m=+1.363815413" watchObservedRunningTime="2025-02-13 16:14:32.733050403 +0000 UTC m=+1.376305826" Feb 13 16:14:32.750582 kubelet[2609]: I0213 16:14:32.750374 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.1-f-cf79e5d115" podStartSLOduration=1.750328572 podStartE2EDuration="1.750328572s" podCreationTimestamp="2025-02-13 16:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:14:32.734405874 +0000 UTC m=+1.377661305" watchObservedRunningTime="2025-02-13 16:14:32.750328572 +0000 UTC m=+1.393584001" Feb 13 16:14:33.657693 kubelet[2609]: E0213 16:14:33.655444 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:33.657693 kubelet[2609]: E0213 16:14:33.656308 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:33.666655 kubelet[2609]: E0213 16:14:33.666597 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:34.234011 sudo[1660]: pam_unix(sudo:session): session closed for user root Feb 13 16:14:34.237128 sshd[1659]: Connection closed by 139.178.89.65 port 47074 Feb 13 16:14:34.238295 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Feb 13 16:14:34.244058 systemd[1]: sshd@6-24.199.97.58:22-139.178.89.65:47074.service: Deactivated successfully. Feb 13 16:14:34.246228 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 16:14:34.246481 systemd[1]: session-7.scope: Consumed 6.355s CPU time, 183.3M memory peak, 0B memory swap peak. Feb 13 16:14:34.247187 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Feb 13 16:14:34.248867 systemd-logind[1452]: Removed session 7. 
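The recurring dns.go:153 error above comes from the kubelet honoring only the first three nameserver entries in the node's /etc/resolv.conf when it builds DNS configuration for pods (the same cap glibc historically applies), and it does not deduplicate, which is how the applied line ends up as "67.207.67.2 67.207.67.3 67.207.67.2". A minimal sketch of that truncation, assuming a three-entry cap and a hypothetical four-entry resolv.conf; this is not the kubelet's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic glibc/kubelet cap of three nameserver
// entries per resolv.conf (an assumption for this sketch).
const maxNameservers = 3

// applyNameserverLimit collects nameserver lines in order (duplicates kept,
// as in the logged "applied nameserver line") and truncates past the cap.
func applyNameserverLimit(resolvConf string) (applied []string, truncated bool) {
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			applied = append(applied, fields[1])
		}
	}
	if len(applied) > maxNameservers {
		return applied[:maxNameservers], true // this is the case that triggers the warning
	}
	return applied, false
}

func main() {
	// Hypothetical droplet resolv.conf with four entries, one duplicated.
	conf := "nameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 8.8.8.8\n"
	applied, truncated := applyNameserverLimit(conf)
	fmt.Println("applied:", strings.Join(applied, " "), "truncated:", truncated)
	// applied: 67.207.67.2 67.207.67.3 67.207.67.2 truncated: true
}
```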
Feb 13 16:14:34.658658 kubelet[2609]: E0213 16:14:34.658496 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:39.235731 kubelet[2609]: E0213 16:14:39.235185 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:39.667255 kubelet[2609]: E0213 16:14:39.667107 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:43.025373 kubelet[2609]: E0213 16:14:43.025142 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:43.309590 kubelet[2609]: E0213 16:14:43.309435 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:43.675194 kubelet[2609]: E0213 16:14:43.674548 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:44.872668 kubelet[2609]: I0213 16:14:44.872576 2609 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 16:14:44.873526 kubelet[2609]: I0213 16:14:44.873241 2609 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 16:14:44.873583 containerd[1476]: time="2025-02-13T16:14:44.873007173Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 16:14:45.502503 kubelet[2609]: I0213 16:14:45.501548 2609 topology_manager.go:215] "Topology Admit Handler" podUID="551f2793-fe96-43d6-aae3-be601caad768" podNamespace="kube-system" podName="cilium-crxfp" Feb 13 16:14:45.504995 kubelet[2609]: I0213 16:14:45.504783 2609 topology_manager.go:215] "Topology Admit Handler" podUID="799cea54-07ee-4a20-8ff4-66e92ffe9740" podNamespace="kube-system" podName="kube-proxy-nxc96" Feb 13 16:14:45.521077 systemd[1]: Created slice kubepods-burstable-pod551f2793_fe96_43d6_aae3_be601caad768.slice - libcontainer container kubepods-burstable-pod551f2793_fe96_43d6_aae3_be601caad768.slice. Feb 13 16:14:45.530972 systemd[1]: Created slice kubepods-besteffort-pod799cea54_07ee_4a20_8ff4_66e92ffe9740.slice - libcontainer container kubepods-besteffort-pod799cea54_07ee_4a20_8ff4_66e92ffe9740.slice. 
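The "Created slice kubepods-…" lines show the systemd cgroup driver creating one transient slice per pod, named from the pod's QoS class plus its UID with dashes mapped to underscores ("-" encodes the slice hierarchy, so the UID's own dashes have to be escaped). A small reconstruction of the naming rule as it appears in these entries; a sketch of the observed convention, not kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the pattern visible in the log:
// kubepods-<qos>-pod<UID with '-' mapped to '_'>.slice
func podSliceName(qosClass, podUID string) string {
	return "kubepods-" + qosClass + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(podSliceName("burstable", "551f2793-fe96-43d6-aae3-be601caad768"))
	// kubepods-burstable-pod551f2793_fe96_43d6_aae3_be601caad768.slice (cilium-crxfp)
	fmt.Println(podSliceName("besteffort", "799cea54-07ee-4a20-8ff4-66e92ffe9740"))
	// kubepods-besteffort-pod799cea54_07ee_4a20_8ff4_66e92ffe9740.slice (kube-proxy-nxc96)
}
```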
Feb 13 16:14:45.609074 kubelet[2609]: I0213 16:14:45.609025 2609 topology_manager.go:215] "Topology Admit Handler" podUID="d1b4ba2d-0739-4161-94a5-72d8d6297f0d" podNamespace="kube-system" podName="cilium-operator-5cc964979-zddmf" Feb 13 16:14:45.613701 kubelet[2609]: I0213 16:14:45.613429 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-etc-cni-netd\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.613701 kubelet[2609]: I0213 16:14:45.613468 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cni-path\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.613701 kubelet[2609]: I0213 16:14:45.613492 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-lib-modules\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.613701 kubelet[2609]: I0213 16:14:45.613511 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/799cea54-07ee-4a20-8ff4-66e92ffe9740-xtables-lock\") pod \"kube-proxy-nxc96\" (UID: \"799cea54-07ee-4a20-8ff4-66e92ffe9740\") " pod="kube-system/kube-proxy-nxc96" Feb 13 16:14:45.613701 kubelet[2609]: I0213 16:14:45.613529 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/551f2793-fe96-43d6-aae3-be601caad768-hubble-tls\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.613701 kubelet[2609]: I0213 16:14:45.613595 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/799cea54-07ee-4a20-8ff4-66e92ffe9740-kube-proxy\") pod \"kube-proxy-nxc96\" (UID: \"799cea54-07ee-4a20-8ff4-66e92ffe9740\") " pod="kube-system/kube-proxy-nxc96" Feb 13 16:14:45.614452 kubelet[2609]: I0213 16:14:45.613644 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-bpf-maps\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.614452 kubelet[2609]: I0213 16:14:45.613663 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-hostproc\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.614452 kubelet[2609]: I0213 16:14:45.613680 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cilium-cgroup\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" 
Feb 13 16:14:45.614452 kubelet[2609]: I0213 16:14:45.613703 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-host-proc-sys-net\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.614452 kubelet[2609]: I0213 16:14:45.613722 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cilium-run\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.614452 kubelet[2609]: I0213 16:14:45.613742 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/551f2793-fe96-43d6-aae3-be601caad768-clustermesh-secrets\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.614591 kubelet[2609]: I0213 16:14:45.613762 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-host-proc-sys-kernel\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.614591 kubelet[2609]: I0213 16:14:45.613781 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbwlg\" (UniqueName: \"kubernetes.io/projected/799cea54-07ee-4a20-8ff4-66e92ffe9740-kube-api-access-vbwlg\") pod \"kube-proxy-nxc96\" (UID: \"799cea54-07ee-4a20-8ff4-66e92ffe9740\") " pod="kube-system/kube-proxy-nxc96" Feb 13 16:14:45.614591 kubelet[2609]: I0213 16:14:45.613800 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwlb5\" (UniqueName: \"kubernetes.io/projected/551f2793-fe96-43d6-aae3-be601caad768-kube-api-access-jwlb5\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.614591 kubelet[2609]: I0213 16:14:45.613816 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/799cea54-07ee-4a20-8ff4-66e92ffe9740-lib-modules\") pod \"kube-proxy-nxc96\" (UID: \"799cea54-07ee-4a20-8ff4-66e92ffe9740\") " pod="kube-system/kube-proxy-nxc96" Feb 13 16:14:45.614591 kubelet[2609]: I0213 16:14:45.613835 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-xtables-lock\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.614706 kubelet[2609]: I0213 16:14:45.613851 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/551f2793-fe96-43d6-aae3-be601caad768-cilium-config-path\") pod \"cilium-crxfp\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " pod="kube-system/cilium-crxfp" Feb 13 16:14:45.619157 systemd[1]: Created slice kubepods-besteffort-podd1b4ba2d_0739_4161_94a5_72d8d6297f0d.slice - 
libcontainer container kubepods-besteffort-podd1b4ba2d_0739_4161_94a5_72d8d6297f0d.slice. Feb 13 16:14:45.715030 kubelet[2609]: I0213 16:14:45.714563 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1b4ba2d-0739-4161-94a5-72d8d6297f0d-cilium-config-path\") pod \"cilium-operator-5cc964979-zddmf\" (UID: \"d1b4ba2d-0739-4161-94a5-72d8d6297f0d\") " pod="kube-system/cilium-operator-5cc964979-zddmf" Feb 13 16:14:45.715030 kubelet[2609]: I0213 16:14:45.714654 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d87rs\" (UniqueName: \"kubernetes.io/projected/d1b4ba2d-0739-4161-94a5-72d8d6297f0d-kube-api-access-d87rs\") pod \"cilium-operator-5cc964979-zddmf\" (UID: \"d1b4ba2d-0739-4161-94a5-72d8d6297f0d\") " pod="kube-system/cilium-operator-5cc964979-zddmf" Feb 13 16:14:45.828634 kubelet[2609]: E0213 16:14:45.827551 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:45.829220 containerd[1476]: time="2025-02-13T16:14:45.829120978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-crxfp,Uid:551f2793-fe96-43d6-aae3-be601caad768,Namespace:kube-system,Attempt:0,}" Feb 13 16:14:45.841346 kubelet[2609]: E0213 16:14:45.841303 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:45.841947 containerd[1476]: time="2025-02-13T16:14:45.841800047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxc96,Uid:799cea54-07ee-4a20-8ff4-66e92ffe9740,Namespace:kube-system,Attempt:0,}" Feb 13 16:14:45.883729 containerd[1476]: time="2025-02-13T16:14:45.883511580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:14:45.883729 containerd[1476]: time="2025-02-13T16:14:45.883568866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:14:45.883729 containerd[1476]: time="2025-02-13T16:14:45.883590200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:45.883729 containerd[1476]: time="2025-02-13T16:14:45.883674600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:45.892056 containerd[1476]: time="2025-02-13T16:14:45.891513554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:14:45.892056 containerd[1476]: time="2025-02-13T16:14:45.891605496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:14:45.892056 containerd[1476]: time="2025-02-13T16:14:45.891630188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:45.892056 containerd[1476]: time="2025-02-13T16:14:45.892002307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:45.914833 systemd[1]: Started cri-containerd-aecc0eba9e96b1dcadacbd1c99720f046ef91fa11900e14248ec5fa008ecde9c.scope - libcontainer container aecc0eba9e96b1dcadacbd1c99720f046ef91fa11900e14248ec5fa008ecde9c. Feb 13 16:14:45.926608 kubelet[2609]: E0213 16:14:45.926106 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:45.928192 systemd[1]: Started cri-containerd-1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900.scope - libcontainer container 1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900. Feb 13 16:14:45.930542 containerd[1476]: time="2025-02-13T16:14:45.929628029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zddmf,Uid:d1b4ba2d-0739-4161-94a5-72d8d6297f0d,Namespace:kube-system,Attempt:0,}" Feb 13 16:14:45.969541 containerd[1476]: time="2025-02-13T16:14:45.969382954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxc96,Uid:799cea54-07ee-4a20-8ff4-66e92ffe9740,Namespace:kube-system,Attempt:0,} returns sandbox id \"aecc0eba9e96b1dcadacbd1c99720f046ef91fa11900e14248ec5fa008ecde9c\"" Feb 13 16:14:45.972051 kubelet[2609]: E0213 16:14:45.971908 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:45.981094 containerd[1476]: time="2025-02-13T16:14:45.980764885Z" level=info msg="CreateContainer within sandbox \"aecc0eba9e96b1dcadacbd1c99720f046ef91fa11900e14248ec5fa008ecde9c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 16:14:45.984305 containerd[1476]: time="2025-02-13T16:14:45.983695782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-crxfp,Uid:551f2793-fe96-43d6-aae3-be601caad768,Namespace:kube-system,Attempt:0,} returns sandbox id \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\"" Feb 13 16:14:45.987413 kubelet[2609]: E0213 16:14:45.987328 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:45.990197 containerd[1476]: time="2025-02-13T16:14:45.990160205Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 16:14:46.015132 containerd[1476]: time="2025-02-13T16:14:46.015076198Z" level=info msg="CreateContainer within sandbox \"aecc0eba9e96b1dcadacbd1c99720f046ef91fa11900e14248ec5fa008ecde9c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e1a62a30aad9ce32822aa753f224347fc8468958616fd38c581d2f663d8e5cf\"" Feb 13 16:14:46.015593 containerd[1476]: time="2025-02-13T16:14:46.007650119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:14:46.015593 containerd[1476]: time="2025-02-13T16:14:46.007724374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:14:46.015593 containerd[1476]: time="2025-02-13T16:14:46.007740334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:46.015593 containerd[1476]: time="2025-02-13T16:14:46.008919667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:14:46.018175 containerd[1476]: time="2025-02-13T16:14:46.018137601Z" level=info msg="StartContainer for \"7e1a62a30aad9ce32822aa753f224347fc8468958616fd38c581d2f663d8e5cf\"" Feb 13 16:14:46.044313 systemd[1]: Started cri-containerd-0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0.scope - libcontainer container 0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0. Feb 13 16:14:46.063196 systemd[1]: Started cri-containerd-7e1a62a30aad9ce32822aa753f224347fc8468958616fd38c581d2f663d8e5cf.scope - libcontainer container 7e1a62a30aad9ce32822aa753f224347fc8468958616fd38c581d2f663d8e5cf. Feb 13 16:14:46.112650 containerd[1476]: time="2025-02-13T16:14:46.112589600Z" level=info msg="StartContainer for \"7e1a62a30aad9ce32822aa753f224347fc8468958616fd38c581d2f663d8e5cf\" returns successfully" Feb 13 16:14:46.121627 containerd[1476]: time="2025-02-13T16:14:46.121308017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-zddmf,Uid:d1b4ba2d-0739-4161-94a5-72d8d6297f0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\"" Feb 13 16:14:46.123866 kubelet[2609]: E0213 16:14:46.123841 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:46.162374 update_engine[1455]: I20250213 16:14:46.162287 1455 update_attempter.cc:509] Updating boot flags... Feb 13 16:14:46.216320 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2849) Feb 13 16:14:46.317010 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2858) Feb 13 16:14:46.689379 kubelet[2609]: E0213 16:14:46.689305 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:46.714073 kubelet[2609]: I0213 16:14:46.713802 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nxc96" podStartSLOduration=1.713743912 podStartE2EDuration="1.713743912s" podCreationTimestamp="2025-02-13 16:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:14:46.709996916 +0000 UTC m=+15.353252350" watchObservedRunningTime="2025-02-13 16:14:46.713743912 +0000 UTC m=+15.356999340" Feb 13 16:14:51.401164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474105603.mount: Deactivated successfully. 
Feb 13 16:14:53.929741 containerd[1476]: time="2025-02-13T16:14:53.929482056Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:53.930857 containerd[1476]: time="2025-02-13T16:14:53.930601365Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 16:14:53.931652 containerd[1476]: time="2025-02-13T16:14:53.931228310Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:53.932914 containerd[1476]: time="2025-02-13T16:14:53.932879182Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.94267576s" Feb 13 16:14:53.972877 containerd[1476]: time="2025-02-13T16:14:53.932915135Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 16:14:53.974532 containerd[1476]: time="2025-02-13T16:14:53.974481011Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 16:14:53.976028 containerd[1476]: time="2025-02-13T16:14:53.975874845Z" level=info msg="CreateContainer within sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:14:54.078439 containerd[1476]: time="2025-02-13T16:14:54.078388292Z" level=info msg="CreateContainer within sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\"" Feb 13 16:14:54.079597 containerd[1476]: time="2025-02-13T16:14:54.079551056Z" level=info msg="StartContainer for \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\"" Feb 13 16:14:54.293229 systemd[1]: Started cri-containerd-4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc.scope - libcontainer container 4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc. Feb 13 16:14:54.326846 containerd[1476]: time="2025-02-13T16:14:54.326713166Z" level=info msg="StartContainer for \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\" returns successfully" Feb 13 16:14:54.341481 systemd[1]: cri-containerd-4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc.scope: Deactivated successfully. Feb 13 16:14:54.386544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc-rootfs.mount: Deactivated successfully. 
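The cilium image pull above reports 166730503 bytes read over 7.94267576s, which lines up with the gap between the PullImage request at 16:14:45.990 and this Pulled event, i.e. roughly 21 MB/s from quay.io. A quick back-of-the-envelope check, assuming "bytes read" is the compressed transfer size containerd counted:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 166730503              // "bytes read" from the containerd event
	pullTime := 7942675760 * time.Nanosecond // 7.94267576s from the Pulled message

	rate := float64(bytesRead) / pullTime.Seconds()
	fmt.Printf("~%.1f MB/s (~%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
	// ~21.0 MB/s (~20.0 MiB/s)
}
```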
Feb 13 16:14:54.465312 containerd[1476]: time="2025-02-13T16:14:54.455010683Z" level=info msg="shim disconnected" id=4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc namespace=k8s.io Feb 13 16:14:54.465312 containerd[1476]: time="2025-02-13T16:14:54.465314173Z" level=warning msg="cleaning up after shim disconnected" id=4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc namespace=k8s.io Feb 13 16:14:54.465312 containerd[1476]: time="2025-02-13T16:14:54.465331711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:14:54.729813 kubelet[2609]: E0213 16:14:54.729416 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:54.734397 containerd[1476]: time="2025-02-13T16:14:54.733403339Z" level=info msg="CreateContainer within sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:14:54.746766 containerd[1476]: time="2025-02-13T16:14:54.746629900Z" level=info msg="CreateContainer within sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\"" Feb 13 16:14:54.749523 containerd[1476]: time="2025-02-13T16:14:54.747580400Z" level=info msg="StartContainer for \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\"" Feb 13 16:14:54.786300 systemd[1]: Started cri-containerd-871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3.scope - libcontainer container 871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3. Feb 13 16:14:54.827849 containerd[1476]: time="2025-02-13T16:14:54.827763394Z" level=info msg="StartContainer for \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\" returns successfully" Feb 13 16:14:54.842067 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:14:54.842587 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:14:54.842850 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:14:54.850622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:14:54.850920 systemd[1]: cri-containerd-871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3.scope: Deactivated successfully. Feb 13 16:14:54.878170 containerd[1476]: time="2025-02-13T16:14:54.877737806Z" level=info msg="shim disconnected" id=871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3 namespace=k8s.io Feb 13 16:14:54.878367 containerd[1476]: time="2025-02-13T16:14:54.878123192Z" level=warning msg="cleaning up after shim disconnected" id=871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3 namespace=k8s.io Feb 13 16:14:54.878367 containerd[1476]: time="2025-02-13T16:14:54.878240434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:14:54.882716 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:14:55.654662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943247245.mount: Deactivated successfully. 
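The apply-sysctl-overwrites init container created and torn down above exists to adjust kernel parameters on the host before the Cilium agent starts; around the same point the host's systemd-sysctl.service is stopped and re-run. Mechanically, an init container like this just writes values under /proc/sys. A generic illustration of that mechanism only; the keys and values below are hypothetical examples, not taken from this log:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes a value under /proc/sys, e.g. key "net.ipv4.ip_forward"
// becomes /proc/sys/net/ipv4/ip_forward. Requires root and a writable /proc/sys.
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Hypothetical overrides for illustration; not the actual Cilium settings.
	overrides := map[string]string{
		"net.ipv4.ip_forward": "1",
	}
	for key, value := range overrides {
		if err := setSysctl(key, value); err != nil {
			fmt.Fprintf(os.Stderr, "sysctl %s: %v\n", key, err)
		}
	}
}
```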
Feb 13 16:14:55.735679 kubelet[2609]: E0213 16:14:55.734688 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:55.740659 containerd[1476]: time="2025-02-13T16:14:55.740345642Z" level=info msg="CreateContainer within sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 16:14:55.791581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265317061.mount: Deactivated successfully. Feb 13 16:14:55.805118 containerd[1476]: time="2025-02-13T16:14:55.805056152Z" level=info msg="CreateContainer within sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\"" Feb 13 16:14:55.807082 containerd[1476]: time="2025-02-13T16:14:55.806029082Z" level=info msg="StartContainer for \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\"" Feb 13 16:14:55.861347 systemd[1]: Started cri-containerd-5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998.scope - libcontainer container 5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998. Feb 13 16:14:55.921483 containerd[1476]: time="2025-02-13T16:14:55.921355253Z" level=info msg="StartContainer for \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\" returns successfully" Feb 13 16:14:55.922577 systemd[1]: cri-containerd-5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998.scope: Deactivated successfully. Feb 13 16:14:55.990174 containerd[1476]: time="2025-02-13T16:14:55.990082916Z" level=info msg="shim disconnected" id=5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998 namespace=k8s.io Feb 13 16:14:55.990174 containerd[1476]: time="2025-02-13T16:14:55.990160551Z" level=warning msg="cleaning up after shim disconnected" id=5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998 namespace=k8s.io Feb 13 16:14:55.990174 containerd[1476]: time="2025-02-13T16:14:55.990174453Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:14:56.443203 containerd[1476]: time="2025-02-13T16:14:56.443133456Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:56.444374 containerd[1476]: time="2025-02-13T16:14:56.444271398Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 16:14:56.446178 containerd[1476]: time="2025-02-13T16:14:56.444762552Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:14:56.448874 containerd[1476]: time="2025-02-13T16:14:56.448827378Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" 
in 2.474293816s" Feb 13 16:14:56.449053 containerd[1476]: time="2025-02-13T16:14:56.449034343Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 16:14:56.452029 containerd[1476]: time="2025-02-13T16:14:56.451995298Z" level=info msg="CreateContainer within sandbox \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 16:14:56.466909 containerd[1476]: time="2025-02-13T16:14:56.466846307Z" level=info msg="CreateContainer within sandbox \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\"" Feb 13 16:14:56.468365 containerd[1476]: time="2025-02-13T16:14:56.468287303Z" level=info msg="StartContainer for \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\"" Feb 13 16:14:56.511187 systemd[1]: Started cri-containerd-7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f.scope - libcontainer container 7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f. Feb 13 16:14:56.548509 containerd[1476]: time="2025-02-13T16:14:56.548355416Z" level=info msg="StartContainer for \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\" returns successfully" Feb 13 16:14:56.739358 kubelet[2609]: E0213 16:14:56.739218 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:56.745202 kubelet[2609]: E0213 16:14:56.745167 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:56.747924 containerd[1476]: time="2025-02-13T16:14:56.747870730Z" level=info msg="CreateContainer within sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 16:14:56.821875 containerd[1476]: time="2025-02-13T16:14:56.821573042Z" level=info msg="CreateContainer within sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\"" Feb 13 16:14:56.822421 containerd[1476]: time="2025-02-13T16:14:56.822399552Z" level=info msg="StartContainer for \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\"" Feb 13 16:14:56.881159 systemd[1]: Started cri-containerd-0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b.scope - libcontainer container 0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b. 
Feb 13 16:14:56.885768 kubelet[2609]: I0213 16:14:56.885708 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-zddmf" podStartSLOduration=1.5614757099999999 podStartE2EDuration="11.885648405s" podCreationTimestamp="2025-02-13 16:14:45 +0000 UTC" firstStartedPulling="2025-02-13 16:14:46.125124167 +0000 UTC m=+14.768379572" lastFinishedPulling="2025-02-13 16:14:56.449296832 +0000 UTC m=+25.092552267" observedRunningTime="2025-02-13 16:14:56.829554598 +0000 UTC m=+25.472810029" watchObservedRunningTime="2025-02-13 16:14:56.885648405 +0000 UTC m=+25.528903846" Feb 13 16:14:56.946424 systemd[1]: cri-containerd-0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b.scope: Deactivated successfully. Feb 13 16:14:56.948713 containerd[1476]: time="2025-02-13T16:14:56.948675085Z" level=info msg="StartContainer for \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\" returns successfully" Feb 13 16:14:57.053588 containerd[1476]: time="2025-02-13T16:14:57.053420187Z" level=info msg="shim disconnected" id=0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b namespace=k8s.io Feb 13 16:14:57.053588 containerd[1476]: time="2025-02-13T16:14:57.053490919Z" level=warning msg="cleaning up after shim disconnected" id=0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b namespace=k8s.io Feb 13 16:14:57.053588 containerd[1476]: time="2025-02-13T16:14:57.053500176Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:14:57.752741 kubelet[2609]: E0213 16:14:57.751858 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:57.752741 kubelet[2609]: E0213 16:14:57.752436 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:57.757606 containerd[1476]: time="2025-02-13T16:14:57.756611118Z" level=info msg="CreateContainer within sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 16:14:57.784535 containerd[1476]: time="2025-02-13T16:14:57.781824484Z" level=info msg="CreateContainer within sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\"" Feb 13 16:14:57.785599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249493844.mount: Deactivated successfully. Feb 13 16:14:57.788924 containerd[1476]: time="2025-02-13T16:14:57.788176495Z" level=info msg="StartContainer for \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\"" Feb 13 16:14:57.837316 systemd[1]: Started cri-containerd-a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66.scope - libcontainer container a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66. 
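The startup-latency entry for cilium-operator above shows how the tracker separates image pulls from the SLO figure: podStartE2EDuration (11.885648405s) spans pod creation to observed running, and podStartSLOduration is that span minus the pull window from firstStartedPulling to lastFinishedPulling. A small check of the arithmetic using the logged values (monotonic m=+ suffixes dropped), assuming only that relation:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // time.Time's default String() format

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the cilium-operator-5cc964979-zddmf entry above.
	e2e := 11885648405 * time.Nanosecond // podStartE2EDuration = 11.885648405s
	firstPull := mustParse("2025-02-13 16:14:46.125124167 +0000 UTC")
	lastPull := mustParse("2025-02-13 16:14:56.449296832 +0000 UTC")

	pullWindow := lastPull.Sub(firstPull)
	fmt.Println("image pull window:", pullWindow) // 10.324172665s
	fmt.Println("SLO duration:", e2e-pullWindow)  // 1.56147574s, matching the logged ~1.5614757
}
```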
Feb 13 16:14:57.880527 containerd[1476]: time="2025-02-13T16:14:57.880456023Z" level=info msg="StartContainer for \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\" returns successfully" Feb 13 16:14:58.107261 kubelet[2609]: I0213 16:14:58.107221 2609 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 16:14:58.159145 kubelet[2609]: I0213 16:14:58.158896 2609 topology_manager.go:215] "Topology Admit Handler" podUID="0d8f2a60-f36e-4182-bdfe-bf8e61081755" podNamespace="kube-system" podName="coredns-76f75df574-5fxg9" Feb 13 16:14:58.165320 kubelet[2609]: I0213 16:14:58.164078 2609 topology_manager.go:215] "Topology Admit Handler" podUID="ab26d23e-0653-4e6a-b70b-820c3adcb1a5" podNamespace="kube-system" podName="coredns-76f75df574-262cg" Feb 13 16:14:58.171807 systemd[1]: Created slice kubepods-burstable-pod0d8f2a60_f36e_4182_bdfe_bf8e61081755.slice - libcontainer container kubepods-burstable-pod0d8f2a60_f36e_4182_bdfe_bf8e61081755.slice. Feb 13 16:14:58.182668 systemd[1]: Created slice kubepods-burstable-podab26d23e_0653_4e6a_b70b_820c3adcb1a5.slice - libcontainer container kubepods-burstable-podab26d23e_0653_4e6a_b70b_820c3adcb1a5.slice. Feb 13 16:14:58.216564 kubelet[2609]: I0213 16:14:58.216517 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgcwg\" (UniqueName: \"kubernetes.io/projected/ab26d23e-0653-4e6a-b70b-820c3adcb1a5-kube-api-access-hgcwg\") pod \"coredns-76f75df574-262cg\" (UID: \"ab26d23e-0653-4e6a-b70b-820c3adcb1a5\") " pod="kube-system/coredns-76f75df574-262cg" Feb 13 16:14:58.216564 kubelet[2609]: I0213 16:14:58.216577 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm65w\" (UniqueName: \"kubernetes.io/projected/0d8f2a60-f36e-4182-bdfe-bf8e61081755-kube-api-access-fm65w\") pod \"coredns-76f75df574-5fxg9\" (UID: \"0d8f2a60-f36e-4182-bdfe-bf8e61081755\") " pod="kube-system/coredns-76f75df574-5fxg9" Feb 13 16:14:58.216795 kubelet[2609]: I0213 16:14:58.216603 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d8f2a60-f36e-4182-bdfe-bf8e61081755-config-volume\") pod \"coredns-76f75df574-5fxg9\" (UID: \"0d8f2a60-f36e-4182-bdfe-bf8e61081755\") " pod="kube-system/coredns-76f75df574-5fxg9" Feb 13 16:14:58.216795 kubelet[2609]: I0213 16:14:58.216625 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab26d23e-0653-4e6a-b70b-820c3adcb1a5-config-volume\") pod \"coredns-76f75df574-262cg\" (UID: \"ab26d23e-0653-4e6a-b70b-820c3adcb1a5\") " pod="kube-system/coredns-76f75df574-262cg" Feb 13 16:14:58.481139 kubelet[2609]: E0213 16:14:58.480821 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:58.484016 containerd[1476]: time="2025-02-13T16:14:58.483135173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5fxg9,Uid:0d8f2a60-f36e-4182-bdfe-bf8e61081755,Namespace:kube-system,Attempt:0,}" Feb 13 16:14:58.486752 kubelet[2609]: E0213 16:14:58.486712 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Feb 13 16:14:58.490511 containerd[1476]: time="2025-02-13T16:14:58.490340113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-262cg,Uid:ab26d23e-0653-4e6a-b70b-820c3adcb1a5,Namespace:kube-system,Attempt:0,}" Feb 13 16:14:58.757876 kubelet[2609]: E0213 16:14:58.757733 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:14:59.760761 kubelet[2609]: E0213 16:14:59.760727 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:00.225483 systemd-networkd[1387]: cilium_host: Link UP Feb 13 16:15:00.225705 systemd-networkd[1387]: cilium_net: Link UP Feb 13 16:15:00.225925 systemd-networkd[1387]: cilium_net: Gained carrier Feb 13 16:15:00.226149 systemd-networkd[1387]: cilium_host: Gained carrier Feb 13 16:15:00.388773 systemd-networkd[1387]: cilium_vxlan: Link UP Feb 13 16:15:00.388785 systemd-networkd[1387]: cilium_vxlan: Gained carrier Feb 13 16:15:00.763424 kubelet[2609]: E0213 16:15:00.763224 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:00.787069 kernel: NET: Registered PF_ALG protocol family Feb 13 16:15:01.005202 systemd-networkd[1387]: cilium_net: Gained IPv6LL Feb 13 16:15:01.261219 systemd-networkd[1387]: cilium_host: Gained IPv6LL Feb 13 16:15:01.581249 systemd-networkd[1387]: cilium_vxlan: Gained IPv6LL Feb 13 16:15:02.015117 systemd-networkd[1387]: lxc_health: Link UP Feb 13 16:15:02.026237 systemd-networkd[1387]: lxc_health: Gained carrier Feb 13 16:15:02.591146 systemd-networkd[1387]: lxc22e48321660d: Link UP Feb 13 16:15:02.598571 kernel: eth0: renamed from tmp7bdcf Feb 13 16:15:02.606044 systemd-networkd[1387]: lxc22e48321660d: Gained carrier Feb 13 16:15:02.620929 systemd-networkd[1387]: lxc5d8ea116e5e4: Link UP Feb 13 16:15:02.625351 kernel: eth0: renamed from tmpc32b9 Feb 13 16:15:02.637350 systemd-networkd[1387]: lxc5d8ea116e5e4: Gained carrier Feb 13 16:15:03.757254 systemd-networkd[1387]: lxc_health: Gained IPv6LL Feb 13 16:15:03.832290 kubelet[2609]: E0213 16:15:03.832121 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:03.879740 kubelet[2609]: I0213 16:15:03.878789 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-crxfp" podStartSLOduration=10.894066483 podStartE2EDuration="18.878704307s" podCreationTimestamp="2025-02-13 16:14:45 +0000 UTC" firstStartedPulling="2025-02-13 16:14:45.98887935 +0000 UTC m=+14.632134762" lastFinishedPulling="2025-02-13 16:14:53.973517178 +0000 UTC m=+22.616772586" observedRunningTime="2025-02-13 16:14:58.778934035 +0000 UTC m=+27.422189465" watchObservedRunningTime="2025-02-13 16:15:03.878704307 +0000 UTC m=+32.521959755" Feb 13 16:15:04.079111 systemd-networkd[1387]: lxc5d8ea116e5e4: Gained IPv6LL Feb 13 16:15:04.397334 systemd-networkd[1387]: lxc22e48321660d: Gained IPv6LL Feb 13 16:15:04.776871 kubelet[2609]: E0213 16:15:04.776686 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:05.778651 kubelet[2609]: E0213 16:15:05.778605 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:07.695504 containerd[1476]: time="2025-02-13T16:15:07.695186831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:15:07.695504 containerd[1476]: time="2025-02-13T16:15:07.695303161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:15:07.695504 containerd[1476]: time="2025-02-13T16:15:07.695326675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:15:07.699418 containerd[1476]: time="2025-02-13T16:15:07.695490804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:15:07.712132 containerd[1476]: time="2025-02-13T16:15:07.709907730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:15:07.712132 containerd[1476]: time="2025-02-13T16:15:07.710028746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:15:07.712132 containerd[1476]: time="2025-02-13T16:15:07.710049768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:15:07.712132 containerd[1476]: time="2025-02-13T16:15:07.710145636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:15:07.769235 systemd[1]: Started cri-containerd-7bdcfb8c7d1b42577f2b45881028f5badb77f591d631f3b754505ea17e730fc2.scope - libcontainer container 7bdcfb8c7d1b42577f2b45881028f5badb77f591d631f3b754505ea17e730fc2. Feb 13 16:15:07.770918 systemd[1]: Started cri-containerd-c32b998cf88827688cb6dc2a71b7e4576874e2bb5ca38c936010c54532da6685.scope - libcontainer container c32b998cf88827688cb6dc2a71b7e4576874e2bb5ca38c936010c54532da6685. 
Feb 13 16:15:07.885340 containerd[1476]: time="2025-02-13T16:15:07.885163497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5fxg9,Uid:0d8f2a60-f36e-4182-bdfe-bf8e61081755,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bdcfb8c7d1b42577f2b45881028f5badb77f591d631f3b754505ea17e730fc2\"" Feb 13 16:15:07.888691 kubelet[2609]: E0213 16:15:07.888049 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:07.896640 containerd[1476]: time="2025-02-13T16:15:07.896025809Z" level=info msg="CreateContainer within sandbox \"7bdcfb8c7d1b42577f2b45881028f5badb77f591d631f3b754505ea17e730fc2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:15:07.910721 containerd[1476]: time="2025-02-13T16:15:07.910631496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-262cg,Uid:ab26d23e-0653-4e6a-b70b-820c3adcb1a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c32b998cf88827688cb6dc2a71b7e4576874e2bb5ca38c936010c54532da6685\"" Feb 13 16:15:07.913866 kubelet[2609]: E0213 16:15:07.913213 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:07.919919 containerd[1476]: time="2025-02-13T16:15:07.919565294Z" level=info msg="CreateContainer within sandbox \"c32b998cf88827688cb6dc2a71b7e4576874e2bb5ca38c936010c54532da6685\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:15:07.963394 containerd[1476]: time="2025-02-13T16:15:07.962299541Z" level=info msg="CreateContainer within sandbox \"7bdcfb8c7d1b42577f2b45881028f5badb77f591d631f3b754505ea17e730fc2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f509c49672ac2b5eb01afc409be61d7347e715c9857c0778423f8ca9ae31f7c\"" Feb 13 16:15:07.965771 containerd[1476]: time="2025-02-13T16:15:07.964567050Z" level=info msg="StartContainer for \"8f509c49672ac2b5eb01afc409be61d7347e715c9857c0778423f8ca9ae31f7c\"" Feb 13 16:15:07.969371 containerd[1476]: time="2025-02-13T16:15:07.969309169Z" level=info msg="CreateContainer within sandbox \"c32b998cf88827688cb6dc2a71b7e4576874e2bb5ca38c936010c54532da6685\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36f579efef1f2e1b80d781b56b66888992468298d0d7d20bcbc8ef76e112e561\"" Feb 13 16:15:07.970905 containerd[1476]: time="2025-02-13T16:15:07.970854789Z" level=info msg="StartContainer for \"36f579efef1f2e1b80d781b56b66888992468298d0d7d20bcbc8ef76e112e561\"" Feb 13 16:15:08.018323 systemd[1]: Started cri-containerd-8f509c49672ac2b5eb01afc409be61d7347e715c9857c0778423f8ca9ae31f7c.scope - libcontainer container 8f509c49672ac2b5eb01afc409be61d7347e715c9857c0778423f8ca9ae31f7c. Feb 13 16:15:08.038238 systemd[1]: Started cri-containerd-36f579efef1f2e1b80d781b56b66888992468298d0d7d20bcbc8ef76e112e561.scope - libcontainer container 36f579efef1f2e1b80d781b56b66888992468298d0d7d20bcbc8ef76e112e561. 
Feb 13 16:15:08.079698 containerd[1476]: time="2025-02-13T16:15:08.079637185Z" level=info msg="StartContainer for \"8f509c49672ac2b5eb01afc409be61d7347e715c9857c0778423f8ca9ae31f7c\" returns successfully" Feb 13 16:15:08.102786 containerd[1476]: time="2025-02-13T16:15:08.102713914Z" level=info msg="StartContainer for \"36f579efef1f2e1b80d781b56b66888992468298d0d7d20bcbc8ef76e112e561\" returns successfully" Feb 13 16:15:08.715655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640046275.mount: Deactivated successfully. Feb 13 16:15:08.790037 kubelet[2609]: E0213 16:15:08.789735 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:08.798289 kubelet[2609]: E0213 16:15:08.798251 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:08.842170 kubelet[2609]: I0213 16:15:08.842054 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-262cg" podStartSLOduration=23.84176287 podStartE2EDuration="23.84176287s" podCreationTimestamp="2025-02-13 16:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:15:08.812353479 +0000 UTC m=+37.455608912" watchObservedRunningTime="2025-02-13 16:15:08.84176287 +0000 UTC m=+37.485018277" Feb 13 16:15:08.843417 kubelet[2609]: I0213 16:15:08.843270 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5fxg9" podStartSLOduration=23.843105505 podStartE2EDuration="23.843105505s" podCreationTimestamp="2025-02-13 16:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:15:08.842288975 +0000 UTC m=+37.485544381" watchObservedRunningTime="2025-02-13 16:15:08.843105505 +0000 UTC m=+37.486360935" Feb 13 16:15:09.800559 kubelet[2609]: E0213 16:15:09.800244 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:09.800559 kubelet[2609]: E0213 16:15:09.800476 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:10.802774 kubelet[2609]: E0213 16:15:10.802130 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:10.802774 kubelet[2609]: E0213 16:15:10.802587 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:23.849419 systemd[1]: Started sshd@8-24.199.97.58:22-139.178.89.65:52094.service - OpenSSH per-connection server daemon (139.178.89.65:52094). 
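The m=+ suffixes on the kubelet's timestamps (for example m=+37.485544381 in the coredns startup entries above) are Go's monotonic clock reading in seconds, so wall time minus that offset points back to roughly when this kubelet process (PID 2609) started; the two coredns entries both land within tens of nanoseconds of 16:14:31.35674. A small demonstration using the two logged observedRunningTime values:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // time.Time's default String() format

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// observedRunningTime for the two coredns pods: wall clock plus their m=+ offsets.
	samples := []struct {
		wall   string
		offset time.Duration
	}{
		{"2025-02-13 16:15:08.812353479 +0000 UTC", 37455608912 * time.Nanosecond}, // m=+37.455608912
		{"2025-02-13 16:15:08.842288975 +0000 UTC", 37485544381 * time.Nanosecond}, // m=+37.485544381
	}
	for _, s := range samples {
		// Wall clock minus monotonic offset ≈ kubelet process start time.
		fmt.Println("kubelet started ≈", mustParse(s.wall).Add(-s.offset))
	}
	// Both print ≈ 2025-02-13 16:14:31.35674 +0000 UTC.
}
```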
Feb 13 16:15:23.927106 sshd[3982]: Accepted publickey for core from 139.178.89.65 port 52094 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:23.929477 sshd-session[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:23.939525 systemd-logind[1452]: New session 8 of user core. Feb 13 16:15:23.942657 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 16:15:24.492889 sshd[3984]: Connection closed by 139.178.89.65 port 52094 Feb 13 16:15:24.493799 sshd-session[3982]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:24.499884 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Feb 13 16:15:24.500215 systemd[1]: sshd@8-24.199.97.58:22-139.178.89.65:52094.service: Deactivated successfully. Feb 13 16:15:24.503032 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 16:15:24.506152 systemd-logind[1452]: Removed session 8. Feb 13 16:15:29.510329 systemd[1]: Started sshd@9-24.199.97.58:22-139.178.89.65:33442.service - OpenSSH per-connection server daemon (139.178.89.65:33442). Feb 13 16:15:29.576438 sshd[3996]: Accepted publickey for core from 139.178.89.65 port 33442 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:29.578242 sshd-session[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:29.584040 systemd-logind[1452]: New session 9 of user core. Feb 13 16:15:29.594295 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 16:15:29.737275 sshd[3998]: Connection closed by 139.178.89.65 port 33442 Feb 13 16:15:29.737919 sshd-session[3996]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:29.741659 systemd[1]: sshd@9-24.199.97.58:22-139.178.89.65:33442.service: Deactivated successfully. Feb 13 16:15:29.744018 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 16:15:29.745945 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Feb 13 16:15:29.747014 systemd-logind[1452]: Removed session 9. Feb 13 16:15:34.757453 systemd[1]: Started sshd@10-24.199.97.58:22-139.178.89.65:50802.service - OpenSSH per-connection server daemon (139.178.89.65:50802). Feb 13 16:15:34.811837 sshd[4012]: Accepted publickey for core from 139.178.89.65 port 50802 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:34.813454 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:34.818803 systemd-logind[1452]: New session 10 of user core. Feb 13 16:15:34.827357 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 16:15:34.972529 sshd[4014]: Connection closed by 139.178.89.65 port 50802 Feb 13 16:15:34.973584 sshd-session[4012]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:34.976751 systemd[1]: sshd@10-24.199.97.58:22-139.178.89.65:50802.service: Deactivated successfully. Feb 13 16:15:34.979735 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 16:15:34.982309 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Feb 13 16:15:34.983463 systemd-logind[1452]: Removed session 10. Feb 13 16:15:39.991390 systemd[1]: Started sshd@11-24.199.97.58:22-139.178.89.65:50808.service - OpenSSH per-connection server daemon (139.178.89.65:50808). 
Feb 13 16:15:40.051480 sshd[4025]: Accepted publickey for core from 139.178.89.65 port 50808 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:40.053321 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:40.059674 systemd-logind[1452]: New session 11 of user core. Feb 13 16:15:40.068213 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 16:15:40.212813 sshd[4027]: Connection closed by 139.178.89.65 port 50808 Feb 13 16:15:40.213985 sshd-session[4025]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:40.218141 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Feb 13 16:15:40.219602 systemd[1]: sshd@11-24.199.97.58:22-139.178.89.65:50808.service: Deactivated successfully. Feb 13 16:15:40.223505 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 16:15:40.226207 systemd-logind[1452]: Removed session 11. Feb 13 16:15:45.231409 systemd[1]: Started sshd@12-24.199.97.58:22-139.178.89.65:53726.service - OpenSSH per-connection server daemon (139.178.89.65:53726). Feb 13 16:15:45.288645 sshd[4039]: Accepted publickey for core from 139.178.89.65 port 53726 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:45.292469 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:45.299768 systemd-logind[1452]: New session 12 of user core. Feb 13 16:15:45.302186 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 16:15:45.434731 sshd[4041]: Connection closed by 139.178.89.65 port 53726 Feb 13 16:15:45.436577 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:45.445605 systemd[1]: sshd@12-24.199.97.58:22-139.178.89.65:53726.service: Deactivated successfully. Feb 13 16:15:45.448128 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 16:15:45.450037 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Feb 13 16:15:45.455411 systemd[1]: Started sshd@13-24.199.97.58:22-139.178.89.65:53732.service - OpenSSH per-connection server daemon (139.178.89.65:53732). Feb 13 16:15:45.457185 systemd-logind[1452]: Removed session 12. Feb 13 16:15:45.510053 sshd[4053]: Accepted publickey for core from 139.178.89.65 port 53732 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:45.511488 sshd-session[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:45.516935 systemd-logind[1452]: New session 13 of user core. Feb 13 16:15:45.521148 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 16:15:45.723076 sshd[4055]: Connection closed by 139.178.89.65 port 53732 Feb 13 16:15:45.723936 sshd-session[4053]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:45.737558 systemd[1]: sshd@13-24.199.97.58:22-139.178.89.65:53732.service: Deactivated successfully. Feb 13 16:15:45.747092 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 16:15:45.750120 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Feb 13 16:15:45.760404 systemd[1]: Started sshd@14-24.199.97.58:22-139.178.89.65:53746.service - OpenSSH per-connection server daemon (139.178.89.65:53746). Feb 13 16:15:45.766797 systemd-logind[1452]: Removed session 13. 
Feb 13 16:15:45.829023 sshd[4064]: Accepted publickey for core from 139.178.89.65 port 53746 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:45.830647 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:45.836726 systemd-logind[1452]: New session 14 of user core. Feb 13 16:15:45.847383 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 16:15:46.001784 sshd[4066]: Connection closed by 139.178.89.65 port 53746 Feb 13 16:15:46.002571 sshd-session[4064]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:46.007132 systemd[1]: sshd@14-24.199.97.58:22-139.178.89.65:53746.service: Deactivated successfully. Feb 13 16:15:46.009932 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 16:15:46.011336 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Feb 13 16:15:46.013776 systemd-logind[1452]: Removed session 14. Feb 13 16:15:51.020377 systemd[1]: Started sshd@15-24.199.97.58:22-139.178.89.65:53760.service - OpenSSH per-connection server daemon (139.178.89.65:53760). Feb 13 16:15:51.072110 sshd[4080]: Accepted publickey for core from 139.178.89.65 port 53760 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:51.073664 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:51.079431 systemd-logind[1452]: New session 15 of user core. Feb 13 16:15:51.084223 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 16:15:51.229251 sshd[4082]: Connection closed by 139.178.89.65 port 53760 Feb 13 16:15:51.230057 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:51.234198 systemd[1]: sshd@15-24.199.97.58:22-139.178.89.65:53760.service: Deactivated successfully. Feb 13 16:15:51.236244 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 16:15:51.237251 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Feb 13 16:15:51.238106 systemd-logind[1452]: Removed session 15. Feb 13 16:15:52.594994 kubelet[2609]: E0213 16:15:52.594846 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:53.596390 kubelet[2609]: E0213 16:15:53.595500 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:56.249555 systemd[1]: Started sshd@16-24.199.97.58:22-139.178.89.65:53298.service - OpenSSH per-connection server daemon (139.178.89.65:53298). Feb 13 16:15:56.310059 sshd[4095]: Accepted publickey for core from 139.178.89.65 port 53298 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:56.311813 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:56.317213 systemd-logind[1452]: New session 16 of user core. Feb 13 16:15:56.325255 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 16:15:56.462229 sshd[4097]: Connection closed by 139.178.89.65 port 53298 Feb 13 16:15:56.463909 sshd-session[4095]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:56.473102 systemd[1]: sshd@16-24.199.97.58:22-139.178.89.65:53298.service: Deactivated successfully. Feb 13 16:15:56.475473 systemd[1]: session-16.scope: Deactivated successfully. 
Feb 13 16:15:56.477582 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Feb 13 16:15:56.483531 systemd[1]: Started sshd@17-24.199.97.58:22-139.178.89.65:53308.service - OpenSSH per-connection server daemon (139.178.89.65:53308). Feb 13 16:15:56.486183 systemd-logind[1452]: Removed session 16. Feb 13 16:15:56.542396 sshd[4108]: Accepted publickey for core from 139.178.89.65 port 53308 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:56.544074 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:56.550154 systemd-logind[1452]: New session 17 of user core. Feb 13 16:15:56.556273 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 16:15:56.803371 sshd[4110]: Connection closed by 139.178.89.65 port 53308 Feb 13 16:15:56.805204 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:56.810807 systemd[1]: sshd@17-24.199.97.58:22-139.178.89.65:53308.service: Deactivated successfully. Feb 13 16:15:56.813533 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 16:15:56.814538 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Feb 13 16:15:56.821403 systemd[1]: Started sshd@18-24.199.97.58:22-139.178.89.65:53318.service - OpenSSH per-connection server daemon (139.178.89.65:53318). Feb 13 16:15:56.823762 systemd-logind[1452]: Removed session 17. Feb 13 16:15:56.913588 sshd[4119]: Accepted publickey for core from 139.178.89.65 port 53318 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:56.918238 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:56.924107 systemd-logind[1452]: New session 18 of user core. Feb 13 16:15:56.935274 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 16:15:57.597485 kubelet[2609]: E0213 16:15:57.597198 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:15:58.796910 sshd[4121]: Connection closed by 139.178.89.65 port 53318 Feb 13 16:15:58.798183 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:58.812656 systemd[1]: Started sshd@19-24.199.97.58:22-139.178.89.65:53332.service - OpenSSH per-connection server daemon (139.178.89.65:53332). Feb 13 16:15:58.813578 systemd[1]: sshd@18-24.199.97.58:22-139.178.89.65:53318.service: Deactivated successfully. Feb 13 16:15:58.816781 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 16:15:58.819513 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Feb 13 16:15:58.821533 systemd-logind[1452]: Removed session 18. Feb 13 16:15:58.888560 sshd[4135]: Accepted publickey for core from 139.178.89.65 port 53332 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:58.890512 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:58.897422 systemd-logind[1452]: New session 19 of user core. Feb 13 16:15:58.906313 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 16:15:59.268487 sshd[4139]: Connection closed by 139.178.89.65 port 53332 Feb 13 16:15:59.269601 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:59.279284 systemd[1]: sshd@19-24.199.97.58:22-139.178.89.65:53332.service: Deactivated successfully. 
Feb 13 16:15:59.283558 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 16:15:59.287109 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Feb 13 16:15:59.296117 systemd[1]: Started sshd@20-24.199.97.58:22-139.178.89.65:53346.service - OpenSSH per-connection server daemon (139.178.89.65:53346). Feb 13 16:15:59.297644 systemd-logind[1452]: Removed session 19. Feb 13 16:15:59.347788 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 53346 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:15:59.349893 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:15:59.356215 systemd-logind[1452]: New session 20 of user core. Feb 13 16:15:59.365274 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 16:15:59.500531 sshd[4151]: Connection closed by 139.178.89.65 port 53346 Feb 13 16:15:59.501241 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Feb 13 16:15:59.505099 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Feb 13 16:15:59.505796 systemd[1]: sshd@20-24.199.97.58:22-139.178.89.65:53346.service: Deactivated successfully. Feb 13 16:15:59.507799 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 16:15:59.508786 systemd-logind[1452]: Removed session 20. Feb 13 16:16:04.536555 systemd[1]: Started sshd@21-24.199.97.58:22-139.178.89.65:38890.service - OpenSSH per-connection server daemon (139.178.89.65:38890). Feb 13 16:16:04.628182 sshd[4163]: Accepted publickey for core from 139.178.89.65 port 38890 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:04.632012 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:04.641289 systemd-logind[1452]: New session 21 of user core. Feb 13 16:16:04.649273 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 16:16:04.844539 sshd[4165]: Connection closed by 139.178.89.65 port 38890 Feb 13 16:16:04.843804 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:04.850481 systemd[1]: sshd@21-24.199.97.58:22-139.178.89.65:38890.service: Deactivated successfully. Feb 13 16:16:04.855938 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 16:16:04.858664 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Feb 13 16:16:04.860648 systemd-logind[1452]: Removed session 21. Feb 13 16:16:06.598811 kubelet[2609]: E0213 16:16:06.597904 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:08.595624 kubelet[2609]: E0213 16:16:08.595571 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:09.871473 systemd[1]: Started sshd@22-24.199.97.58:22-139.178.89.65:38900.service - OpenSSH per-connection server daemon (139.178.89.65:38900). Feb 13 16:16:09.924706 sshd[4179]: Accepted publickey for core from 139.178.89.65 port 38900 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:09.925431 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:09.931636 systemd-logind[1452]: New session 22 of user core. 
Feb 13 16:16:09.937250 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 16:16:10.105835 sshd[4181]: Connection closed by 139.178.89.65 port 38900 Feb 13 16:16:10.106684 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:10.112515 systemd[1]: sshd@22-24.199.97.58:22-139.178.89.65:38900.service: Deactivated successfully. Feb 13 16:16:10.114633 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 16:16:10.115944 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Feb 13 16:16:10.117033 systemd-logind[1452]: Removed session 22. Feb 13 16:16:11.597010 kubelet[2609]: E0213 16:16:11.596316 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:15.128222 systemd[1]: Started sshd@23-24.199.97.58:22-139.178.89.65:56262.service - OpenSSH per-connection server daemon (139.178.89.65:56262). Feb 13 16:16:15.191762 sshd[4192]: Accepted publickey for core from 139.178.89.65 port 56262 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:15.193983 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:15.201089 systemd-logind[1452]: New session 23 of user core. Feb 13 16:16:15.208338 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 16:16:15.381682 sshd[4194]: Connection closed by 139.178.89.65 port 56262 Feb 13 16:16:15.382913 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:15.388154 systemd[1]: sshd@23-24.199.97.58:22-139.178.89.65:56262.service: Deactivated successfully. Feb 13 16:16:15.391354 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 16:16:15.393708 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit. Feb 13 16:16:15.394855 systemd-logind[1452]: Removed session 23. Feb 13 16:16:20.400358 systemd[1]: Started sshd@24-24.199.97.58:22-139.178.89.65:56276.service - OpenSSH per-connection server daemon (139.178.89.65:56276). Feb 13 16:16:20.478372 sshd[4208]: Accepted publickey for core from 139.178.89.65 port 56276 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:20.480361 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:20.486769 systemd-logind[1452]: New session 24 of user core. Feb 13 16:16:20.499473 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 16:16:20.640478 sshd[4210]: Connection closed by 139.178.89.65 port 56276 Feb 13 16:16:20.641356 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:20.652168 systemd[1]: sshd@24-24.199.97.58:22-139.178.89.65:56276.service: Deactivated successfully. Feb 13 16:16:20.656570 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 16:16:20.659733 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit. Feb 13 16:16:20.666454 systemd[1]: Started sshd@25-24.199.97.58:22-139.178.89.65:56286.service - OpenSSH per-connection server daemon (139.178.89.65:56286). Feb 13 16:16:20.669620 systemd-logind[1452]: Removed session 24. 
Feb 13 16:16:20.720512 sshd[4221]: Accepted publickey for core from 139.178.89.65 port 56286 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:20.722199 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:20.729198 systemd-logind[1452]: New session 25 of user core. Feb 13 16:16:20.735386 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 16:16:22.419500 containerd[1476]: time="2025-02-13T16:16:22.418635452Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:16:22.431024 containerd[1476]: time="2025-02-13T16:16:22.430223040Z" level=info msg="StopContainer for \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\" with timeout 30 (s)" Feb 13 16:16:22.433434 containerd[1476]: time="2025-02-13T16:16:22.433254811Z" level=info msg="Stop container \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\" with signal terminated" Feb 13 16:16:22.434294 containerd[1476]: time="2025-02-13T16:16:22.434260903Z" level=info msg="StopContainer for \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\" with timeout 2 (s)" Feb 13 16:16:22.434693 containerd[1476]: time="2025-02-13T16:16:22.434594131Z" level=info msg="Stop container \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\" with signal terminated" Feb 13 16:16:22.454327 systemd-networkd[1387]: lxc_health: Link DOWN Feb 13 16:16:22.455891 systemd-networkd[1387]: lxc_health: Lost carrier Feb 13 16:16:22.483575 systemd[1]: cri-containerd-7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f.scope: Deactivated successfully. Feb 13 16:16:22.499785 systemd[1]: cri-containerd-a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66.scope: Deactivated successfully. Feb 13 16:16:22.501636 systemd[1]: cri-containerd-a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66.scope: Consumed 9.365s CPU time. Feb 13 16:16:22.530925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f-rootfs.mount: Deactivated successfully. Feb 13 16:16:22.547589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66-rootfs.mount: Deactivated successfully. 
Feb 13 16:16:22.549509 containerd[1476]: time="2025-02-13T16:16:22.547399398Z" level=info msg="shim disconnected" id=7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f namespace=k8s.io Feb 13 16:16:22.550181 containerd[1476]: time="2025-02-13T16:16:22.549795225Z" level=warning msg="cleaning up after shim disconnected" id=7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f namespace=k8s.io Feb 13 16:16:22.550181 containerd[1476]: time="2025-02-13T16:16:22.549902311Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:22.558373 containerd[1476]: time="2025-02-13T16:16:22.558010165Z" level=info msg="shim disconnected" id=a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66 namespace=k8s.io Feb 13 16:16:22.558373 containerd[1476]: time="2025-02-13T16:16:22.558372076Z" level=warning msg="cleaning up after shim disconnected" id=a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66 namespace=k8s.io Feb 13 16:16:22.558680 containerd[1476]: time="2025-02-13T16:16:22.558395405Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:22.590810 containerd[1476]: time="2025-02-13T16:16:22.590548715Z" level=info msg="StopContainer for \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\" returns successfully" Feb 13 16:16:22.603818 containerd[1476]: time="2025-02-13T16:16:22.602940960Z" level=info msg="StopContainer for \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\" returns successfully" Feb 13 16:16:22.604414 containerd[1476]: time="2025-02-13T16:16:22.604378827Z" level=info msg="StopPodSandbox for \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\"" Feb 13 16:16:22.606325 containerd[1476]: time="2025-02-13T16:16:22.604492540Z" level=info msg="StopPodSandbox for \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\"" Feb 13 16:16:22.606628 containerd[1476]: time="2025-02-13T16:16:22.606513872Z" level=info msg="Container to stop \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:16:22.606838 containerd[1476]: time="2025-02-13T16:16:22.606638597Z" level=info msg="Container to stop \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:16:22.606914 containerd[1476]: time="2025-02-13T16:16:22.606846924Z" level=info msg="Container to stop \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:16:22.607007 containerd[1476]: time="2025-02-13T16:16:22.606940505Z" level=info msg="Container to stop \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:16:22.607062 containerd[1476]: time="2025-02-13T16:16:22.607021887Z" level=info msg="Container to stop \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:16:22.607728 containerd[1476]: time="2025-02-13T16:16:22.606516732Z" level=info msg="Container to stop \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:16:22.609873 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0-shm.mount: Deactivated successfully. Feb 13 16:16:22.610294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900-shm.mount: Deactivated successfully. Feb 13 16:16:22.623984 systemd[1]: cri-containerd-1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900.scope: Deactivated successfully. Feb 13 16:16:22.626921 systemd[1]: cri-containerd-0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0.scope: Deactivated successfully. Feb 13 16:16:22.667816 containerd[1476]: time="2025-02-13T16:16:22.667716921Z" level=info msg="shim disconnected" id=1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900 namespace=k8s.io Feb 13 16:16:22.668102 containerd[1476]: time="2025-02-13T16:16:22.668025554Z" level=warning msg="cleaning up after shim disconnected" id=1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900 namespace=k8s.io Feb 13 16:16:22.668102 containerd[1476]: time="2025-02-13T16:16:22.668041951Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:22.680930 containerd[1476]: time="2025-02-13T16:16:22.680734686Z" level=info msg="shim disconnected" id=0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0 namespace=k8s.io Feb 13 16:16:22.680930 containerd[1476]: time="2025-02-13T16:16:22.680814998Z" level=warning msg="cleaning up after shim disconnected" id=0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0 namespace=k8s.io Feb 13 16:16:22.680930 containerd[1476]: time="2025-02-13T16:16:22.680825864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:22.703204 containerd[1476]: time="2025-02-13T16:16:22.703119745Z" level=info msg="TearDown network for sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" successfully" Feb 13 16:16:22.703204 containerd[1476]: time="2025-02-13T16:16:22.703186880Z" level=info msg="StopPodSandbox for \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" returns successfully" Feb 13 16:16:22.725272 containerd[1476]: time="2025-02-13T16:16:22.725220445Z" level=info msg="TearDown network for sandbox \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\" successfully" Feb 13 16:16:22.725272 containerd[1476]: time="2025-02-13T16:16:22.725259986Z" level=info msg="StopPodSandbox for \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\" returns successfully" Feb 13 16:16:22.754101 kubelet[2609]: I0213 16:16:22.753307 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-etc-cni-netd\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.754101 kubelet[2609]: I0213 16:16:22.753455 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/551f2793-fe96-43d6-aae3-be601caad768-hubble-tls\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.754101 kubelet[2609]: I0213 16:16:22.753500 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-bpf-maps\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: 
\"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.754101 kubelet[2609]: I0213 16:16:22.753540 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cilium-cgroup\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.754101 kubelet[2609]: I0213 16:16:22.753577 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-host-proc-sys-net\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.754101 kubelet[2609]: I0213 16:16:22.753623 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/551f2793-fe96-43d6-aae3-be601caad768-clustermesh-secrets\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.755093 kubelet[2609]: I0213 16:16:22.753659 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cilium-run\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.755093 kubelet[2609]: I0213 16:16:22.753698 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/551f2793-fe96-43d6-aae3-be601caad768-cilium-config-path\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.755093 kubelet[2609]: I0213 16:16:22.753731 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-host-proc-sys-kernel\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.755093 kubelet[2609]: I0213 16:16:22.753773 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwlb5\" (UniqueName: \"kubernetes.io/projected/551f2793-fe96-43d6-aae3-be601caad768-kube-api-access-jwlb5\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.755093 kubelet[2609]: I0213 16:16:22.753810 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-lib-modules\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.755093 kubelet[2609]: I0213 16:16:22.753841 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-hostproc\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.756233 kubelet[2609]: I0213 16:16:22.753876 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cni-path\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: 
\"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.756233 kubelet[2609]: I0213 16:16:22.753915 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-xtables-lock\") pod \"551f2793-fe96-43d6-aae3-be601caad768\" (UID: \"551f2793-fe96-43d6-aae3-be601caad768\") " Feb 13 16:16:22.756233 kubelet[2609]: I0213 16:16:22.754075 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:16:22.756233 kubelet[2609]: I0213 16:16:22.755029 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:16:22.756233 kubelet[2609]: I0213 16:16:22.755449 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:16:22.761450 kubelet[2609]: I0213 16:16:22.759647 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551f2793-fe96-43d6-aae3-be601caad768-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 16:16:22.761450 kubelet[2609]: I0213 16:16:22.759784 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:16:22.766543 kubelet[2609]: I0213 16:16:22.765837 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551f2793-fe96-43d6-aae3-be601caad768-kube-api-access-jwlb5" (OuterVolumeSpecName: "kube-api-access-jwlb5") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "kube-api-access-jwlb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:16:22.766543 kubelet[2609]: I0213 16:16:22.766011 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:16:22.766543 kubelet[2609]: I0213 16:16:22.766075 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-hostproc" (OuterVolumeSpecName: "hostproc") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:16:22.766543 kubelet[2609]: I0213 16:16:22.766105 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cni-path" (OuterVolumeSpecName: "cni-path") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:16:22.769096 kubelet[2609]: I0213 16:16:22.768006 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:16:22.769096 kubelet[2609]: I0213 16:16:22.768091 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:16:22.769096 kubelet[2609]: I0213 16:16:22.768452 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551f2793-fe96-43d6-aae3-be601caad768-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:16:22.769096 kubelet[2609]: I0213 16:16:22.768109 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:16:22.772911 kubelet[2609]: I0213 16:16:22.772841 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551f2793-fe96-43d6-aae3-be601caad768-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "551f2793-fe96-43d6-aae3-be601caad768" (UID: "551f2793-fe96-43d6-aae3-be601caad768"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 16:16:22.855175 kubelet[2609]: I0213 16:16:22.855108 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d87rs\" (UniqueName: \"kubernetes.io/projected/d1b4ba2d-0739-4161-94a5-72d8d6297f0d-kube-api-access-d87rs\") pod \"d1b4ba2d-0739-4161-94a5-72d8d6297f0d\" (UID: \"d1b4ba2d-0739-4161-94a5-72d8d6297f0d\") " Feb 13 16:16:22.855821 kubelet[2609]: I0213 16:16:22.855772 2609 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1b4ba2d-0739-4161-94a5-72d8d6297f0d-cilium-config-path\") pod \"d1b4ba2d-0739-4161-94a5-72d8d6297f0d\" (UID: \"d1b4ba2d-0739-4161-94a5-72d8d6297f0d\") " Feb 13 16:16:22.855934 kubelet[2609]: I0213 16:16:22.855867 2609 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-bpf-maps\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.855934 kubelet[2609]: I0213 16:16:22.855885 2609 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cilium-cgroup\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.855934 kubelet[2609]: I0213 16:16:22.855904 2609 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/551f2793-fe96-43d6-aae3-be601caad768-hubble-tls\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.855934 kubelet[2609]: I0213 16:16:22.855925 2609 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-host-proc-sys-net\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.856221 kubelet[2609]: I0213 16:16:22.855941 2609 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/551f2793-fe96-43d6-aae3-be601caad768-clustermesh-secrets\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.856221 kubelet[2609]: I0213 16:16:22.856021 2609 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cilium-run\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.856221 kubelet[2609]: I0213 16:16:22.856040 2609 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-host-proc-sys-kernel\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.856221 kubelet[2609]: I0213 16:16:22.856051 2609 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jwlb5\" (UniqueName: \"kubernetes.io/projected/551f2793-fe96-43d6-aae3-be601caad768-kube-api-access-jwlb5\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.856221 kubelet[2609]: I0213 16:16:22.856061 2609 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/551f2793-fe96-43d6-aae3-be601caad768-cilium-config-path\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.856221 kubelet[2609]: I0213 16:16:22.856072 2609 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-hostproc\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.856221 kubelet[2609]: I0213 16:16:22.856083 2609 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-lib-modules\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.856221 kubelet[2609]: I0213 16:16:22.856092 2609 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-cni-path\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.856687 kubelet[2609]: I0213 16:16:22.856100 2609 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-xtables-lock\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.856687 kubelet[2609]: I0213 16:16:22.856122 2609 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/551f2793-fe96-43d6-aae3-be601caad768-etc-cni-netd\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.859880 kubelet[2609]: I0213 16:16:22.859742 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b4ba2d-0739-4161-94a5-72d8d6297f0d-kube-api-access-d87rs" (OuterVolumeSpecName: "kube-api-access-d87rs") pod "d1b4ba2d-0739-4161-94a5-72d8d6297f0d" (UID: "d1b4ba2d-0739-4161-94a5-72d8d6297f0d"). InnerVolumeSpecName "kube-api-access-d87rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:16:22.860586 kubelet[2609]: I0213 16:16:22.860499 2609 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b4ba2d-0739-4161-94a5-72d8d6297f0d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d1b4ba2d-0739-4161-94a5-72d8d6297f0d" (UID: "d1b4ba2d-0739-4161-94a5-72d8d6297f0d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 16:16:22.957321 kubelet[2609]: I0213 16:16:22.957104 2609 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d87rs\" (UniqueName: \"kubernetes.io/projected/d1b4ba2d-0739-4161-94a5-72d8d6297f0d-kube-api-access-d87rs\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:22.957321 kubelet[2609]: I0213 16:16:22.957163 2609 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1b4ba2d-0739-4161-94a5-72d8d6297f0d-cilium-config-path\") on node \"ci-4152.2.1-f-cf79e5d115\" DevicePath \"\"" Feb 13 16:16:23.057443 kubelet[2609]: I0213 16:16:23.055906 2609 scope.go:117] "RemoveContainer" containerID="a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66" Feb 13 16:16:23.075807 systemd[1]: Removed slice kubepods-burstable-pod551f2793_fe96_43d6_aae3_be601caad768.slice - libcontainer container kubepods-burstable-pod551f2793_fe96_43d6_aae3_be601caad768.slice. Feb 13 16:16:23.076011 systemd[1]: kubepods-burstable-pod551f2793_fe96_43d6_aae3_be601caad768.slice: Consumed 9.465s CPU time. 
Feb 13 16:16:23.083533 containerd[1476]: time="2025-02-13T16:16:23.083455533Z" level=info msg="RemoveContainer for \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\"" Feb 13 16:16:23.089382 containerd[1476]: time="2025-02-13T16:16:23.089112056Z" level=info msg="RemoveContainer for \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\" returns successfully" Feb 13 16:16:23.090875 kubelet[2609]: I0213 16:16:23.090375 2609 scope.go:117] "RemoveContainer" containerID="0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b" Feb 13 16:16:23.091718 systemd[1]: Removed slice kubepods-besteffort-podd1b4ba2d_0739_4161_94a5_72d8d6297f0d.slice - libcontainer container kubepods-besteffort-podd1b4ba2d_0739_4161_94a5_72d8d6297f0d.slice. Feb 13 16:16:23.095852 containerd[1476]: time="2025-02-13T16:16:23.095732888Z" level=info msg="RemoveContainer for \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\"" Feb 13 16:16:23.102744 containerd[1476]: time="2025-02-13T16:16:23.102670127Z" level=info msg="RemoveContainer for \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\" returns successfully" Feb 13 16:16:23.103532 kubelet[2609]: I0213 16:16:23.103381 2609 scope.go:117] "RemoveContainer" containerID="5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998" Feb 13 16:16:23.108875 containerd[1476]: time="2025-02-13T16:16:23.108572141Z" level=info msg="RemoveContainer for \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\"" Feb 13 16:16:23.117035 containerd[1476]: time="2025-02-13T16:16:23.115988820Z" level=info msg="RemoveContainer for \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\" returns successfully" Feb 13 16:16:23.123367 kubelet[2609]: I0213 16:16:23.122185 2609 scope.go:117] "RemoveContainer" containerID="871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3" Feb 13 16:16:23.130234 containerd[1476]: time="2025-02-13T16:16:23.129495399Z" level=info msg="RemoveContainer for \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\"" Feb 13 16:16:23.137497 containerd[1476]: time="2025-02-13T16:16:23.136320196Z" level=info msg="RemoveContainer for \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\" returns successfully" Feb 13 16:16:23.141991 kubelet[2609]: I0213 16:16:23.141928 2609 scope.go:117] "RemoveContainer" containerID="4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc" Feb 13 16:16:23.145571 containerd[1476]: time="2025-02-13T16:16:23.145519474Z" level=info msg="RemoveContainer for \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\"" Feb 13 16:16:23.149431 containerd[1476]: time="2025-02-13T16:16:23.149367887Z" level=info msg="RemoveContainer for \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\" returns successfully" Feb 13 16:16:23.150090 kubelet[2609]: I0213 16:16:23.150048 2609 scope.go:117] "RemoveContainer" containerID="a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66" Feb 13 16:16:23.150833 containerd[1476]: time="2025-02-13T16:16:23.150774538Z" level=error msg="ContainerStatus for \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\": not found" Feb 13 16:16:23.151399 kubelet[2609]: E0213 16:16:23.151238 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\": not found" containerID="a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66" Feb 13 16:16:23.157408 kubelet[2609]: I0213 16:16:23.157240 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66"} err="failed to get container status \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9f37800f3bed079182a0a3f4db71e94775227b2dde46f156d9b8489b4919f66\": not found" Feb 13 16:16:23.157408 kubelet[2609]: I0213 16:16:23.157366 2609 scope.go:117] "RemoveContainer" containerID="0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b" Feb 13 16:16:23.158714 containerd[1476]: time="2025-02-13T16:16:23.158130274Z" level=error msg="ContainerStatus for \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\": not found" Feb 13 16:16:23.158881 kubelet[2609]: E0213 16:16:23.158430 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\": not found" containerID="0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b" Feb 13 16:16:23.158881 kubelet[2609]: I0213 16:16:23.158484 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b"} err="failed to get container status \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d24e7409b4d09fbafd289f961e389bd313cf9829c3f06d109ac0caec3abfc2b\": not found" Feb 13 16:16:23.158881 kubelet[2609]: I0213 16:16:23.158502 2609 scope.go:117] "RemoveContainer" containerID="5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998" Feb 13 16:16:23.159584 containerd[1476]: time="2025-02-13T16:16:23.159391839Z" level=error msg="ContainerStatus for \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\": not found" Feb 13 16:16:23.159696 kubelet[2609]: E0213 16:16:23.159674 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\": not found" containerID="5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998" Feb 13 16:16:23.159774 kubelet[2609]: I0213 16:16:23.159719 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998"} err="failed to get container status \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ee681c86619d5801de5f724792007ddb223c722607158246eba5421ccc7e998\": not found" 
Feb 13 16:16:23.159774 kubelet[2609]: I0213 16:16:23.159734 2609 scope.go:117] "RemoveContainer" containerID="871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3" Feb 13 16:16:23.160355 kubelet[2609]: E0213 16:16:23.160168 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\": not found" containerID="871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3" Feb 13 16:16:23.160355 kubelet[2609]: I0213 16:16:23.160192 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3"} err="failed to get container status \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\": rpc error: code = NotFound desc = an error occurred when try to find container \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\": not found" Feb 13 16:16:23.160355 kubelet[2609]: I0213 16:16:23.160209 2609 scope.go:117] "RemoveContainer" containerID="4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc" Feb 13 16:16:23.160516 containerd[1476]: time="2025-02-13T16:16:23.160024723Z" level=error msg="ContainerStatus for \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"871a87906706d666a0471453d5b5e11f72f9a840879f78eedc99f3fb5a16edc3\": not found" Feb 13 16:16:23.161003 containerd[1476]: time="2025-02-13T16:16:23.160712195Z" level=error msg="ContainerStatus for \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\": not found" Feb 13 16:16:23.161118 kubelet[2609]: E0213 16:16:23.161093 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\": not found" containerID="4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc" Feb 13 16:16:23.161681 kubelet[2609]: I0213 16:16:23.161275 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc"} err="failed to get container status \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"4dc61e0b6ce7d10b9ebc9165e0d33027a2b2b08f08458fe0c76a9670fbc0e4bc\": not found" Feb 13 16:16:23.161681 kubelet[2609]: I0213 16:16:23.161300 2609 scope.go:117] "RemoveContainer" containerID="7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f" Feb 13 16:16:23.162771 containerd[1476]: time="2025-02-13T16:16:23.162739153Z" level=info msg="RemoveContainer for \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\"" Feb 13 16:16:23.169667 containerd[1476]: time="2025-02-13T16:16:23.169507864Z" level=info msg="RemoveContainer for \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\" returns successfully" Feb 13 16:16:23.170115 kubelet[2609]: I0213 16:16:23.170076 2609 scope.go:117] "RemoveContainer" 
containerID="7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f" Feb 13 16:16:23.170931 containerd[1476]: time="2025-02-13T16:16:23.170845025Z" level=error msg="ContainerStatus for \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\": not found" Feb 13 16:16:23.171182 kubelet[2609]: E0213 16:16:23.171095 2609 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\": not found" containerID="7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f" Feb 13 16:16:23.171182 kubelet[2609]: I0213 16:16:23.171161 2609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f"} err="failed to get container status \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7afcaa750f0bd462cf27f13b66090cf0eed193fb5c351afccd0fc03f83fb116f\": not found" Feb 13 16:16:23.379032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0-rootfs.mount: Deactivated successfully. Feb 13 16:16:23.379208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900-rootfs.mount: Deactivated successfully. Feb 13 16:16:23.379398 systemd[1]: var-lib-kubelet-pods-d1b4ba2d\x2d0739\x2d4161\x2d94a5\x2d72d8d6297f0d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd87rs.mount: Deactivated successfully. Feb 13 16:16:23.379504 systemd[1]: var-lib-kubelet-pods-551f2793\x2dfe96\x2d43d6\x2daae3\x2dbe601caad768-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djwlb5.mount: Deactivated successfully. Feb 13 16:16:23.379616 systemd[1]: var-lib-kubelet-pods-551f2793\x2dfe96\x2d43d6\x2daae3\x2dbe601caad768-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 16:16:23.379717 systemd[1]: var-lib-kubelet-pods-551f2793\x2dfe96\x2d43d6\x2daae3\x2dbe601caad768-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 16:16:23.599001 kubelet[2609]: I0213 16:16:23.597681 2609 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="551f2793-fe96-43d6-aae3-be601caad768" path="/var/lib/kubelet/pods/551f2793-fe96-43d6-aae3-be601caad768/volumes" Feb 13 16:16:23.599001 kubelet[2609]: I0213 16:16:23.598647 2609 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d1b4ba2d-0739-4161-94a5-72d8d6297f0d" path="/var/lib/kubelet/pods/d1b4ba2d-0739-4161-94a5-72d8d6297f0d/volumes" Feb 13 16:16:24.258002 sshd[4223]: Connection closed by 139.178.89.65 port 56286 Feb 13 16:16:24.260632 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:24.269626 systemd[1]: sshd@25-24.199.97.58:22-139.178.89.65:56286.service: Deactivated successfully. Feb 13 16:16:24.273604 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 16:16:24.275812 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit. Feb 13 16:16:24.279725 systemd-logind[1452]: Removed session 25. 
Feb 13 16:16:24.286524 systemd[1]: Started sshd@26-24.199.97.58:22-139.178.89.65:56300.service - OpenSSH per-connection server daemon (139.178.89.65:56300). Feb 13 16:16:24.402480 sshd[4381]: Accepted publickey for core from 139.178.89.65 port 56300 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:24.405511 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:24.414546 systemd-logind[1452]: New session 26 of user core. Feb 13 16:16:24.417275 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 16:16:25.261464 sshd[4383]: Connection closed by 139.178.89.65 port 56300 Feb 13 16:16:25.261318 sshd-session[4381]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:25.272471 systemd[1]: sshd@26-24.199.97.58:22-139.178.89.65:56300.service: Deactivated successfully. Feb 13 16:16:25.276834 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 16:16:25.280657 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit. Feb 13 16:16:25.291535 systemd[1]: Started sshd@27-24.199.97.58:22-139.178.89.65:34026.service - OpenSSH per-connection server daemon (139.178.89.65:34026). Feb 13 16:16:25.294375 systemd-logind[1452]: Removed session 26. Feb 13 16:16:25.321335 kubelet[2609]: I0213 16:16:25.321125 2609 topology_manager.go:215] "Topology Admit Handler" podUID="e584d06b-0f0b-439c-bedd-a1534b6314d9" podNamespace="kube-system" podName="cilium-8q9sn" Feb 13 16:16:25.323235 kubelet[2609]: E0213 16:16:25.323190 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="551f2793-fe96-43d6-aae3-be601caad768" containerName="apply-sysctl-overwrites" Feb 13 16:16:25.323502 kubelet[2609]: E0213 16:16:25.323486 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1b4ba2d-0739-4161-94a5-72d8d6297f0d" containerName="cilium-operator" Feb 13 16:16:25.323697 kubelet[2609]: E0213 16:16:25.323633 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="551f2793-fe96-43d6-aae3-be601caad768" containerName="clean-cilium-state" Feb 13 16:16:25.324224 kubelet[2609]: E0213 16:16:25.324198 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="551f2793-fe96-43d6-aae3-be601caad768" containerName="cilium-agent" Feb 13 16:16:25.324363 kubelet[2609]: E0213 16:16:25.324354 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="551f2793-fe96-43d6-aae3-be601caad768" containerName="mount-cgroup" Feb 13 16:16:25.324521 kubelet[2609]: E0213 16:16:25.324432 2609 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="551f2793-fe96-43d6-aae3-be601caad768" containerName="mount-bpf-fs" Feb 13 16:16:25.325299 kubelet[2609]: I0213 16:16:25.325261 2609 memory_manager.go:354] "RemoveStaleState removing state" podUID="551f2793-fe96-43d6-aae3-be601caad768" containerName="cilium-agent" Feb 13 16:16:25.325388 kubelet[2609]: I0213 16:16:25.325381 2609 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b4ba2d-0739-4161-94a5-72d8d6297f0d" containerName="cilium-operator" Feb 13 16:16:25.348862 systemd[1]: Created slice kubepods-burstable-pode584d06b_0f0b_439c_bedd_a1534b6314d9.slice - libcontainer container kubepods-burstable-pode584d06b_0f0b_439c_bedd_a1534b6314d9.slice. 
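The kubepods-burstable-pod...slice name above is derived from the pod UID e584d06b-0f0b-439c-bedd-a1534b6314d9 with its dashes replaced by underscores, wrapped in a QoS-class prefix and a .slice suffix. A small Go sketch reproducing the mapping visible in the log (illustrative only; the kubelet's systemd cgroup driver itself is not shown in this log):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceNameFor reproduces the naming visible in the log: the pod UID with
    // dashes replaced by underscores, between the kubepods-burstable-pod
    // prefix and a .slice suffix.
    func sliceNameFor(podUID string) string {
        return "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(sliceNameFor("e584d06b-0f0b-439c-bedd-a1534b6314d9"))
        // kubepods-burstable-pode584d06b_0f0b_439c_bedd_a1534b6314d9.slice
    }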
Feb 13 16:16:25.369810 sshd[4392]: Accepted publickey for core from 139.178.89.65 port 34026 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:25.374675 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:25.386312 systemd-logind[1452]: New session 27 of user core. Feb 13 16:16:25.388802 kubelet[2609]: I0213 16:16:25.388761 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e584d06b-0f0b-439c-bedd-a1534b6314d9-host-proc-sys-kernel\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390660 kubelet[2609]: I0213 16:16:25.390065 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e584d06b-0f0b-439c-bedd-a1534b6314d9-host-proc-sys-net\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390660 kubelet[2609]: I0213 16:16:25.390111 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e584d06b-0f0b-439c-bedd-a1534b6314d9-etc-cni-netd\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390660 kubelet[2609]: I0213 16:16:25.390133 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e584d06b-0f0b-439c-bedd-a1534b6314d9-clustermesh-secrets\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390660 kubelet[2609]: I0213 16:16:25.390162 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e584d06b-0f0b-439c-bedd-a1534b6314d9-cilium-ipsec-secrets\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390660 kubelet[2609]: I0213 16:16:25.390197 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e584d06b-0f0b-439c-bedd-a1534b6314d9-cilium-config-path\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390660 kubelet[2609]: I0213 16:16:25.390227 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e584d06b-0f0b-439c-bedd-a1534b6314d9-bpf-maps\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390974 kubelet[2609]: I0213 16:16:25.390254 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e584d06b-0f0b-439c-bedd-a1534b6314d9-hubble-tls\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390974 kubelet[2609]: I0213 16:16:25.390279 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e584d06b-0f0b-439c-bedd-a1534b6314d9-cilium-run\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390974 kubelet[2609]: I0213 16:16:25.390305 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e584d06b-0f0b-439c-bedd-a1534b6314d9-cilium-cgroup\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390974 kubelet[2609]: I0213 16:16:25.390324 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e584d06b-0f0b-439c-bedd-a1534b6314d9-xtables-lock\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390974 kubelet[2609]: I0213 16:16:25.390344 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e584d06b-0f0b-439c-bedd-a1534b6314d9-lib-modules\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.390974 kubelet[2609]: I0213 16:16:25.390365 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trpdn\" (UniqueName: \"kubernetes.io/projected/e584d06b-0f0b-439c-bedd-a1534b6314d9-kube-api-access-trpdn\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.391125 kubelet[2609]: I0213 16:16:25.390382 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e584d06b-0f0b-439c-bedd-a1534b6314d9-hostproc\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.391125 kubelet[2609]: I0213 16:16:25.390402 2609 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e584d06b-0f0b-439c-bedd-a1534b6314d9-cni-path\") pod \"cilium-8q9sn\" (UID: \"e584d06b-0f0b-439c-bedd-a1534b6314d9\") " pod="kube-system/cilium-8q9sn" Feb 13 16:16:25.391301 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 16:16:25.458499 sshd[4394]: Connection closed by 139.178.89.65 port 34026 Feb 13 16:16:25.459734 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:25.471375 systemd[1]: sshd@27-24.199.97.58:22-139.178.89.65:34026.service: Deactivated successfully. Feb 13 16:16:25.474622 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 16:16:25.477347 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit. Feb 13 16:16:25.485366 systemd[1]: Started sshd@28-24.199.97.58:22-139.178.89.65:34038.service - OpenSSH per-connection server daemon (139.178.89.65:34038). Feb 13 16:16:25.486691 systemd-logind[1452]: Removed session 27. Feb 13 16:16:25.562057 sshd[4400]: Accepted publickey for core from 139.178.89.65 port 34038 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:16:25.563855 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:16:25.569660 systemd-logind[1452]: New session 28 of user core. 
Feb 13 16:16:25.573330 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 16:16:25.666417 kubelet[2609]: E0213 16:16:25.666350 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:25.667666 containerd[1476]: time="2025-02-13T16:16:25.667588069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8q9sn,Uid:e584d06b-0f0b-439c-bedd-a1534b6314d9,Namespace:kube-system,Attempt:0,}" Feb 13 16:16:25.724715 containerd[1476]: time="2025-02-13T16:16:25.724470518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:16:25.725982 containerd[1476]: time="2025-02-13T16:16:25.725791823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:16:25.725982 containerd[1476]: time="2025-02-13T16:16:25.725853045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:16:25.726486 containerd[1476]: time="2025-02-13T16:16:25.726400393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:16:25.783481 systemd[1]: Started cri-containerd-36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8.scope - libcontainer container 36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8. Feb 13 16:16:25.827226 containerd[1476]: time="2025-02-13T16:16:25.827040006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8q9sn,Uid:e584d06b-0f0b-439c-bedd-a1534b6314d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\"" Feb 13 16:16:25.829636 kubelet[2609]: E0213 16:16:25.829302 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:25.834444 containerd[1476]: time="2025-02-13T16:16:25.833990173Z" level=info msg="CreateContainer within sandbox \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:16:25.863980 containerd[1476]: time="2025-02-13T16:16:25.863858779Z" level=info msg="CreateContainer within sandbox \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"18909d103c76f1d44982c31b505bff917d14d31b3d3b0f8a5541160fede8ebbd\"" Feb 13 16:16:25.866987 containerd[1476]: time="2025-02-13T16:16:25.865885276Z" level=info msg="StartContainer for \"18909d103c76f1d44982c31b505bff917d14d31b3d3b0f8a5541160fede8ebbd\"" Feb 13 16:16:25.905409 systemd[1]: Started cri-containerd-18909d103c76f1d44982c31b505bff917d14d31b3d3b0f8a5541160fede8ebbd.scope - libcontainer container 18909d103c76f1d44982c31b505bff917d14d31b3d3b0f8a5541160fede8ebbd. Feb 13 16:16:25.944913 containerd[1476]: time="2025-02-13T16:16:25.944842340Z" level=info msg="StartContainer for \"18909d103c76f1d44982c31b505bff917d14d31b3d3b0f8a5541160fede8ebbd\" returns successfully" Feb 13 16:16:25.960847 systemd[1]: cri-containerd-18909d103c76f1d44982c31b505bff917d14d31b3d3b0f8a5541160fede8ebbd.scope: Deactivated successfully. 
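The recurring "Nameserver limits exceeded" warnings report the nameserver line the kubelet actually applied, 67.207.67.2 67.207.67.3 67.207.67.2, which still contains a duplicate. A Go sketch of capping and de-duplicating such a list; the limit of three reflects the usual Linux resolver convention, the kubelet's exact logic is not shown in this log, and the dedupe step is this sketch's own addition:

    package main

    import (
        "fmt"
        "strings"
    )

    // applyNameserverLimit keeps at most `limit` distinct nameservers,
    // preserving their original order.
    func applyNameserverLimit(servers []string, limit int) []string {
        seen := map[string]bool{}
        var out []string
        for _, s := range servers {
            if seen[s] {
                continue
            }
            seen[s] = true
            out = append(out, s)
            if len(out) == limit {
                break
            }
        }
        return out
    }

    func main() {
        applied := strings.Fields("67.207.67.2 67.207.67.3 67.207.67.2")
        fmt.Println(applyNameserverLimit(applied, 3)) // [67.207.67.2 67.207.67.3]
    }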
Feb 13 16:16:26.005765 containerd[1476]: time="2025-02-13T16:16:26.005421741Z" level=info msg="shim disconnected" id=18909d103c76f1d44982c31b505bff917d14d31b3d3b0f8a5541160fede8ebbd namespace=k8s.io Feb 13 16:16:26.005765 containerd[1476]: time="2025-02-13T16:16:26.005508366Z" level=warning msg="cleaning up after shim disconnected" id=18909d103c76f1d44982c31b505bff917d14d31b3d3b0f8a5541160fede8ebbd namespace=k8s.io Feb 13 16:16:26.005765 containerd[1476]: time="2025-02-13T16:16:26.005524009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:26.101048 kubelet[2609]: E0213 16:16:26.101000 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:26.105509 containerd[1476]: time="2025-02-13T16:16:26.105434606Z" level=info msg="CreateContainer within sandbox \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:16:26.125227 containerd[1476]: time="2025-02-13T16:16:26.124860308Z" level=info msg="CreateContainer within sandbox \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a4512b3877c33197928b67d7cc49b53aebd5a99221653075a7b02570c759445\"" Feb 13 16:16:26.127699 containerd[1476]: time="2025-02-13T16:16:26.127648308Z" level=info msg="StartContainer for \"2a4512b3877c33197928b67d7cc49b53aebd5a99221653075a7b02570c759445\"" Feb 13 16:16:26.168296 systemd[1]: Started cri-containerd-2a4512b3877c33197928b67d7cc49b53aebd5a99221653075a7b02570c759445.scope - libcontainer container 2a4512b3877c33197928b67d7cc49b53aebd5a99221653075a7b02570c759445. Feb 13 16:16:26.224410 containerd[1476]: time="2025-02-13T16:16:26.224350179Z" level=info msg="StartContainer for \"2a4512b3877c33197928b67d7cc49b53aebd5a99221653075a7b02570c759445\" returns successfully" Feb 13 16:16:26.235656 systemd[1]: cri-containerd-2a4512b3877c33197928b67d7cc49b53aebd5a99221653075a7b02570c759445.scope: Deactivated successfully. Feb 13 16:16:26.272898 containerd[1476]: time="2025-02-13T16:16:26.272796116Z" level=info msg="shim disconnected" id=2a4512b3877c33197928b67d7cc49b53aebd5a99221653075a7b02570c759445 namespace=k8s.io Feb 13 16:16:26.273171 containerd[1476]: time="2025-02-13T16:16:26.272889017Z" level=warning msg="cleaning up after shim disconnected" id=2a4512b3877c33197928b67d7cc49b53aebd5a99221653075a7b02570c759445 namespace=k8s.io Feb 13 16:16:26.273171 containerd[1476]: time="2025-02-13T16:16:26.272926024Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:26.513390 systemd[1]: run-containerd-runc-k8s.io-36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8-runc.PkHfJn.mount: Deactivated successfully. 
Feb 13 16:16:26.744136 kubelet[2609]: E0213 16:16:26.744085 2609 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 16:16:27.104838 kubelet[2609]: E0213 16:16:27.104794 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:27.110332 containerd[1476]: time="2025-02-13T16:16:27.109538598Z" level=info msg="CreateContainer within sandbox \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 16:16:27.132842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161134131.mount: Deactivated successfully. Feb 13 16:16:27.138619 containerd[1476]: time="2025-02-13T16:16:27.138449525Z" level=info msg="CreateContainer within sandbox \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7a2241803bc3b7e55b18beb119d674549266ab86e98c2c82d7107e2f7678ced9\"" Feb 13 16:16:27.144420 containerd[1476]: time="2025-02-13T16:16:27.144218230Z" level=info msg="StartContainer for \"7a2241803bc3b7e55b18beb119d674549266ab86e98c2c82d7107e2f7678ced9\"" Feb 13 16:16:27.192423 systemd[1]: Started cri-containerd-7a2241803bc3b7e55b18beb119d674549266ab86e98c2c82d7107e2f7678ced9.scope - libcontainer container 7a2241803bc3b7e55b18beb119d674549266ab86e98c2c82d7107e2f7678ced9. Feb 13 16:16:27.247137 containerd[1476]: time="2025-02-13T16:16:27.247072786Z" level=info msg="StartContainer for \"7a2241803bc3b7e55b18beb119d674549266ab86e98c2c82d7107e2f7678ced9\" returns successfully" Feb 13 16:16:27.251774 systemd[1]: cri-containerd-7a2241803bc3b7e55b18beb119d674549266ab86e98c2c82d7107e2f7678ced9.scope: Deactivated successfully. Feb 13 16:16:27.289770 containerd[1476]: time="2025-02-13T16:16:27.289488374Z" level=info msg="shim disconnected" id=7a2241803bc3b7e55b18beb119d674549266ab86e98c2c82d7107e2f7678ced9 namespace=k8s.io Feb 13 16:16:27.289770 containerd[1476]: time="2025-02-13T16:16:27.289550426Z" level=warning msg="cleaning up after shim disconnected" id=7a2241803bc3b7e55b18beb119d674549266ab86e98c2c82d7107e2f7678ced9 namespace=k8s.io Feb 13 16:16:27.289770 containerd[1476]: time="2025-02-13T16:16:27.289559384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:27.515534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a2241803bc3b7e55b18beb119d674549266ab86e98c2c82d7107e2f7678ced9-rootfs.mount: Deactivated successfully. 
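Unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount1161134131.mount in this line, and the kubelet volume mounts earlier, use systemd's path escaping: '/' becomes '-' and bytes outside the allowed set become \xNN. A simplified Go sketch of that rule, covering only the cases seen in this log (systemd-escape --path implements the full behaviour):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath applies a simplified version of systemd's path escaping:
    // '/' becomes '-', and any byte that is not an ASCII letter, digit,
    // '_' or '.' becomes \xNN.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount1161134131") + ".mount")
        // var-lib-containerd-tmpmounts-containerd\x2dmount1161134131.mount
    }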
Feb 13 16:16:28.111443 kubelet[2609]: E0213 16:16:28.110022 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:28.115849 containerd[1476]: time="2025-02-13T16:16:28.115713238Z" level=info msg="CreateContainer within sandbox \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 16:16:28.143539 containerd[1476]: time="2025-02-13T16:16:28.143365119Z" level=info msg="CreateContainer within sandbox \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8e024dea3a59b8d8ae0ded538d6623d9410ac514d476af8faced7fbe393c748a\"" Feb 13 16:16:28.144274 containerd[1476]: time="2025-02-13T16:16:28.144242978Z" level=info msg="StartContainer for \"8e024dea3a59b8d8ae0ded538d6623d9410ac514d476af8faced7fbe393c748a\"" Feb 13 16:16:28.201250 systemd[1]: Started cri-containerd-8e024dea3a59b8d8ae0ded538d6623d9410ac514d476af8faced7fbe393c748a.scope - libcontainer container 8e024dea3a59b8d8ae0ded538d6623d9410ac514d476af8faced7fbe393c748a. Feb 13 16:16:28.242094 systemd[1]: cri-containerd-8e024dea3a59b8d8ae0ded538d6623d9410ac514d476af8faced7fbe393c748a.scope: Deactivated successfully. Feb 13 16:16:28.244252 containerd[1476]: time="2025-02-13T16:16:28.243793046Z" level=info msg="StartContainer for \"8e024dea3a59b8d8ae0ded538d6623d9410ac514d476af8faced7fbe393c748a\" returns successfully" Feb 13 16:16:28.278011 containerd[1476]: time="2025-02-13T16:16:28.277699416Z" level=info msg="shim disconnected" id=8e024dea3a59b8d8ae0ded538d6623d9410ac514d476af8faced7fbe393c748a namespace=k8s.io Feb 13 16:16:28.278011 containerd[1476]: time="2025-02-13T16:16:28.277761521Z" level=warning msg="cleaning up after shim disconnected" id=8e024dea3a59b8d8ae0ded538d6623d9410ac514d476af8faced7fbe393c748a namespace=k8s.io Feb 13 16:16:28.278011 containerd[1476]: time="2025-02-13T16:16:28.277769689Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:16:28.513573 systemd[1]: run-containerd-runc-k8s.io-8e024dea3a59b8d8ae0ded538d6623d9410ac514d476af8faced7fbe393c748a-runc.ehidlJ.mount: Deactivated successfully. Feb 13 16:16:28.514206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e024dea3a59b8d8ae0ded538d6623d9410ac514d476af8faced7fbe393c748a-rootfs.mount: Deactivated successfully. 
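The clean-cilium-state step here runs after mount-bpf-fs, whose name suggests it ensures the BPF filesystem is mounted before the agent starts; the log itself does not show what it mounts. As a hedged illustration of how such a check can be done (the path and filesystem type are assumptions about a typical setup, not taken from this log):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // bpffsMounted scans /proc/mounts for an entry of filesystem type "bpf",
    // the usual sign that the BPF filesystem is mounted (typically at /sys/fs/bpf).
    func bpffsMounted() (bool, string) {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            return false, ""
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // Format: device mountpoint fstype options dump pass
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[2] == "bpf" {
                return true, fields[1]
            }
        }
        return false, ""
    }

    func main() {
        ok, where := bpffsMounted()
        fmt.Println("bpffs mounted:", ok, where)
    }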
Feb 13 16:16:29.117000 kubelet[2609]: E0213 16:16:29.116236 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:29.121895 containerd[1476]: time="2025-02-13T16:16:29.120699402Z" level=info msg="CreateContainer within sandbox \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 16:16:29.142076 containerd[1476]: time="2025-02-13T16:16:29.141891081Z" level=info msg="CreateContainer within sandbox \"36bd29458aa9dfbe90b9bdaa79b417c6a3c59bfc2589e671ea728a831401f0f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c96d6c424f57566faa1004582e14c8a9fbcff693c1c3fbf8ce2bc50f30beb45f\"" Feb 13 16:16:29.143702 containerd[1476]: time="2025-02-13T16:16:29.143022600Z" level=info msg="StartContainer for \"c96d6c424f57566faa1004582e14c8a9fbcff693c1c3fbf8ce2bc50f30beb45f\"" Feb 13 16:16:29.199276 systemd[1]: Started cri-containerd-c96d6c424f57566faa1004582e14c8a9fbcff693c1c3fbf8ce2bc50f30beb45f.scope - libcontainer container c96d6c424f57566faa1004582e14c8a9fbcff693c1c3fbf8ce2bc50f30beb45f. Feb 13 16:16:29.251730 containerd[1476]: time="2025-02-13T16:16:29.251500721Z" level=info msg="StartContainer for \"c96d6c424f57566faa1004582e14c8a9fbcff693c1c3fbf8ce2bc50f30beb45f\" returns successfully" Feb 13 16:16:29.847180 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 16:16:30.129396 kubelet[2609]: E0213 16:16:30.129259 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:30.161824 kubelet[2609]: I0213 16:16:30.161761 2609 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8q9sn" podStartSLOduration=5.161665455 podStartE2EDuration="5.161665455s" podCreationTimestamp="2025-02-13 16:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:16:30.160902941 +0000 UTC m=+118.804158378" watchObservedRunningTime="2025-02-13 16:16:30.161665455 +0000 UTC m=+118.804920894" Feb 13 16:16:30.596993 kubelet[2609]: E0213 16:16:30.594821 2609 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-262cg" podUID="ab26d23e-0653-4e6a-b70b-820c3adcb1a5" Feb 13 16:16:31.542729 containerd[1476]: time="2025-02-13T16:16:31.542454739Z" level=info msg="StopPodSandbox for \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\"" Feb 13 16:16:31.542729 containerd[1476]: time="2025-02-13T16:16:31.542609568Z" level=info msg="TearDown network for sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" successfully" Feb 13 16:16:31.542729 containerd[1476]: time="2025-02-13T16:16:31.542666913Z" level=info msg="StopPodSandbox for \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" returns successfully" Feb 13 16:16:31.546052 containerd[1476]: time="2025-02-13T16:16:31.545166996Z" level=info msg="RemovePodSandbox for \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\"" Feb 13 16:16:31.546052 
containerd[1476]: time="2025-02-13T16:16:31.545231227Z" level=info msg="Forcibly stopping sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\"" Feb 13 16:16:31.546052 containerd[1476]: time="2025-02-13T16:16:31.545330345Z" level=info msg="TearDown network for sandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" successfully" Feb 13 16:16:31.556836 containerd[1476]: time="2025-02-13T16:16:31.556507342Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:16:31.556836 containerd[1476]: time="2025-02-13T16:16:31.556641917Z" level=info msg="RemovePodSandbox \"1175625abf1391cf66bdec6ad5830c58780f58dce0f12507656377bb60079900\" returns successfully" Feb 13 16:16:31.558112 containerd[1476]: time="2025-02-13T16:16:31.557644493Z" level=info msg="StopPodSandbox for \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\"" Feb 13 16:16:31.558112 containerd[1476]: time="2025-02-13T16:16:31.557766444Z" level=info msg="TearDown network for sandbox \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\" successfully" Feb 13 16:16:31.558112 containerd[1476]: time="2025-02-13T16:16:31.557782826Z" level=info msg="StopPodSandbox for \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\" returns successfully" Feb 13 16:16:31.559764 containerd[1476]: time="2025-02-13T16:16:31.558485245Z" level=info msg="RemovePodSandbox for \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\"" Feb 13 16:16:31.559764 containerd[1476]: time="2025-02-13T16:16:31.558561540Z" level=info msg="Forcibly stopping sandbox \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\"" Feb 13 16:16:31.559764 containerd[1476]: time="2025-02-13T16:16:31.558646980Z" level=info msg="TearDown network for sandbox \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\" successfully" Feb 13 16:16:31.565759 containerd[1476]: time="2025-02-13T16:16:31.565691205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:16:31.567028 containerd[1476]: time="2025-02-13T16:16:31.566485153Z" level=info msg="RemovePodSandbox \"0462d3f37889d4861cbbe7d14ff5297afbff16571532a83719db86f8fdb450a0\" returns successfully" Feb 13 16:16:31.672294 kubelet[2609]: E0213 16:16:31.672251 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:32.594847 kubelet[2609]: E0213 16:16:32.594796 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:33.678812 kubelet[2609]: E0213 16:16:33.674577 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:33.681673 systemd-networkd[1387]: lxc_health: Link UP Feb 13 16:16:33.711228 systemd-networkd[1387]: lxc_health: Gained carrier Feb 13 16:16:34.146706 kubelet[2609]: E0213 16:16:34.146433 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:34.492856 systemd[1]: run-containerd-runc-k8s.io-c96d6c424f57566faa1004582e14c8a9fbcff693c1c3fbf8ce2bc50f30beb45f-runc.clKCXA.mount: Deactivated successfully. Feb 13 16:16:34.596722 kubelet[2609]: E0213 16:16:34.596673 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:35.149884 kubelet[2609]: E0213 16:16:35.149832 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 16:16:35.469194 systemd-networkd[1387]: lxc_health: Gained IPv6LL Feb 13 16:16:39.253809 systemd[1]: run-containerd-runc-k8s.io-c96d6c424f57566faa1004582e14c8a9fbcff693c1c3fbf8ce2bc50f30beb45f-runc.Ibq6yx.mount: Deactivated successfully. Feb 13 16:16:41.453526 systemd[1]: run-containerd-runc-k8s.io-c96d6c424f57566faa1004582e14c8a9fbcff693c1c3fbf8ce2bc50f30beb45f-runc.hBaZVo.mount: Deactivated successfully. Feb 13 16:16:41.530285 sshd[4407]: Connection closed by 139.178.89.65 port 34038 Feb 13 16:16:41.531525 sshd-session[4400]: pam_unix(sshd:session): session closed for user core Feb 13 16:16:41.537470 systemd[1]: sshd@28-24.199.97.58:22-139.178.89.65:34038.service: Deactivated successfully. Feb 13 16:16:41.539983 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 16:16:41.541011 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit. Feb 13 16:16:41.542429 systemd-logind[1452]: Removed session 28.
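systemd-networkd reports the lxc_health interface created for the new Cilium agent coming up, gaining carrier and then an IPv6 link-local address ("Gained IPv6LL"). A hedged Go sketch that lists interfaces holding an fe80::/10 address, which is the condition that message describes (it only inspects whatever host it runs on):

    package main

    import (
        "fmt"
        "net"
    )

    // Report interfaces that have an IPv6 link-local address (fe80::/10),
    // the state systemd-networkd logs as "Gained IPv6LL" above.
    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, iface := range ifaces {
            addrs, err := iface.Addrs()
            if err != nil {
                continue
            }
            for _, a := range addrs {
                ipnet, ok := a.(*net.IPNet)
                if ok && ipnet.IP.To4() == nil && ipnet.IP.IsLinkLocalUnicast() {
                    fmt.Printf("%s has IPv6 link-local %s\n", iface.Name, ipnet.IP)
                }
            }
        }
    }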